Consider the basic framework of AlphaGo and experiment with learning and playing strength on
Ultimate Tic-Tac-Toe (also called "super Tic-Tac-Toe"), 3D Tic-Tac-Toe, or Hex (the board game).
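A possible starting point for the AlphaGo-style project, stripped to its core: plain UCT Monte-Carlo tree search on ordinary 3x3 Tic-Tac-Toe, with random rollouts standing in for the policy/value network. Everything here (board encoding, `Node` class, exploration constant) is an illustrative assumption, not the AlphaGo implementation.

```python
import math, random

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

def moves(b):
    return [i for i, v in enumerate(b) if v == 0]

def play(b, m, p):
    return b[:m] + (p,) + b[m+1:]

class Node:
    def __init__(self, board, player):
        self.board, self.player = board, player   # player to move here
        self.children = {}                        # move -> Node
        self.visits, self.wins = 0, 0.0

def rollout(board, player, rng):
    """Play random moves to the end; return +1, -1, or 0 for a draw."""
    while True:
        w = winner(board)
        if w:
            return w
        ms = moves(board)
        if not ms:
            return 0
        board = play(board, rng.choice(ms), player)
        player = -player

def mcts(board, player, iters=2000, c=1.4, seed=0):
    rng = random.Random(seed)
    root = Node(board, player)
    for _ in range(iters):
        node, path = root, [root]
        while True:                               # selection / expansion
            ms = moves(node.board)
            if winner(node.board) or not ms:
                break
            untried = [m for m in ms if m not in node.children]
            if untried:                           # expand one new child
                m = rng.choice(untried)
                child = Node(play(node.board, m, node.player), -node.player)
                node.children[m] = child
                path.append(child)
                node = child
                break
            parent = node                         # descend by UCT score
            node = max(parent.children.values(),
                       key=lambda n: n.wins / n.visits
                       + c * math.sqrt(math.log(parent.visits) / n.visits))
            path.append(node)
        w = winner(node.board)
        result = w if w else rollout(node.board, node.player, rng)
        for n in path:                            # backpropagation
            n.visits += 1
            mover = -n.player                     # the player who moved into n
            n.wins += 1.0 if result == mover else 0.5 if result == 0 else 0.0
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

Scaling this to Ultimate Tic-Tac-Toe or Hex means replacing the board functions and, in the AlphaGo direction, replacing `rollout` with a learned value estimate.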
Image recognition using deep neural networks. Generative Adversarial Networks (GANs).
Implement predator-prey agents in a virtual environment (genetic algorithms).
Learning to Play Checkers
Apply a genetic algorithm to the problem of automatic generation of computer programs.
Apply a genetic algorithm to the problem of learning natural language grammars.
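The genetic-algorithm projects above share one loop: selection, crossover, mutation, repeat. A minimal generic sketch, with tournament selection and one-point crossover; the OneMax fitness (count of 1-bits) is a stand-in for the real objective, which would instead score evolved programs, grammars, or creatures.

```python
import random

def evolve(fitness, genome_len=30, pop_size=50, gens=100,
           p_mut=0.02, tourney=3, seed=0):
    """Generic GA over bitstrings: tournament selection, one-point
    crossover, per-bit mutation. Returns the best genome in the
    final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        def select():                     # tournament: best of k random picks
            return max(rng.sample(pop, tourney), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = select(), select()
            cut = rng.randrange(1, genome_len)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax stand-in: fitness is simply the number of 1-bits.
best = evolve(fitness=sum, seed=1)
```

For program or grammar evolution the genome encoding and crossover operator are the real design work; the loop itself stays the same.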
Build a system that uses heuristic search (with minimax and alpha-beta pruning) to play Connect-4. Evaluate it through play against versions of itself with different heuristics and search effort. Add learning to tune the heuristics.
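A sketch of the search core, under assumptions of my own: a negamax formulation of minimax with alpha-beta pruning, a column-list board representation, and a deliberately weak placeholder evaluation (net material in the centre column). Designing stronger heuristics is the actual project.

```python
ROWS, COLS = 6, 7

def legal(board):        # board: 7 columns, each a bottom-up list of +1/-1
    return [c for c in range(COLS) if len(board[c]) < ROWS]

def drop(board, col, p):
    b = [column[:] for column in board]
    b[col].append(p)
    return b

def cell(board, r, c):
    return board[c][r] if r < len(board[c]) else 0

def won(board, p):       # does player p have four in a line?
    for c in range(COLS):
        for r in range(ROWS):
            if cell(board, r, c) != p:
                continue
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if all(0 <= r + i*dr < ROWS and 0 <= c + i*dc < COLS
                       and cell(board, r + i*dr, c + i*dc) == p
                       for i in range(4)):
                    return True
    return False

def heuristic(board, p):
    return p * sum(board[3])      # placeholder: centre-column control

def alphabeta(board, p, depth, alpha=-10**9, beta=10**9):
    """Return (score, move) from player p's perspective (negamax form)."""
    if won(board, -p):
        return -10**6, None       # the opponent's last move won
    ms = legal(board)
    if not ms:
        return 0, None            # board full: draw
    if depth == 0:
        return heuristic(board, p), None
    best = None
    for m in ms:
        s, _ = alphabeta(drop(board, m, p), -p, depth - 1, -beta, -alpha)
        s = -s
        if s > alpha:
            alpha, best = s, m
        if alpha >= beta:
            break                 # beta cutoff: opponent avoids this line
    return alpha, best
```

Self-play evaluation then amounts to pitting `alphabeta` against itself with different `heuristic` functions and `depth` settings.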
Build and train a system that plays Connect-4 using a deep neural network. Train it using a minimax player as a teacher, and evaluate its performance.
A chess endgame player. An interesting variant is to design a method that learns endgame rules from examples and to compare it with hand-crafted chess endgame players.
Build a suite of neural network algorithms; test them on selected datasets from the machine learning dataset archive; determine why they did or did not work.
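A minimal first member for such a suite, with hyper-parameters that are illustrative choices: a one-hidden-layer sigmoid network trained by plain batch backpropagation on squared error, fitted here to XOR rather than an archive dataset.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=8000, seed=0):
    """Train a one-hidden-layer sigmoid net by batch gradient descent
    on mean squared error; returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1));          b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                     # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # error signal at output
        d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated to hidden
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return lambda Q: sig(sig(Q @ W1 + b1) @ W2 + b2)

X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=float)
y = np.array([[0],[1],[1],[0]], dtype=float)     # XOR targets
predict = train_mlp(X, y)
```

The "determine why they did or did not work" part of the project starts exactly where this sketch stops: varying `hidden`, `lr`, and the dataset, and inspecting the failures.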
A theorem-proving system for some (small) subset of mathematics.
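The smallest honest instance of such a system is a resolution-refutation prover for propositional clauses; the representation below (clauses as frozensets of string literals, `~` for negation) is one convenient choice, not the only one.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses. A clause is a frozenset of
    literals; a literal is a string, negation written with a '~' prefix."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith('~') else '~' + lit
        if neg in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return out

def entails(kb, query):
    """Resolution refutation: add the negated query and search for the
    empty clause. Terminates because the literal set is finite."""
    neg = query[1:] if query.startswith('~') else '~' + query
    clauses = set(kb) | {frozenset([neg])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause: contradiction
                    return True
                new.add(r)
        if new <= clauses:             # fixed point: no refutation exists
            return False
        clauses |= new
```

Growing this toward "a small subset of mathematics" means moving to first-order clauses, which adds unification to `resolve` but keeps the refutation loop intact.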
A program that generates automatic crossword puzzles, starting from a dictionary and an empty board.
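The core of the crossword generator is a constraint-satisfaction search. A hedged sketch: each slot is the list of grid coordinates a word occupies, crossing slots share coordinates, and backtracking is the only machinery; word-reuse restrictions, slot ordering heuristics, and fill quality are left to the project.

```python
def fill(slots, words, grid=None):
    """Fill each slot with a dictionary word consistent with every
    crossing letter already placed. Returns {cell: letter} or None."""
    grid = dict(grid or {})
    if not slots:
        return grid                       # every slot filled consistently
    slot, rest = slots[0], slots[1:]
    for w in words:
        if len(w) != len(slot):
            continue
        if any(grid.get(cell, ch) != ch for cell, ch in zip(slot, w)):
            continue                      # clashes with a crossing word
        g2 = dict(grid)
        g2.update(zip(slot, w))
        ans = fill(rest, words, g2)
        if ans is not None:
            return ans
    return None                           # dead end: backtrack
```

A tiny example: an across slot `[(0,0),(0,1),(0,2)]` and a down slot `[(0,0),(1,0),(2,0)]` cross at `(0,0)`, so their words must agree on that letter.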
Recreate from its specifications the reinforcement learning (neural network) system of Tesauro (1992), which learns to play backgammon by playing games against itself.
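A scaled-down stand-in for that setup, with my own simplifications loudly flagged: Tic-Tac-Toe instead of backgammon, and a tabular value function instead of Tesauro's neural network. The self-play loop and the TD(0) backups are the part that carries over; `V[s]` estimates the probability that X (+1) eventually wins from state `s`.

```python
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return 0

def train(episodes=20000, alpha=0.1, eps=0.1, seed=0):
    """Self-play with epsilon-greedy moves over afterstate values,
    followed by TD(0) backups from the game's outcome."""
    rng = random.Random(seed)
    V = {}
    for _ in range(episodes):
        board, player, hist = (0,)*9, 1, []
        while True:
            w = winner(board)
            ms = [i for i, v in enumerate(board) if v == 0]
            if w or not ms:
                target = 1.0 if w == 1 else 0.0 if w == -1 else 0.5
                break
            def value(m):                 # value of the resulting afterstate
                nb = board[:m] + (player,) + board[m+1:]
                return V.get(nb, 0.5)
            if rng.random() < eps:        # explore
                m = rng.choice(ms)
            else:                         # exploit: X maximises, O minimises
                m = (max if player == 1 else min)(ms, key=value)
            board = board[:m] + (player,) + board[m+1:]
            hist.append(board)
            player = -player
        for s in reversed(hist):          # TD(0): each state moves toward
            v = V.get(s, 0.5)             # the value of its successor
            v += alpha * (target - v)
            V[s] = v
            target = v
    return V
```

Replacing the table `V` with a network evaluated on a board encoding, and the backup with a gradient step, recovers the shape of Tesauro's system.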
A reactive, rule-based system that plays Tetris.
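The "reactive, rule-based" shape can be sketched on a radically simplified Tetris in which the state is just the column heights and every piece is a single block; all names and rules here are illustrative assumptions, not a playable Tetris. Rules are (condition, action) pairs tried top to bottom, and the first one whose condition holds fires.

```python
RULES = [
    # if one column towers over the rest, level the surface out
    (lambda h: max(h) - min(h) >= 3, lambda h: h.index(min(h))),
    # default: drop in the leftmost column
    (lambda h: True, lambda h: 0),
]

def choose_column(heights):
    """Reactive control: scan the rule list and fire the first match."""
    for condition, action in RULES:
        if condition(heights):
            return action(heights)

def drop_block(heights):
    """Apply the chosen action to the world state."""
    col = choose_column(heights)
    heights = list(heights)
    heights[col] += 1
    return heights
```

The project is then to enrich the state (holes, real piece shapes, rotations) and the rule list while keeping this fire-the-first-matching-rule control loop.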
Re-implement Samuel's checkers-playing program.