- Instructor: Prof. Haym Hirsh, haym.hirsh@cornell.edu, Gates 352
- Office Hours: After class or by arrangement
- TAs:
- Peter Mocarski (pmm248@cornell.edu) (Head TA)
- Cameron Drew Chafetz (cdc97@cornell.edu)
- Xilun Chen (xc253@cornell.edu)
- Radhika Priya Chinni (rpc222@cornell.edu)
- Anusha Chowdhury (ac2633@cornell.edu)
- Abhimanyu Amit Gupta (aag245@cornell.edu)
- Hyun Kyo Jung (hj283@cornell.edu)
- Hannah Lee (hl838@cornell.edu)
- Lauren Meagan Lin (lml253@cornell.edu)
- Kaushik Murali (km693@cornell.edu)
- Saksham Papreja (sp2449@cornell.edu)
- Vaidehi Patel (vhp25@cornell.edu)
- Yuzhao Shen (ys525@cornell.edu)
- Claire Tang (yt338@cornell.edu)
- Danjing Yang (dy92@cornell.edu)
- Kevin Ye (ky242@cornell.edu)
- Joo Ho Yeo (jy396@cornell.edu)
- Jessica Zhou (jz499@cornell.edu)
- May Zhou (mz278@cornell.edu)

- Textbook: *Artificial Intelligence: A Modern Approach* by Stuart Russell and Peter Norvig, 3rd Edition
- Prerequisites:
- CS 2110/ENGRD 2110
- CS 2800 - particularly probability, first-order logic
- Calculus

- Class meeting time: MWF 1:25-2:15, Olin 155
- Online:
- Syllabus
- Course web page (takes you to this page)
- Piazza
- Gradescope, entry code 9BPY45

- Grade:
- 14% +/- 5%: Homeworks
- 35% +/- 5%: Prelim (Friday, March 23, 1:25-2:15, Kennedy 116 (Call Auditorium))
- 50% +/- 5%: Final (Monday, May 21, 2:00-4:30)
- 1%: Course evaluation
- Extra credit: used if you are borderline between two grades
- All regrade requests must be submitted through Gradescope within seven days of grades being released.

- Enrollment information: If you are interested in the class and are not enrolled, please get on the waitlist.

Extra credit opportunities:

- Tu, Feb 27, 4:15pm, Gates G01: Nika Haghtalab, CMU, "Machine learning by the people, for the people"
- Th, Mar 1, 4:15pm, Gates G01: Jacob Steinhardt, Stanford, "Provably Secure Machine Learning"
- Fr, Mar 2, 12:15pm, Gates G01: Wei-Lun (Harry) Chao, USC, "Transfer learning towards intelligent systems in the wild"
- Th, Mar 8, 4:15pm, Gates G01: Bo Zhu, MIT, "Exploring and Understanding Limits of Physical Systems"
- Fr, Mar 9, 12:15pm, Gates G01: Byron Boots, Georgia Tech, "Learning Perception and Control for Agile Off-Road Autonomous Driving"
- Fr, Mar 23, 12:15pm, Gates G01: Jesse Thomason, UT Austin, "Continuously Improving Embodied Natural Language Understanding through Human-Robot Conversation"

Course schedule:

- Week 1: Introduction to AI and to course

- Week 2: State space search (Textbook, Chapters 3 and 4)
- You can find further information on the state space representations of the problems discussed in class at the following locations:
- 8 Puzzle:
- Part of the state space (actions not shown - each undirected edge corresponds to two directed edges, one in each direction, labeled by the action for that transition between states)
- Information on the 15 puzzle (from which the 8 puzzle was derived)
- A book on the history of the 15 puzzle.
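To make the state space above concrete, here is a small sketch (my own illustration, not from the course materials) of an 8-puzzle successor function. States are 9-tuples in row-major order with 0 marking the blank, and each directed edge corresponds to sliding the blank one square:

```python
def successors(state):
    """Yield (action, next_state) pairs for a 3x3 8-puzzle state.

    `state` is a tuple of 9 ints in row-major order; 0 marks the blank.
    Each action names the direction the blank moves.
    """
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            nxt = list(state)
            swap = r * 3 + c
            # slide the tile at (r, c) into the blank position
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            yield action, tuple(nxt)
```

A state with the blank in the center has four successors; a corner blank has two, which is why the undirected picture of the space has vertices of varying degree.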

- 8 Queens Puzzle:
- The "Exercise in algorithm design" section of the Wikipedia article on the 8 Queens Puzzle explains two of the search space representations that algorithms might use to solve the problem, and has a nice animation of a search algorithm seeking a solution.
- An example of the search space for the 4-Queens Puzzle with 16 actions, four per queen: each queen corresponds to a row, and its four actions place that queen in one of the four columns (so, for example, queen 3 has an action placing it at (3,1), another at (3,2), and so on).
- WolframMathWorld has a discussion of the math of the N-Queens Puzzle.
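The row-per-queen representation above can be sketched in a few lines: each of the 4^4 = 256 complete placements assigns one column to each row's queen, and a solution is a placement with no attacking pair. This is a hypothetical illustration, not code from the course:

```python
from itertools import product

def conflicts(cols):
    """Count attacking pairs for a complete placement, where cols[r] is the
    column of the queen in row r (the row-per-queen representation)."""
    n = len(cols)
    return sum(
        1
        for r1 in range(n)
        for r2 in range(r1 + 1, n)
        # same column, or same diagonal (column gap equals row gap)
        if cols[r1] == cols[r2] or abs(cols[r1] - cols[r2]) == r2 - r1
    )

# Brute-force scan of all 4^4 = 256 complete states of the 4-Queens Puzzle.
solutions = [c for c in product(range(4), repeat=4) if conflicts(c) == 0]
```

The scan finds exactly two solutions, which are mirror images of each other; for larger N this enumeration blows up as N^N, which is why the search formulations discussed in class matter.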

- Homework 1:
- Due Wednesday, February 7, 1:25PM
- This homework will not be graded - you get full credit for submitting it. Its purpose is to test you on material from the prerequisite classes, to give you an idea of what I'm expecting you to already know.
- To submit your homework you will need to do two things:
- Go to goo.gl/forms/PWw0OJvWIFICsX6V2 and submit your name, Net Id, and answers to the three questions
- Go to Gradescope and enter your solutions to the three questions. You can submit it as a document that you prepare separately or as a screen shot of your Google Forms submission above.

- Homework 1 Solutions

- Week 3: State Space Search (Chapter 3)
- (Recursive) Depth First Search, Depth-Bounded Depth-First Search, and Iterative Deepening Search
- (Non-Recursive) Depth First Search and Breadth First Search
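The combination of depth-bounded DFS and iterative deepening covered this week can be sketched as follows (a minimal illustration in Python, assuming a goal test and successor function; not the course's own code):

```python
def depth_limited_dfs(state, goal_test, successors, limit):
    """Recursive depth-bounded DFS; returns a path to a goal, or None."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for child in successors(state):
        path = depth_limited_dfs(child, goal_test, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, goal_test, successors, max_depth=50):
    """Run depth-bounded DFS with limits 0, 1, 2, ... until a goal is found.
    Re-expands shallow nodes, but keeps DFS's linear memory while gaining
    BFS's guarantee of finding a shallowest solution."""
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(start, goal_test, successors, limit)
        if path is not None:
            return path
    return None
```

Because each iteration restarts from scratch, the repeated work is dominated by the final iteration, which is why iterative deepening's asymptotic time matches breadth-first search.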

- Week 4: State Space Search (Chapter 3)
- Depth First Search with checking for cycles
- Comparison of properties of search methods

This is a variant of Figure 3.21 of the textbook. Because the book numbers tree levels starting with the root at 0 rather than 1, the exponents differ by 1. Also, the book gives complexity analyses for breadth-first search and iterative deepening assuming a solution exists at depth d, whereas the complexity here is for the worst case when there is no solution, so you go to the maximum depth m.
- Homework 2, due Friday, February 23, 1:25pm
- Homework 2 solutions
- Skeletal Best First Search algorithm
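A skeletal best-first search like the one linked above can be written around a priority queue ordered by an evaluation function f (this sketch and its parameter names are my own; with f = a heuristic it is greedy best-first, with f = path cost it is uniform-cost search):

```python
import heapq
from itertools import count

def best_first_search(start, goal_test, successors, f):
    """Expand the frontier node minimizing f(state), skeleton version."""
    tiebreak = count()  # avoids comparing states when f-values tie
    frontier = [(f(start), next(tiebreak), start, [start])]
    visited = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for child in successors(state):
            if child not in visited:
                heapq.heappush(
                    frontier, (f(child), next(tiebreak), child, path + [child])
                )
    return None
```

The `visited` set is the cycle check from this week's lectures; dropping it gives tree search, which can revisit states indefinitely.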

- Week 5: Informed Search (Chapters 3 and 4)

- Week 6: Adversarial Search (Chapter 5)
- Game Tree Search

I've added in the condition that if game.tie is true then 0 is returned.
- Game Tree Search with Alpha Beta Pruning
- Homework 3, due Tuesday, March 13, 4:00pm

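Game tree search with alpha-beta pruning can be sketched as below. The `game` interface (win/lose/tie/moves) is hypothetical, chosen to mirror the tie condition mentioned above; terminal values are +1, -1, and 0:

```python
import math

def alphabeta(state, game, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Minimax value of `state` with alpha-beta pruning.

    `game` is an assumed interface with win(state), lose(state), tie(state),
    and moves(state); if game.tie is true, 0 is returned, as in the notes.
    """
    if game.win(state):
        return 1
    if game.lose(state):
        return -1
    if game.tie(state):
        return 0
    if maximizing:
        value = -math.inf
        for child in game.moves(state):
            value = max(value, alphabeta(child, game, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: MIN will never let play reach here
                break
        return value
    else:
        value = math.inf
        for child in game.moves(state):
            value = min(value, alphabeta(child, game, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:  # prune: MAX has a better option elsewhere
                break
        return value
```

With the alpha and beta arguments left at their defaults the function computes plain minimax; the pruning only ever skips branches that cannot change the root value.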
- Week 7: Adversarial Search, Machine Learning (Overview, Perceptrons) (Chapters 5, 18.1, 18.2, 18.6.3)

- Week 8: Machine Learning (Neural Networks) (Chapters 18.6.3, 18.6.4)
- Perceptron Learning Rule
- Perceptron example - you can change any of the items in the cells in orange (alpha, the initial weights, and what training data you give it)
- Sample exam questions
- More sample exam questions
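The perceptron learning rule from this week can be sketched in a few lines (a minimal illustration, assuming labels in {0, 1} and a bias weight w[0] with a fixed input x0 = 1, matching the spreadsheet example's alpha and weights):

```python
def perceptron_train(data, alpha=0.1, epochs=20, w=None):
    """Train a single perceptron: data is a list of (x, y) pairs with x a
    feature list and y in {0, 1}; w[0] is the bias weight (input x0 = 1)."""
    n = len(data[0][0])
    if w is None:
        w = [0.0] * (n + 1)
    for _ in range(epochs):
        for x, y in data:
            xs = [1.0] + list(x)  # prepend the bias input x0 = 1
            h = 1 if sum(wi * xi for wi, xi in zip(w, xs)) >= 0 else 0
            # perceptron rule: w_i <- w_i + alpha * (y - h) * x_i
            w = [wi + alpha * (y - h) * xi for wi, xi in zip(w, xs)]
    return w
```

On linearly separable data such as AND, the rule is guaranteed to converge to weights that classify every example correctly; on non-separable data (like XOR) it never settles.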

- Week 9: Machine Learning (Neural Networks) (Chapters 18.6.4, 18.7)
- Stochastic gradient descent vs Gradient descent
- Backpropagation Learning Algorithm
- Logistic regression example using stochastic gradient descent - this is identical to the Perceptron example except that the calculation for h now uses the logistic function, and the weight update rule includes the additional h*(1-h) term.
- Homework 4, due April 13, 1:25pm
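The logistic regression update just described - the perceptron rule with a sigmoid h and the extra h*(1-h) factor from the chain rule - can be sketched as follows (my own minimal version, minimizing squared error as in the textbook's treatment):

```python
import math

def logistic_sgd(data, alpha=0.5, epochs=1000, w=None):
    """Logistic regression trained by stochastic gradient descent.

    Same shape as the perceptron rule, but h is the logistic (sigmoid)
    output and the update carries the extra h*(1-h) derivative factor.
    """
    n = len(data[0][0])
    if w is None:
        w = [0.0] * (n + 1)
    for _ in range(epochs):
        for x, y in data:
            xs = [1.0] + list(x)  # bias input x0 = 1
            h = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, xs))))
            # SGD on squared error: w_i <- w_i + alpha*(y - h)*h*(1 - h)*x_i
            w = [wi + alpha * (y - h) * h * (1 - h) * xi for wi, xi in zip(w, xs)]
    return w
```

Because h*(1-h) shrinks as h saturates toward 0 or 1, the steps get smaller as the model becomes confident, in contrast to the perceptron's fixed-size corrections.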

- Week 10: Machine Learning (Naive Bayes, k-means Clustering)

- Week 11: Reinforcement Learning (Chapter 17.1-17.3)

- Week 12: Reinforcement Learning (Chapter 17.1-17.3)

- Week 13: Reinforcement Learning (Chapters 21.1, 21.3)
- Q-learning
- Propositional logic example
- Homework 5

Due: Wednesday, May 9, 1:25pm, but there will be no penalty for late submission until Monday, May 14, 1:25pm (when solutions will be released)

This homework will not be graded - you get full credit for submitting it.
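The Q-learning update from Week 13 can be sketched as a small tabular implementation. The `env` interface here (reset/actions/step) is hypothetical, not the course's code; Q is a dictionary keyed by (state, action) pairs:

```python
import random

def q_learning(env, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration.

    `env` is an assumed interface: reset() -> state, actions(state) -> list,
    step(state, action) -> (next_state, reward, done).
    """
    Q = {}
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            acts = env.actions(s)
            if random.random() < epsilon:
                a = random.choice(acts)  # explore
            else:
                a = max(acts, key=lambda act: Q.get((s, act), 0.0))  # exploit
            s2, r, done = env.step(s, a)
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = 0.0 if done else max(
                Q.get((s2, a2), 0.0) for a2 in env.actions(s2)
            )
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((s, a), 0.0)
            )
            s = s2
    return Q
```

Because the update bootstraps off max over the next state's Q-values rather than the action actually taken, Q-learning learns the optimal policy's values even while behaving epsilon-greedily (it is off-policy).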

- Week 14: Formal logic (Chapter 7.4-7.5)