- Lecture 1: Introduction, course details, what is learning theory, learning frameworks [slides]
Reference: [1] (ch. 1 and 3)
- Lecture 2: Learning Frameworks and Minimax Rates [lec]
Reference: [1] (ch. 1 and 3)
- Lecture 3: Minimax Rates, Comparing the Learning Frameworks, No Free Lunch Theorem [lec]
Reference: [1] (ch. 1 and 3)
- Lecture 4: Statistical Learning, Uniform Convergence [lec]
Reference: [1] (ch. 1 and 3)
- Lecture 5: Statistical Learning, Uniform Convergence, MDL, Infinite Classes [lec]
- Lecture 6: Symmetrization, Rademacher Complexity, Growth Function and VC Dimension [lec]
- Lecture 7: Growth Function, VC Dimension, Sauer-Shelah-VC Lemma and Massart's Finite Lemma [lec]
- Lecture 8: Massart's Finite Lemma, Properties of Rademacher Complexity [lec]
- Lecture 9: Properties of Rademacher Complexity, Examples [lec]
- Lecture 10: Covering Number, Pollard's Bound, Dudley Integral Complexity [lec]
- Lecture 11: Wrapping up Statistical Learning Theory [lec]
- Lecture 12: Online Learning, Bit Prediction [lec]
- Guest Lecture by Bobby Kleinberg: Multiplicative Weights
- Lecture 13: Online Learning, Bit Prediction, Cover's result [lec]
- Lecture 14: Learning with Covariates, Exponential Weights Algorithm [lec]
- Lecture 15: Online Convex Optimization and Online Gradient Descent [lec]
- Lecture 16: Online Mirror Descent [lec] [Supplementary material]
- Lecture 17: Online Mirror Descent: Strongly convex and Exp-concave losses [lec]
- Lecture 18: General Online Learning and Relaxations [lec]
- Lecture 19: Sufficient Statistics and Online Learning [lec]
- Lecture 20: Burkholder Method [lec]
- Lecture 21: Burkholder Method [lec]
- Lecture 22: Burkholder Method [lec]
- Lecture 23: Matrix Prediction Via Burkholder Method [lec]
- Lecture 24: Matrix Prediction Via Burkholder Method [lec]
- Lecture 25: Random Playouts and Last Lecture [lec]
- Lecture 26: Last Lecture [lec]
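
As a small illustration of the material in Lecture 14 and the multiplicative-weights guest lecture, here is a minimal sketch of the exponential weights algorithm for prediction with expert advice. The function name and loss encoding are illustrative choices, not taken from the course notes; with losses in [0, 1] and learning rate eta, the algorithm's regret is at most log(N)/eta + eta*T/8.

```python
import numpy as np

def exponential_weights(loss_matrix, eta):
    """Run exponential weights over N experts for T rounds.

    loss_matrix[t, i] is the loss of expert i at round t, assumed in [0, 1].
    Returns the regret: the algorithm's cumulative expected loss minus the
    cumulative loss of the single best expert in hindsight.
    """
    T, N = loss_matrix.shape
    log_w = np.zeros(N)          # log-weights; uniform start
    total_loss = 0.0
    for t in range(T):
        # Normalize weights in log-space for numerical stability.
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        total_loss += p @ loss_matrix[t]   # expected loss this round
        log_w -= eta * loss_matrix[t]      # multiplicative update
    return total_loss - loss_matrix.sum(axis=0).min()
```

With the tuned rate eta = sqrt(8 log(N) / T), the bound above becomes the familiar sqrt((T/2) log N) regret guarantee.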