- Lecture 1: Introduction, course details, what is learning theory, learning frameworks [slides]
Reference: [1] (Ch. 1 and 3)
- Lecture 2: Learning frameworks, Minimax Rates [pdf]
- Lecture 3: No Free Lunch Theorem, ERM, Rates for finite classes [pdf]
- Lecture 4: MDL, Uniform rates, Infinite class and Symmetrization [pdf]
- Lecture 5: Symmetrization, Rademacher Complexity, Effective Size [pdf]
- Lecture 6: Effective size, VC Dimension, Learnability and VC/Sauer/Shelah Lemma [pdf]
- Lecture 7: Massart's Finite Lemma, Properties of Rademacher Complexity [pdf] (see the Rademacher/Massart sketch after the schedule)
- Lecture 8: Properties of Rademacher Complexity, Contraction Lemma, Examples [pdf]
- Lecture 9: Covering Numbers, Pollard Bound and Dudley Chaining [pdf]
- Lecture 10: Covering Numbers, Pollard Bound and Dudley Chaining [pdf]
- Lecture 11: Wrapping up Statistical Learning [pdf]
- Lecture 12: Online Learning: Bit prediction [pdf] (see the bit-prediction sketch after the schedule)
- Lecture 13: Online Learning: Bit prediction continued + Linear betting game [pdf]
- Lecture 14: Online Learning: Bit prediction continued + Linear betting game [pdf]
- Lecture 15: Online Convex Optimization: Setting + Online to batch + Gradient Descent [pdf] (see the online gradient descent sketch after the schedule)
- Lecture 16: Online Convex Optimization: Setting + Online to batch + Gradient Descent [pdf]
- Lecture 17: Online Mirror Descent [pdf]
- Lecture 18: Online Mirror Descent and Faster Rates [pdf]
- Lecture 19: Betting with Arbitrary Covariates [pdf]
- Lecture 20: Betting with Arbitrary Covariates [pdf]
- Lecture 21: Sequential Rademacher Complexity [pdf]
- Lecture 22: Burkholder Method for Supervised Learning with Convex Losses [pdf]
- Matrix Completion using the Burkholder Method [pdf]
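To make the empirical Rademacher complexity and Massart's finite lemma (Lectures 5-8) concrete, here is a minimal Monte Carlo sketch, not taken from the lecture notes: it fixes an arbitrary finite class of [-1, 1]-valued prediction vectors, estimates E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ] by sampling Rademacher signs, and compares the estimate to Massart's bound max_f ||f||_2 sqrt(2 log|F|) / n. The class, sample size, and number of trials are illustrative choices.

```python
# Minimal sketch: Monte Carlo estimate of empirical Rademacher complexity
# for a finite class, compared against Massart's finite class lemma.
# The class below (random [-1, 1]-valued vectors) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n, num_functions, num_trials = 200, 50, 2000

# Each row is one function's values on a fixed sample x_1..x_n, in [-1, 1].
F = rng.uniform(-1.0, 1.0, size=(num_functions, n))

# Empirical Rademacher complexity: E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ].
sigma = rng.choice([-1.0, 1.0], size=(num_trials, n))
correlations = sigma @ F.T / n               # shape (num_trials, num_functions)
rad_hat = correlations.max(axis=1).mean()    # Monte Carlo estimate of the expected sup

# Massart's finite class lemma: bound = max_f ||f||_2 * sqrt(2 log|F|) / n.
massart_bound = np.linalg.norm(F, axis=1).max() * np.sqrt(2.0 * np.log(num_functions)) / n
print(f"estimated Rademacher complexity: {rad_hat:.4f}")
print(f"Massart finite-class bound:      {massart_bound:.4f}")
```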
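For the bit prediction lectures (12-14), the following sketch runs the standard exponential weights (Hedge) algorithm over the two constant predictors and reports the expected regret against the best constant bit in hindsight. Whether the lectures analyze this particular algorithm or a minimax strategy is not assumed here; the bit sequence and the learning rate tuning are illustrative.

```python
# Minimal sketch: randomized bit prediction via exponential weights over the
# two constant experts (always 0, always 1), measured against the best
# constant predictor in hindsight. Sequence and tuning are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
bits = (rng.random(T) < 0.7).astype(float)   # an arbitrary bit sequence

eta = np.sqrt(8.0 * np.log(2) / T)           # standard tuning for 2 experts, losses in [0,1]
cum_loss = np.zeros(2)                       # cumulative losses of experts "0" and "1"
expected_loss = 0.0
for y in bits:
    weights = np.exp(-eta * cum_loss)
    p1 = weights[1] / weights.sum()                 # probability of predicting 1
    expected_loss += p1 * (1 - y) + (1 - p1) * y    # expected 0/1 loss this round
    cum_loss += np.array([y, 1 - y])                # expert "0" errs when y=1, expert "1" when y=0

best_constant = cum_loss.min()
print(f"expected regret: {expected_loss - best_constant:.2f}  "
      f"(bound sqrt(T ln 2 / 2) = {np.sqrt(T * np.log(2) / 2):.2f})")
```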
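For the online convex optimization lectures (15-17), here is a minimal sketch of projected online gradient descent on the Euclidean unit ball with adversarial linear losses, using the standard step size eta_t = B/(G sqrt(t)). The domain, the loss sequence, and the tuning are assumptions made for illustration, not the exact setup of the lecture pdfs.

```python
# Minimal sketch: projected online gradient descent on the unit ball with
# linear losses l_t(w) = <g_t, w>; setup and step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, T, B = 5, 1000, 1.0                      # dimension, horizon, ball radius

def project_to_ball(w, radius=B):
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

w = np.zeros(d)
losses, grads = [], []
for t in range(1, T + 1):
    g = rng.uniform(-1.0, 1.0, size=d)      # adversary's loss vector; here G = sqrt(d)
    losses.append(g @ w)                    # incur linear loss <g_t, w_t>
    grads.append(g)
    eta = B / (np.sqrt(d) * np.sqrt(t))     # eta_t = B / (G * sqrt(t))
    w = project_to_ball(w - eta * g)        # gradient step, then project back to the ball

# For linear losses, the best fixed comparator in the ball is
# -B * sum(g_t) / ||sum(g_t)||, with total loss -B * ||sum(g_t)||.
G_sum = np.sum(grads, axis=0)
best_fixed_loss = -B * np.linalg.norm(G_sum)
regret = sum(losses) - best_fixed_loss
print(f"regret after T={T} rounds: {regret:.2f}  "
      f"(O(G*B*sqrt(T)) scale: {np.sqrt(d) * B * np.sqrt(T):.2f})")
```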