- Lecture 1: Introduction, course details, what is learning theory, learning frameworks [slides]
Reference: [1] (ch. 1 and 3)
- Lecture 2: Learning frameworks, Minimax Rates [pdf]
- Lecture 3: No Free Lunch Theorem, ERM, Rates for finite classes, Going to Infinite Classes [pdf]
- Lecture 4: MDL Principle, Covering Numbers, Symmetrization [pdf]
- Lecture 5: Symmetrization, Rademacher Complexity and Massart's Finite Lemma [pdf]
- Lecture 6: Massart's Finite Lemma Continued, Binary Classification, Growth Function, VC dimension and VC Lemma [pdf]
- Lecture 7: Rademacher Complexity [pdf]
- Lecture 8: Rademacher Complexity [pdf]
- Lecture 9: Covering Numbers, Pollard and Dudley Bounds [pdf]
- Lecture 10: Wrapping Up Statistical Learning [pdf]
- Lecture 11: Online Games [pdf]
- Lecture 12: Online Games continued [pdf]
- Lecture 13: Online Convex Optimization [pdf]
- Lecture 14: Online Mirror Descent [pdf] [Supplementary Material]
- Lecture 15: Online Mirror Descent [pdf] [Supplementary Material]
- Lecture 16: Online Mirror Descent, Faster Rates for Curved Losses [pdf]
- Lecture 17: Relaxations for General Online Learning [pdf]
- Lecture 18: Relaxations for General Online Learning [pdf]
- Lecture 19: Online Linear Bandit Problem [pdf]
- Lecture 20: Online Linear Bandit Problem [pdf]
- Lecture 21: Stability and Learning [pdf]
- Lecture 22: Stability and Learning [pdf]
- Lecture 23: Stability Based Analysis [pdf]
- Lecture 24: Boosting and Online Learning [pdf] A nice illustrative example by Robert Schapire at [link]
- Lecture 25: Stochastic Multi-armed Bandit [pdf]
- Lecture 26: Stochastic Multi-armed Bandit, Lower Bounds [pdf]
- Lecture 27: Contextual Bandit and Semi-Bandits [pdf]
- Lecture 28: Last Lecture [pdf]
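
The online learning lectures (Lectures 11-16) revolve around regret bounds for algorithms such as exponential weights, which can be viewed as online mirror descent with the entropic regularizer. As a small illustration of that material, below is a minimal sketch in Python of the exponential weights (Hedge) algorithm for prediction with expert advice; the function name, the learning-rate tuning, and the toy loss data are illustrative choices of this write-up, not taken from the course notes.

```python
import numpy as np

def exponential_weights(loss_matrix, eta):
    """Exponential weights (Hedge) on a T x N loss matrix.

    loss_matrix[t, i] is the loss of expert i at round t (assumed in [0, 1]).
    Returns the played weight vectors and the learner's cumulative loss.
    """
    T, N = loss_matrix.shape
    log_w = np.zeros(N)                 # log-weights, start from the uniform distribution
    total_loss = 0.0
    weights_history = []
    for t in range(T):
        w = np.exp(log_w - log_w.max())  # shift for numerical stability
        p = w / w.sum()                  # play the normalized weight vector
        weights_history.append(p)
        total_loss += p @ loss_matrix[t]
        log_w -= eta * loss_matrix[t]    # multiplicative update on the weights
    return np.array(weights_history), total_loss

# Toy run: 2 experts, one consistently better, with the standard tuning
# eta = sqrt(2 ln(N) / T) that yields O(sqrt(T ln N)) regret.
rng = np.random.default_rng(0)
T, N = 1000, 2
losses = np.column_stack([rng.binomial(1, 0.3, T),
                          rng.binomial(1, 0.6, T)]).astype(float)
eta = np.sqrt(2 * np.log(N) / T)
_, learner_loss = exponential_weights(losses, eta)
print(learner_loss, losses.sum(axis=0).min())  # learner's loss vs. best expert's loss
```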