{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Lecture 5: Stochastic gradient descent\n",
"\n",
"## CS4787 — Principles of Large-Scale Machine Learning Systems\n",
"\n",
"$\\newcommand{\\R}{\\mathbb{R}}$\n",
"$\\newcommand{\\norm}[1]{\\left\\|#1\\right\\|}$"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy\n",
"import scipy\n",
"import matplotlib\n",
"from matplotlib import pyplot\n",
"import time"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Where we left off...\n",
"\n",
"Gradient descent converges, both for strongly convex loss functions, in which case (with appropriate step size setting) it is guaranteed to reach an objective gap of no more than $\\epsilon$ (i.e. $f(w_T) - f^* \\le \\epsilon$) after $T$ iterations if\n",
"\n",
"$$T \\ge \\kappa \\cdot \\log\\left( \\frac{f(w_0) - f^*}{\\epsilon} \\right)$$\n",
"\n",
"where $\\kappa$ is the _condition number_ and measures how hard the problem is to solve. We also saw that GD converges even under weaker conditions, where all we have is an L-smoothness bound, in which case for the largest allowable step size $\\alpha = 1/L$ we'd get\n",
"\n",
"$$\\min_{t \\in \\{0,\\ldots,T-1\\}} \\| \\nabla f(w_t) \\|^2 \\le \\frac{2L (f(w_0) - f^*)}{T}.$$"
]
},
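   {
    "cell_type": "markdown",
    "metadata": {
     "slideshow": {
      "slide_type": "slide"
     }
    },
    "source": [
     "A minimal numerical sketch (not part of the original lecture) of this guarantee: gradient descent with step size $\\alpha = 1/L$ on a small synthetic least-squares problem, where $L$ is the largest eigenvalue of the Hessian. All names and problem sizes below are made up for the demo.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# illustrative sketch: gradient descent on a synthetic least-squares problem\n",
     "# f(w) = (1/2n) ||Xw - y||^2, which is smooth and strongly convex\n",
     "numpy.random.seed(0)\n",
     "n, d = 100, 5\n",
     "X = numpy.random.randn(n, d)\n",
     "y = X @ numpy.random.randn(d) + 0.1 * numpy.random.randn(n)\n",
     "\n",
     "def f(w):\n",
     "    return 0.5 * numpy.mean((X @ w - y)**2)\n",
     "\n",
     "def grad_f(w):\n",
     "    return X.T @ (X @ w - y) / n\n",
     "\n",
     "L = numpy.linalg.eigvalsh(X.T @ X / n).max()  # smoothness constant\n",
     "w = numpy.zeros(d)\n",
     "for t in range(100):\n",
     "    w = w - (1.0 / L) * grad_f(w)  # largest allowable step size alpha = 1/L\n",
     "print(f(w))  # objective after T = 100 iterations"
    ]
   },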
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Stochastic Gradient Descent\n",
"\n",
"Basic idea: **in gradient descent, just replace the full gradient (which is a sum) with a single gradient example**. Initialize the parameters at some value $w_0 \\in \\R^d$, and decrease the value of the empirical risk iteratively by sampling a random index $\\tilde i_t$ uniformly from $\\{1, \\ldots, n\\}$ and then updating\n",
"\n",
"$w_{t+1} = w_t - \\alpha_t \\cdot \\nabla f_{\\tilde i_t}(w_t) = w_t - \\alpha_t \\cdot \\nabla \\ell(w_t; x_{i_t}, y_{i_t})$\n",
"\n",
"where as usual $w_t$ is the value of the parameter vector at time $t$, $\\alpha_t$ is the _learning rate_ or _step size_, and $\\nabla f_i$ denotes the gradient of the loss function of the $i$th training example.\n",
"Compared with gradient descent and Newton's method, SGD is simple to implement and runs each iteration faster."
]
},
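   {
    "cell_type": "markdown",
    "metadata": {
     "slideshow": {
      "slide_type": "slide"
     }
    },
    "source": [
     "A minimal sketch (not the course's reference implementation) of this update rule on a toy least-squares problem. The problem data and the constant step size below are chosen ad hoc for illustration.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# illustrative SGD sketch on a toy least-squares problem\n",
     "numpy.random.seed(0)\n",
     "n, d = 100, 5\n",
     "X = numpy.random.randn(n, d)\n",
     "y = X @ numpy.random.randn(d)  # noiseless labels for this demo\n",
     "\n",
     "def grad_fi(w, i):\n",
     "    # gradient of the squared loss on the single example (x_i, y_i)\n",
     "    return (X[i] @ w - y[i]) * X[i]\n",
     "\n",
     "alpha = 0.01  # constant step size, picked ad hoc for the demo\n",
     "w = numpy.zeros(d)\n",
     "for t in range(5000):\n",
     "    i = numpy.random.randint(n)        # sample an index uniformly\n",
     "    w = w - alpha * grad_fi(w, i)      # step on that one example only\n",
     "print(0.5 * numpy.mean((X @ w - y)**2))  # final training loss"
    ]
   },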
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### A potential objection!\n",
"\n",
"**This is not necessarily going to be decreasing the loss at every step!**\n",
"\n",
"* Because we're just moving in a direction that will decrease the loss _for one particular example_: this won't necessarily decrease the total loss!\n",
"\n",
"* So we can't demonstrate convergence by using a proof like the one we used for gradient descent, where we showed that the loss decreases at every iteration of the algorithm.\n",
"\n",
"* The fact that SGD doesn't always improve the loss at each iteration motivates the question: **does SGD even work? And if so, why does SGD work?**"
]
},
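   {
    "cell_type": "markdown",
    "metadata": {
     "slideshow": {
      "slide_type": "slide"
     }
    },
    "source": [
     "A small numerical illustration (not from the lecture notes): run SGD on a noisy least-squares problem, track the _full_ training loss at every step, and count how often it goes **up**. The loss trends downward overall even though individual steps frequently make it worse. The problem data and step size here are made up for the demo.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# illustrative sketch: SGD is not a descent method on the full loss\n",
     "numpy.random.seed(0)\n",
     "n, d = 100, 5\n",
     "X = numpy.random.randn(n, d)\n",
     "y = X @ numpy.random.randn(d) + 0.5 * numpy.random.randn(n)\n",
     "\n",
     "def f(w):\n",
     "    return 0.5 * numpy.mean((X @ w - y)**2)\n",
     "\n",
     "alpha = 0.05\n",
     "w = numpy.zeros(d)\n",
     "losses = [f(w)]\n",
     "for t in range(500):\n",
     "    i = numpy.random.randint(n)\n",
     "    w = w - alpha * (X[i] @ w - y[i]) * X[i]\n",
     "    losses.append(f(w))  # full loss, even though the step used one example\n",
     "\n",
     "increases = sum(b > a for a, b in zip(losses, losses[1:]))\n",
     "print(increases, losses[0], losses[-1])  # many up-steps, yet a net decrease"
    ]
   },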
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## This time, let's do the derivation on the white board!\n",
"\n",
"If you want the full thing in text form, it's in the notes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"@webio": {
"lastCommId": null,
"lastKernelId": null
},
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"rise": {
"scroll": true,
"transition": "none"
}
},
"nbformat": 4,
"nbformat_minor": 2
}