Estimating Probabilities from data
Remember that for the Bayes Optimal classifier, "all we needed" was P(Y|X). Most of supervised learning can be viewed as estimating P(X,Y).
There are two cases of supervised learning:
- When we estimate P(Y|X) directly, then we call it discriminative learning.
- When we estimate P(X|Y)P(Y), then we call it generative learning.
So where do these probabilities come from?
There are many ways to estimate these probabilities from data.
Simple scenario: coin toss
Suppose you find a coin and it's ancient and very valuable. Naturally, you ask yourself, "What is the probability that it comes up heads when I toss it?"
You toss it n=10 times and get results: H,T,T,H,H,H,T,T,T,T.
Maximum Likelihood Estimation (MLE)
What is P(H)=θ?
We observed $n_H$ heads and $n_T$ tails. So, intuitively,
$$\theta \approx \frac{n_H}{n_H + n_T} = 0.4$$
Can we derive this?
MLE Principle: Choose θ to maximize the likelihood of the data, $P(\text{Data}; \theta)$, where
$$P(\text{Data}; \theta) = \binom{n_H + n_T}{n_H}\, \theta^{n_H} (1-\theta)^{n_T}$$
i.e.
$$\begin{aligned}
\theta &= \operatorname*{argmax}_{\theta}\, \binom{n_H + n_T}{n_H}\, \theta^{n_H} (1-\theta)^{n_T} \\
&= \operatorname*{argmax}_{\theta}\, \log\binom{n_H + n_T}{n_H} + n_H \log(\theta) + n_T \log(1-\theta) \\
&= \operatorname*{argmax}_{\theta}\, n_H \log(\theta) + n_T \log(1-\theta)
\end{aligned}$$
We can now solve for θ by taking the derivative and equating it to zero. This results in
$$\frac{n_H}{\theta} = \frac{n_T}{1-\theta} \;\Longrightarrow\; n_H - n_H\theta = n_T\theta \;\Longrightarrow\; \theta = \frac{n_H}{n_H + n_T}$$
Check: $1 \geq \theta \geq 0$ (no additional constraints were necessary; the solution already lies in this range)
- MLE gives the θ that best explains the data you observed, i.e. the value under which the observed data is most likely.
- But the MLE can overfit the data if n is small. It works well when n is large.
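The closed-form estimate above is easy to check numerically. Below is a minimal Python sketch (not from the original lecture; names are chosen for illustration) that computes the MLE from the ten tosses and confirms by grid search that it maximizes the log-likelihood.

```python
# Sketch: compute the MLE for the coin-toss data above and verify numerically
# that it maximizes the log-likelihood n_H*log(theta) + n_T*log(1 - theta).
import numpy as np

tosses = list("HTTHHHTTTT")          # the n = 10 observed tosses
n_H = tosses.count("H")              # 4 heads
n_T = tosses.count("T")              # 6 tails

theta_mle = n_H / (n_H + n_T)        # closed-form MLE = 0.4

# Numerical check: evaluate the log-likelihood on a grid of theta values.
thetas = np.linspace(0.001, 0.999, 999)
log_lik = n_H * np.log(thetas) + n_T * np.log(1 - thetas)
theta_grid = thetas[np.argmax(log_lik)]

print(theta_mle, theta_grid)         # both are (approximately) 0.4
```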
Maximum a Posteriori Probability Estimation (MAP)
Assume you have a hunch that θ is close to 0.5. But your sample size is small, so you don't trust your estimate.
Simple fix: Add m imaginary throws that would result in θ′ (e.g. θ′=0.5). Add m Heads and m Tails to your data.
$$\theta \leftarrow \frac{n_H + m}{n_H + n_T + 2m}$$
For large n, this is an insignificant change.
For small n, it incorporates your "prior belief" about what θ should be.
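A small sketch of this "imaginary throws" fix, assuming θ′ = 0.5 as in the example above; it shows the estimate being pulled toward 0.5 when n is small and barely changing when n is large.

```python
# Sketch: add m imaginary heads and m imaginary tails (theta' = 0.5) to the counts.
def smoothed_estimate(n_H, n_T, m):
    """theta <- (n_H + m) / (n_H + n_T + 2m)."""
    return (n_H + m) / (n_H + n_T + 2 * m)

print(smoothed_estimate(4, 6, 0))       # 0.4   (plain MLE, no imaginary throws)
print(smoothed_estimate(4, 6, 10))      # ~0.467 (small n: pulled toward 0.5)
print(smoothed_estimate(400, 600, 10))  # ~0.402 (large n: insignificant change)
```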
Can we derive this update formally?
Let θ be a random variable, drawn from a Dirichlet distribution.
- Note: Here we transcend into Bayesian statistics. θ is not a random variable associated with an event in a sample space; treating it as a random variable is forbidden in frequentist statistics but allowed in Bayesian statistics.
- In lecture, the Dirichlet distribution was briefly introduced as a probability distribution over probability distributions. All this really means is that a Dirichlet distribution is a distribution over a (k−1)-dimensional probability simplex: each sample from this distribution has components that are all greater than or equal to 0 and sum to 1 (the sample itself has k components, but the simplex is (k−1)-dimensional because of the constraints required to be a valid probability distribution). To build intuition, imagine rolling a poorly made 6-sided die. Before rolling anything, you must draw a die from a bag full of dice of different sizes. Drawing a die from this bag is just like sampling from a Dirichlet distribution: the Dirichlet distribution says which dice are more likely and which are less likely. For example, you could have a strong belief that dice with roughly even face probabilities are likely, whereas dice with highly skewed probabilities (e.g. dice where only "1" ever comes up) are extremely unlikely. After drawing the die, your die now represents yet another probability distribution, this time over its 6 faces (a small sampling sketch follows this list).
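To make the bag-of-dice picture concrete, here is a small sampling sketch; the concentration parameters are made up for illustration. Each Dirichlet sample is itself a probability distribution over the six faces.

```python
# Sketch: sampling "dice" from two different Dirichlet distributions.
import numpy as np

rng = np.random.default_rng(0)

# Large, equal concentration parameters -> strong belief in roughly fair dice.
fair_bag = rng.dirichlet(alpha=[50, 50, 50, 50, 50, 50], size=3)

# Small parameters -> highly skewed dice (e.g. mostly "1") become much more likely.
skewed_bag = rng.dirichlet(alpha=[0.2, 0.2, 0.2, 0.2, 0.2, 0.2], size=3)

print(fair_bag)              # rows are close to uniform (each face ~1/6)
print(skewed_bag)            # rows tend to put most mass on a single face
print(fair_bag.sum(axis=1))  # every sampled die sums to 1
```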
As θ is a random variable drawn from a Dirichlet distribution, we can express P(θ) as
$$P(\theta) = \frac{\theta^{\beta_1 - 1}\,(1-\theta)^{\beta_0 - 1}}{B(\beta_1, \beta_0)}$$
where $B(\beta_1, \beta_0)$ is the normalization constant. Note that this is also the formulation of the Beta distribution, which is the special case of the Dirichlet distribution with exactly two free parameters; the Dirichlet distribution is the multivariate generalization of the Beta distribution.
For the MAP estimate, we pick the most likely θ given the data.
$$\begin{aligned}
\theta &= \operatorname*{argmax}_{\theta}\, P(\theta \mid \text{Data}) \\
&= \operatorname*{argmax}_{\theta}\, \frac{P(\text{Data} \mid \theta)\, P(\theta)}{P(\text{Data})} && \text{(by Bayes' rule)} \\
&= \operatorname*{argmax}_{\theta}\, \log\big(P(\text{Data} \mid \theta)\big) + \log\big(P(\theta)\big) \\
&= \operatorname*{argmax}_{\theta}\, n_H \log(\theta) + n_T \log(1-\theta) + (\beta_1 - 1)\log(\theta) + (\beta_0 - 1)\log(1-\theta) \\
&= \operatorname*{argmax}_{\theta}\, (n_H + \beta_1 - 1)\log(\theta) + (n_T + \beta_0 - 1)\log(1-\theta)
\end{aligned}$$
$$\Longrightarrow\; \theta = \frac{n_H + \beta_1 - 1}{n_H + n_T + \beta_1 + \beta_0 - 2}$$
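As a quick numeric sanity check of this formula (the Beta(5, 5) prior below is an arbitrary illustrative choice), note that a flat Beta(1, 1) prior recovers the MLE, while a prior centered at 0.5 pulls the estimate toward 0.5.

```python
# Sketch: evaluate theta_MAP = (n_H + beta1 - 1) / (n_H + n_T + beta1 + beta0 - 2).
def map_estimate(n_H, n_T, beta1, beta0):
    return (n_H + beta1 - 1) / (n_H + n_T + beta1 + beta0 - 2)

print(map_estimate(4, 6, 1, 1))   # 0.4    -> flat Beta(1,1) prior recovers the MLE
print(map_estimate(4, 6, 5, 5))   # ~0.444 -> prior pulls the estimate toward 0.5
```

Note that Beta(5, 5) corresponds to m = 4 imaginary heads and 4 imaginary tails in the earlier "imaginary throws" fix, i.e. the MAP estimate formalizes that heuristic.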
- MAP is a great estimator if prior belief exists and is accurate.
- It can be very wrong if prior belief is wrong!
"True" Bayesian approach
Let θ be our parameter.
We allow θ to be a random variable. If we are to make prediction using θ, we formulate the prediction as
$$P(Y \mid X, D) = \int_{\theta} P(Y \mid X, \theta, D)\, P(\theta \mid D)\, d\theta = \int_{\theta} P(Y \mid X, \theta)\, P(\theta \mid D)\, d\theta$$
This is called the posterior predictive distribution. Unfortunately, the integral above is generally intractable in closed form, and other techniques, such as Monte Carlo approximations, are used to approximate the distribution.
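For the coin example this can be made concrete with a simple Monte Carlo sketch (again assuming a Beta prior, an illustrative choice; in this conjugate case the exact answer is also available, which lets us check the approximation).

```python
# Sketch: Monte Carlo approximation of the posterior predictive P(heads | D)
# for the coin, under a Beta(beta1, beta0) prior.
import numpy as np

n_H, n_T = 4, 6
beta1, beta0 = 5, 5

rng = np.random.default_rng(0)
# The posterior over theta is Beta(n_H + beta1, n_T + beta0).
theta_samples = rng.beta(n_H + beta1, n_T + beta0, size=100_000)

# P(heads | D) = E_{theta ~ posterior}[ P(heads | theta) ] = E[theta].
mc_estimate = theta_samples.mean()
exact = (n_H + beta1) / (n_H + n_T + beta1 + beta0)   # closed form for this conjugate case

print(mc_estimate, exact)   # both ~ 0.45
```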