
Estimating Probabilities from data

Remember the Bayes Optimal classifier: "all we needed" was $P(Y|X)$. Most of supervised learning can be viewed as estimating $P(X,Y)$.

There are two cases of supervised learning: either we estimate $P(Y|X)$ directly from the data (discriminative learning), or we estimate $P(X|Y)$ and $P(Y)$ and obtain $P(Y|X)$ via Bayes' rule (generative learning). So where do these probabilities come from?

There are many ways to estimate these probabilities from data.

Simple scenario: coin toss

Suppose you find a coin and it's ancient and very valuable. Naturally, you ask yourself, "What is the probability that it comes up heads when I toss it?" You toss it $n=10$ times and obtain the following sequence of outcomes: $H, T, T, H, H, H, T, T, T, T$.

Maximum Likelihood Estimation (MLE)

What is $P(H) = \theta$?

We observed $n_H$ heads and $n_T$ tails. So, intuitively,
$$\hat\theta \approx \frac{n_H}{n_H + n_T} = 0.4$$
Can we derive this?

MLE Principle: Choose $\theta$ to maximize the likelihood of the data, $P(\text{Data})$, where $P(\text{Data})$ is defined as
$$P(\text{Data}) = \binom{n_H + n_T}{n_H} \theta^{n_H} (1-\theta)^{n_T}$$
i.e.
$$
\begin{aligned}
\hat\theta &= \operatorname*{argmax}_{\theta} \binom{n_H + n_T}{n_H} \theta^{n_H} (1-\theta)^{n_T} \\
&= \operatorname*{argmax}_{\theta} \log\binom{n_H + n_T}{n_H} + n_H \log(\theta) + n_T \log(1-\theta) \\
&= \operatorname*{argmax}_{\theta} \; n_H \log(\theta) + n_T \log(1-\theta)
\end{aligned}
$$
We can now solve for $\theta$ by taking the derivative and equating it to zero. This results in
$$\frac{n_H}{\theta} = \frac{n_T}{1-\theta} \;\Longrightarrow\; n_H - n_H\theta = n_T\theta \;\Longrightarrow\; \hat\theta = \frac{n_H}{n_H + n_T}$$

Check: $1 \geq \hat\theta \geq 0$ (no constraints necessary)
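
To make this concrete, here is a small sketch (not part of the original notes) that recomputes the estimate for the toss sequence above; the grid search over $\theta$ is only a numerical sanity check that the closed-form MLE really maximizes the log-likelihood.

```python
# Hypothetical illustration: MLE for the coin-toss example above.
import numpy as np

tosses = list("HTTHHHTTTT")        # the n = 10 observed outcomes
n_H = tosses.count("H")            # 4 heads
n_T = tosses.count("T")            # 6 tails

theta_mle = n_H / (n_H + n_T)      # closed-form MLE
print(theta_mle)                   # 0.4

# Sanity check: n_H*log(theta) + n_T*log(1 - theta) peaks at the same value.
grid = np.linspace(1e-6, 1 - 1e-6, 100_001)
log_lik = n_H * np.log(grid) + n_T * np.log(1 - grid)
print(grid[np.argmax(log_lik)])    # ~0.4
```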

Maximum a Posteriori Probability Estimation (MAP)

Assume you have a hunch that $\theta$ is close to $0.5$. But your sample size is small, so you don't trust your estimate.

Simple fix: Add $m$ imaginary throws that would result in $\theta$ (e.g. $\theta = 0.5$). Add $m$ heads and $m$ tails to your data.
$$\hat\theta \approx \frac{n_H + m}{n_H + n_T + 2m}$$
For large $n$, this is an insignificant change. For small $n$, it incorporates your "prior belief" about what $\theta$ should be.
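
As a quick illustration (not from the notes), the smoother below uses the counts from the toss sequence above; the value $m = 10$ is an arbitrary choice to show the pull toward $0.5$, and the prior's influence vanishes as the counts grow.

```python
# Hypothetical illustration: the "m imaginary throws" smoother with theta = 0.5.
def smoothed_estimate(n_H, n_T, m):
    """(n_H + m) / (n_H + n_T + 2m): pulls the estimate toward 0.5."""
    return (n_H + m) / (n_H + n_T + 2 * m)

print(smoothed_estimate(4, 6, 0))         # 0.4    -> plain MLE
print(smoothed_estimate(4, 6, 10))        # ~0.467 -> pulled toward the prior belief
print(smoothed_estimate(4000, 6000, 10))  # ~0.400 -> prior washed out for large n
```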

Can we derive this update formally?

Let $\theta$ be a random variable drawn from a Dirichlet distribution, so that we can express $P(\theta)$ as
$$P(\theta) = \frac{\theta^{\beta_1 - 1}(1-\theta)^{\beta_0 - 1}}{B(\beta_1, \beta_0)}$$
where $B(\beta_1, \beta_0)$ is the normalization constant. Note that this is also the formulation of the Beta distribution, which is the special case of the Dirichlet distribution with exactly two free parameters; the Dirichlet distribution is the multivariate generalization of the Beta distribution.
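
For intuition, here is a sketch (not part of the notes) that evaluates this prior density with SciPy's beta distribution, whose shape parameters correspond to $\beta_1$ and $\beta_0$; the choice $\beta_1 = \beta_0 = 3$ is arbitrary and simply concentrates mass around $\theta = 0.5$.

```python
# Hypothetical illustration: evaluating the prior P(theta) = Beta(beta_1, beta_0).
import numpy as np
from scipy.stats import beta

beta_1, beta_0 = 3, 3                    # assumed values, encoding a belief that theta is near 0.5
theta = np.linspace(0.1, 0.9, 5)
print(beta.pdf(theta, beta_1, beta_0))   # theta^(beta_1-1) (1-theta)^(beta_0-1) / B(beta_1, beta_0)
# The density peaks at (beta_1 - 1) / (beta_1 + beta_0 - 2) = 0.5 for this choice.
```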

For the MAP estimate, we pick the most likely $\theta$ given the data:
$$
\begin{aligned}
\hat\theta &= \operatorname*{argmax}_{\theta} P(\theta \mid \text{Data}) \\
&= \operatorname*{argmax}_{\theta} \frac{P(\text{Data} \mid \theta)\, P(\theta)}{P(\text{Data})} && \text{(by Bayes' rule)} \\
&= \operatorname*{argmax}_{\theta} \log P(\text{Data} \mid \theta) + \log P(\theta) \\
&= \operatorname*{argmax}_{\theta} \; n_H \log(\theta) + n_T \log(1-\theta) + (\beta_1 - 1)\log(\theta) + (\beta_0 - 1)\log(1-\theta) \\
&= \operatorname*{argmax}_{\theta} \; (n_H + \beta_1 - 1)\log(\theta) + (n_T + \beta_0 - 1)\log(1-\theta)
\end{aligned}
$$
$$\hat\theta = \frac{n_H + \beta_1 - 1}{n_H + n_T + \beta_0 + \beta_1 - 2}$$
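
As a sanity check (a sketch, not from the notes), the closed form above can be evaluated directly; with $\beta_1 = \beta_0 = m + 1$ it reproduces the "$m$ imaginary throws" fix, and with $\beta_1 = \beta_0 = 1$ (a uniform prior) it falls back to the MLE.

```python
# Hypothetical illustration: the closed-form MAP estimate for the coin.
def theta_map(n_H, n_T, beta_1, beta_0):
    return (n_H + beta_1 - 1) / (n_H + n_T + beta_1 + beta_0 - 2)

print(theta_map(4, 6, 1, 1))     # 0.4    -> uniform prior recovers the MLE
print(theta_map(4, 6, 11, 11))   # ~0.467 -> identical to adding m = 10 heads and m = 10 tails
```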

"True" Bayesian approach

Let $\theta$ be our parameter.

We allow $\theta$ to be a random variable. If we are to make a prediction using $\theta$, we formulate the prediction as
$$P(Y \mid X, D) = \int_{\theta} P(Y \mid X, \theta, D)\, P(\theta \mid D)\, d\theta = \int_{\theta} P(Y \mid X, \theta)\, P(\theta \mid D)\, d\theta$$
This is called the posterior predictive distribution. Unfortunately, the above integral is generally intractable in closed form, and various other techniques, such as Monte Carlo approximations, are used to approximate the distribution.
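
For the coin example the integral does have a closed form: with a Beta prior, the posterior $P(\theta \mid D)$ is $\text{Beta}(n_H + \beta_1, n_T + \beta_0)$, and $P(\text{heads} \mid D)$ is its mean. The sketch below (not part of the notes; the values of $\beta_1, \beta_0$ are arbitrary) approximates that same integral by Monte Carlo, i.e. by averaging over samples of $\theta$ drawn from the posterior.

```python
# Hypothetical illustration: Monte Carlo approximation of the posterior predictive
# P(heads | D) = integral over theta of theta * P(theta | D) d(theta).
import numpy as np

n_H, n_T = 4, 6
beta_1, beta_0 = 3, 3                     # assumed prior parameters

# Exact answer: mean of the Beta(n_H + beta_1, n_T + beta_0) posterior.
exact = (n_H + beta_1) / (n_H + n_T + beta_1 + beta_0)

# Monte Carlo: draw theta from the posterior and average the predictions.
rng = np.random.default_rng(0)
samples = rng.beta(n_H + beta_1, n_T + beta_0, size=100_000)
print(exact, samples.mean())              # both ~0.4375
```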