%matplotlib notebook
import math
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# 1 Some Monte Carlo integration examples

This notebook shows some examples of Monte Carlo integration which we’ll look at in class. Some are 1D integrals on the unit interval; others are 2D integrals over the unit hemisphere, which lead directly to algorithms we use for illumination.

If you are looking at the HTML export of this notebook, the source file is here: monte-carlo.ipynb and you can download it and run it in Jupyter Notebook if you have it. (If you’d like to get it, installing Anaconda is a quick way.)

## 1.1 Sampling and integration in 1D

Here is a function I might like to integrate.

def f(x):
    return 1 - np.sqrt(1 - x**4)

plt.figure()
xs = np.linspace(0, 1, 501)
plt.plot(xs, f(xs));

The integral of this function doesn’t have a closed form. Wolfram Alpha can integrate it in terms of the Elliptic Integral and gives the approximate value 0.125981 for the integral from 0 to 1.
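As a sanity check on that value, a simple deterministic quadrature (a midpoint-rule sketch, not part of the Monte Carlo method itself) agrees with Wolfram Alpha:

```python
import numpy as np

# Midpoint-rule quadrature of f over [0, 1] as a deterministic cross-check
# of the quoted value 0.125981.
f = lambda x: 1 - np.sqrt(1 - x**4)
n = 1_000_000
xs = (np.arange(n) + 0.5) / n   # midpoints of n equal subintervals
approx = np.mean(f(xs))
print(approx)  # ≈ 0.125981
```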

The simplest Monte Carlo approach is to use uniform samples over the desired integration interval.

def estimate_simple(f, N):
    """Integrate f from 0 to 1 using N uniform random samples."""
    xs = np.random.random_sample(N)
    return np.mean(f(xs))

I can try this estimator with different numbers of samples to see how the accuracy improves:

estimate_simple(f, 10000000) - 0.125981
-4.7454293878868015e-06

I’m going to package this up in a “sampling strategy” that has the function, the code to generate samples, and the pdf of the samples, organized into a standard 3-method interface.

strategy1 = {
    'eval': lambda x: 1 - np.sqrt(1 - x**4),
    'sample': lambda xi: xi,
    'pdf': lambda x: np.ones(x.shape)
}

def estimate(strategy, N):
    """Integrate a function using N independent samples drawn from a provided pdf."""
    xis = np.random.random_sample(N)
    xs = strategy['sample'](xis)
    return np.mean(strategy['eval'](xs) / strategy['pdf'](xs))

I can do the same experiment as before using this interface.

estimate(strategy1, 1000000) - 0.125981
-1.717783785970539e-05

Let’s more scientifically compute the standard deviation of this estimator as a function of the number of samples.

def stdev_of(fn, M):
    """Compute the standard deviation of M repeated calls to a function."""
    results = [fn() for i in range(M)]
    return np.std(results)

stdev_of(lambda: estimate(strategy1, 1000000), 100)
0.0001795481760075688
Ns = np.int32(np.logspace(2, 5, 7))
stdevs = [stdev_of(lambda: estimate(strategy1, N), 100) for N in Ns]
plt.figure()
plt.loglog(Ns, stdevs, '-')
plt.xlabel('# samples')
plt.ylabel('standard deviation');
plt.axis('equal');

That shows a very clear trend: the standard deviation is inversely proportional to the square root of the number of samples.

$\textstyle \sigma[G_N] = \sigma[g] \, N^{-1/2}$

$\textstyle \log \sigma[G_N] = \log \sigma[g] - \frac{1}{2} \log N$
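We can also verify the $\textstyle N^{-1/2}$ trend numerically by fitting a line to the log–log data and looking at its slope. (A self-contained sketch: the uniform-sampling estimator from above is inlined here.)

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_simple(N):
    # Uniform-sampling Monte Carlo estimate of the integral, as above.
    xs = rng.random(N)
    return np.mean(1 - np.sqrt(1 - xs**4))

Ns = np.int32(np.logspace(2, 5, 7))
stdevs = [np.std([estimate_simple(N) for _ in range(100)]) for N in Ns]

# Fit log(stdev) as a linear function of log(N); the slope should be near -1/2.
slope, _ = np.polyfit(np.log(Ns), np.log(stdevs), 1)
print(slope)
```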

Here is another sampling strategy, using the probability density $\textstyle p(x) = 5 x^4$.

strategy2 = {
    'eval': lambda x: 1 - np.sqrt(1 - x**4),
    'sample': lambda xi: xi ** (1/5),
    'pdf': lambda x: 5 * x**4
}

The sample method is using the inverse-cdf algorithm. The desired pdf is $\textstyle p(x) = 5 x^4$; the corresponding cumulative distribution function (cdf) is

$\textstyle P(x) = \int_0^x 5t^4 \, dt = x^5$

and the inverse cdf is

$\textstyle P^{-1}(\xi) = \xi^{1/5}$
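A quick numeric spot-check of this sampler (a supplementary sketch): for $\textstyle p(x) = 5x^4$ the analytic mean is $\textstyle \int_0^1 x \cdot 5x^4 \, dx = 5/6$, and the sample mean should match it.

```python
import numpy as np

# Draw samples via the inverse cdf x = ξ^(1/5) and compare the sample mean
# to the analytic mean 5/6 ≈ 0.8333 of the pdf p(x) = 5x^4.
rng = np.random.default_rng(0)
xs = rng.random(200_000) ** (1/5)
print(np.mean(xs))  # ≈ 5/6
```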

First let’s verify that this sampling method really generates the pdf that it claims to. The following function computes many samples drawn from sample and plots a histogram showing the density of those samples against the value returned by the pdf method:

def histcheck(strategy, N):
    xis = np.random.random_sample(N)
    xs = strategy['sample'](xis)
    ps, edges, patches = plt.hist(xs, math.ceil(math.sqrt(N)/10), density=True)
    plt.plot(edges, strategy['pdf'](edges), 'k')

plt.figure()
histcheck(strategy2, 100000)

Good, that shows that the pdf method really does report the probability with which the sample method generates a sample near a given value.

estimate(strategy2, 1000000) - 0.125981
-2.196604269441571e-05
stdev_of(lambda: estimate(strategy2, 1000000), 100)
1.993623186073966e-05

Now let’s compare the standard deviation of this estimator to the other one, for various numbers of samples.

Ns = np.int32(np.logspace(2, 5, 7))
stdevs1 = [stdev_of(lambda: estimate(strategy1, N), 100) for N in Ns]
stdevs2 = [stdev_of(lambda: estimate(strategy2, N), 100) for N in Ns]
plt.figure()
plt.loglog(Ns, stdevs1, Ns, stdevs2)
plt.xlabel('# samples')
plt.ylabel('standard deviation');
plt.axis('equal');

This shows that the second strategy outperforms the first one by a constant factor of about 10.
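One way to see why: the estimator averages $\textstyle f(x)/p(x)$, and with this pdf that ratio is nearly constant over the interval, so each sample contributes almost the same value and the variance is small. A quick check of the ratio (a supplementary sketch):

```python
import numpy as np

# The estimator averages f(x)/p(x); for p(x) = 5x^4 this ratio is nearly
# constant, which is why the variance is so much lower than with uniform
# sampling (where the "ratio" is just f(x) itself, varying from 0 to 1).
f = lambda x: 1 - np.sqrt(1 - x**4)
p = lambda x: 5 * x**4

xs = np.linspace(0.01, 1, 100)
ratio = f(xs) / p(xs)
print(ratio.min(), ratio.max())  # the ratio only varies between 0.1 and 0.2
```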

## 1.2 Sampling uniformly from the unit disc

To generate samples in the unit disc it is convenient to generate them in polar coordinates. Unfortunately, sampling $\textstyle r$ and $\textstyle \theta$ uniformly does not produce a uniform distribution over the disc:

def plot_circle(r, c='k'):
    plt.plot(r * np.cos(np.linspace(0, 2*np.pi, 121)),
             r * np.sin(np.linspace(0, 2*np.pi, 121)), c)

rs = np.random.random_sample(10000)
thetas = 2 * np.pi * np.random.random_sample(10000)
plt.figure()
plt.plot(rs * np.cos(thetas), rs * np.sin(thetas), '.', ms=1)
plt.axis('equal')
plot_circle(1)

These samples are clumped towards the middle because the radii are uniformly distributed, so there are roughly equal numbers of points with $\textstyle 0 < r < 0.1$ and $\textstyle 0.9 < r < 1$. But the areas on the plane corresponding to these two intervals in $\textstyle r$ are very different.

plot_circle(0.1)
plot_circle(0.9)

The area occupied by the interval between $\textstyle r$ and $\textstyle r+dr$ is proportional to $\textstyle r$, so to generate points uniformly we want $\textstyle p(r) \propto r$. The integral of this is $\textstyle \frac{1}{2}r^2$, so the normalized pdf is $\textstyle p(r) = 2r$ with the cdf $\textstyle P(r) = r^2$. This means if we have a random number $\textstyle \xi$ that’s uniform on the unit interval, we can compute $\textstyle r = \sqrt{\xi}$ and then $\textstyle r \sim p(r)$.
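A quick numeric check of the $\textstyle r = \sqrt{\xi}$ trick (a supplementary sketch): for $\textstyle p(r) = 2r$ the analytic mean is $\textstyle \int_0^1 r \cdot 2r \, dr = 2/3$, and the sample mean should match it.

```python
import numpy as np

# Samples r = sqrt(ξ) should be distributed with pdf p(r) = 2r, whose
# analytic mean is 2/3 ≈ 0.6667.
rng = np.random.default_rng(0)
rs = np.sqrt(rng.random(200_000))
print(np.mean(rs))  # ≈ 2/3
```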

rs = np.sqrt(np.random.random_sample(10000))
plt.figure()
plt.plot(rs * np.cos(thetas), rs * np.sin(thetas), '.', ms=1)
plt.axis('equal')
plot_circle(1)