
Stigler Diet – Notes on the Dirichlet Distribution and Dirichlet Process

In [3]:
%matplotlib inline

Note: I wrote this post in an IPython notebook. It might be rendered better on NBViewer.

Dirichlet Distribution?

The Dirichlet distribution (DD) can be considered a distribution of distributions. Each sample from the DD is a categorical distribution over K categories. It is parameterized by G0, a distribution over K categories, and α, a scale factor.

The expected value of the DD is G0. The variance of the DD is a function of the scale factor. When α is large, samples from DD(α⋅G0) will be very close to G0. When α is small, samples will vary more widely.

We demonstrate below by setting G0=[.2,.2,.6] and varying α from 0.1 to 1000. In each case, the mean of the samples is roughly G0, but the standard deviation decreases as α increases.

In [10]:
import numpy as np
from scipy.stats import dirichlet
np.set_printoptions(precision=2)

def stats(scale_factor, G0=[.2, .2, .6], N=10000):
    # Draw N samples from Dir(scale_factor * G0) and report their
    # element-wise mean and standard deviation.
    samples = dirichlet(alpha = scale_factor * np.array(G0)).rvs(N)
    print("                          alpha: %s" % scale_factor)
    print("              element-wise mean: %s" % samples.mean(axis=0))
    print("element-wise standard deviation: %s" % samples.std(axis=0))
    print()
    
    
for scale in [0.1, 1, 10, 100, 1000]:
    stats(scale)
                          alpha: 0.1
              element-wise mean: [ 0.2  0.2  0.6]
element-wise standard deviation: [ 0.38  0.38  0.47]

                          alpha: 1
              element-wise mean: [ 0.2  0.2  0.6]
element-wise standard deviation: [ 0.28  0.28  0.35]

                          alpha: 10
              element-wise mean: [ 0.2  0.2  0.6]
element-wise standard deviation: [ 0.12  0.12  0.15]

                          alpha: 100
              element-wise mean: [ 0.2  0.2  0.6]
element-wise standard deviation: [ 0.04  0.04  0.05]

                          alpha: 1000
              element-wise mean: [ 0.2  0.2  0.6]
element-wise standard deviation: [ 0.01  0.01  0.02]

Dirichlet Process?

The Dirichlet Process can be considered a way to generalize the Dirichlet distribution. While the Dirichlet distribution is parameterized by a discrete distribution G0 and generates samples that are similar discrete distributions, the Dirichlet process is parameterized by a generic distribution H0 and generates samples which are distributions similar to H0. The Dirichlet process also has a parameter α that determines how widely samples will vary from H0.

We can construct a sample H (recall that H is a probability distribution) from a Dirichlet process DP(αH0) by drawing a countably infinite number of samples θk from H0 and setting:

$$H = \sum_{k=1}^{\infty} \pi_k \cdot \delta(x - \theta_k)$$

where the πk are carefully chosen weights (more later) that sum to 1. (δ is the Dirac delta function.)

H, a sample from DP(αH0), is a probability distribution that looks similar to H0 (also a distribution). In particular, H is a discrete distribution that takes the value θk with probability πk. This sampled distribution H is a discrete distribution even if H0 has continuous support; the support of H is a countably infinite subset of the support of H0.

The weights (πk values) of a Dirichlet process sample relate the Dirichlet process back to the Dirichlet distribution.

Gregor Heinrich writes:

The defining property of the DP is that its samples have weights πk and locations θk distributed in such a way that when partitioning S(H) into finitely many arbitrary disjoint subsets S1, …, SJ, J < ∞, the sums of the weights πk in each of these J subsets are distributed according to a Dirichlet distribution that is parameterized by α and a discrete base distribution (like G0) whose weights are equal to the integrals of the base distribution H0 over the subsets Sn.

As an example, Heinrich imagines a DP with a standard normal base measure H0 = N(0,1). Let H be a sample from DP(αH0) and partition the real line (the support of a normal distribution) as S1 = (−∞, −1], S2 = (−1, 1], and S3 = (1, ∞); then

$$\big(H(S_1), H(S_2), H(S_3)\big) \sim \mathrm{Dir}\big(\alpha\,\Phi(-1),\ \alpha\,(\Phi(1) - \Phi(-1)),\ \alpha\,(1 - \Phi(1))\big)$$

where H(Sn) is the sum of the πk values whose θk lie in Sn, and Φ is the standard normal CDF.

These Sn subsets are chosen for convenience; however, similar results would hold for any choice of Sn. For any sample from a Dirichlet process, we can construct a sample from a Dirichlet distribution by partitioning the support of the sample into a finite number of bins.

There are several equivalent ways to choose the πk so that this property is satisfied: the Chinese restaurant process, the stick-breaking process, and the Pólya urn scheme.
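
The stick-breaking process is the construction used below. For intuition, the Chinese restaurant process can be sketched in a few lines: customer n joins an existing table with probability proportional to the number of customers already seated there and starts a new table with probability proportional to α; as more customers arrive, the table proportions behave like the πk weights of a Dirichlet process sample. A small illustrative sketch of this view:

In [ ]:
def chinese_restaurant_process(alpha, n_customers):
    # counts[k] is the number of customers seated at table k so far.
    counts = []
    for n in range(n_customers):
        # Existing table k is chosen with probability counts[k] / (n + alpha);
        # a new table is opened with probability alpha / (n + alpha).
        probs = np.array(counts + [alpha]) / (n + alpha)
        table = np.random.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)
        else:
            counts[table] += 1
    return np.array(counts) / float(n_customers)

print(chinese_restaurant_process(alpha=1.0, n_customers=1000))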

To generate {πk} according to a stick-breaking process, we define each βk to be an independent sample from Beta(1,α). π1 is equal to β1. Successive values are defined recursively as

$$\pi_k = \beta_k \prod_{j=1}^{k-1} (1 - \beta_j).$$

Thus, if we want to draw a sample from a Dirichlet process, we could, in theory, sample an infinite number of θk values from the base distribution H0 and an infinite number of βk values from the Beta distribution. Of course, sampling an infinite number of values is easier in theory than in practice.

However, by noting that the πk values are positive and sum to 1, we see that, in expectation, they must get increasingly small as k grows. Thus, we can reasonably approximate a sample H ∼ DP(αH0) by drawing only enough samples that the first K weights πk sum to approximately 1.

We use this method below to draw approximate samples from several Dirichlet processes with a standard normal (N(0,1)) base distribution but varying α values.

Recall that a single sample from a Dirichlet process is a probability distribution over a countably infinite subset of the support of the base measure.

The blue line is the PDF for a standard normal. The black lines represent the θk and πk values; θk is indicated by the position of the black line on the x-axis; πk is proportional to the height of each line.

We generate enough πk values so that their sum is greater than 0.99. When α is small, very few θk's will have corresponding πk values larger than 0.01. However, as α grows large, the sample becomes a more accurate (though still discrete) approximation of N(0,1).

In [13]:
import matplotlib.pyplot as plt
from scipy.stats import beta, norm

def dirichlet_sample_approximation(base_measure, alpha, tol=0.01):
    # Stick-breaking: draw beta_k ~ Beta(1, alpha) and set
    # pi_k = beta_k * prod_{j<k} (1 - beta_j), stopping once the pi_k
    # sum to within tol of 1.
    betas = []
    pis = []
    betas.append(beta(1, alpha).rvs())
    pis.append(betas[0])
    while sum(pis) < (1. - tol):
        s = np.sum([np.log(1 - b) for b in betas])  # log of the remaining stick length
        new_beta = beta(1, alpha).rvs()
        betas.append(new_beta)
        pis.append(new_beta * np.exp(s))
    pis = np.array(pis)
    # Each weight pi_k is paired with an atom theta_k drawn from the base measure.
    thetas = np.array([base_measure() for _ in pis])
    return pis, thetas

def plot_normal_dp_approximation(alpha):
    plt.figure()
    plt.title("Dirichlet Process Sample with N(0,1) Base Measure")
    plt.suptitle("alpha: %s" % alpha)
    pis, thetas = dirichlet_sample_approximation(lambda: norm().rvs(), alpha)
    # Rescale the weights purely for display so the tallest vertical line
    # matches the peak of the normal PDF.
    pis = pis * (norm.pdf(0) / pis.max())
    plt.vlines(thetas, 0, pis)
    X = np.linspace(-4, 4, 100)
    plt.plot(X, norm.pdf(X))

plot_normal_dp_approximation(.1)
plot_normal_dp_approximation(1)
plot_normal_dp_approximation(10)
plot_normal_dp_approximation(1000)
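
As a rough numerical check of the partition property described earlier, we can bin the weights of many approximate DP samples into S1 = (−∞, −1], S2 = (−1, 1], and S3 = (1, ∞) and compare their average to the base-measure masses of those intervals. The sketch below reuses dirichlet_sample_approximation; because the stick-breaking is truncated, the binned weights sum to slightly less than 1, so the agreement is only approximate.

In [ ]:
# Masses the N(0, 1) base measure assigns to the three intervals.
base_masses = np.array([norm.cdf(-1), norm.cdf(1) - norm.cdf(-1), 1 - norm.cdf(1)])

def partition_weights(pis, thetas):
    # Sum the pi_k whose atoms theta_k fall in each interval.
    return np.array([pis[thetas <= -1].sum(),
                     pis[(thetas > -1) & (thetas <= 1)].sum(),
                     pis[thetas > 1].sum()])

binned = np.array([partition_weights(*dirichlet_sample_approximation(lambda: norm().rvs(), 10.))
                   for _ in range(500)])
print("base-measure masses:    %s" % base_masses)
print("mean of binned weights: %s" % binned.mean(axis=0))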

Often we want to draw samples from a distribution sampled from a Dirichlet process instead of from the Dirichlet process itself. Much of the literature on the topic unhelpfully refers to this as sampling from a Dirichlet process.

Fortunately, we don't have to draw an infinite number of samples from the base distribution and stick breaking process to do this. Instead, we can draw these samples as they are needed.

Suppose we know a finite number of the θk and πk values for a sample H ∼ DP(αH0). For example, we know

π1 = 0.5, π2 = 0.3, θ1 = 0.1, θ2 = −0.5.

To sample from H, we can generate a uniform random number u between 0 and 1. If u is less than 0.5, our sample is 0.1. If 0.5 ≤ u < 0.8, our sample is −0.5. If u ≥ 0.8, our sample (from H) will be a new sample θ3 from H0. At the same time, we should also sample and store π3. When we draw our next sample, we will again draw u ∼ Uniform(0,1) but will compare it against π1, π2, AND π3.

The class below will take a base distribution H0 and α as arguments to its constructor. The class instance can then be called to generate samples from H ∼ DP(αH0).

In [20]:
from numpy.random import choice

class DirichletProcessSample():
    def __init__(self, base_measure, alpha):
        self.base_measure = base_measure
        self.alpha = alpha

        self.cache = []             # atoms theta_k drawn from the base measure so far
        self.weights = []           # stick pieces pi_k allocated so far
        self.total_stick_used = 0.

    def __call__(self):
        remaining = 1.0 - self.total_stick_used
        # Return an existing atom with probability pi_k, or draw a new one
        # with probability equal to the unallocated stick length.
        i = DirichletProcessSample.roll_die(self.weights + [remaining])
        if i is not None and i < len(self.weights):
            return self.cache[i]
        else:
            # Break off a new piece of the stick and pair it with a fresh
            # draw from the base measure.
            stick_piece = beta(1, self.alpha).rvs() * remaining
            self.total_stick_used += stick_piece
            self.weights.append(stick_piece)
            new_value = self.base_measure()
            self.cache.append(new_value)
            return new_value

    @staticmethod
    def roll_die(weights):
        if weights:
            return choice(range(len(weights)), p=weights)
        else:
            return None

This Dirichlet process class could be called stochastic memoization. This idea was first articulated in somewhat abstruse terms by Daniel Roy, et al.

Below are histograms of 10000 samples drawn from distributions that were themselves drawn from Dirichlet processes with a standard normal base distribution and varying α values.

In [22]:
import pandas as pd

base_measure = lambda: norm().rvs()
n_samples = 10000
samples = {}
for alpha in [1, 10, 100, 1000]:
    dirichlet_norm = DirichletProcessSample(base_measure=base_measure, alpha=alpha)
    samples["Alpha: %s" % alpha] = [dirichlet_norm() for _ in range(n_samples)]

_ = pd.DataFrame(samples).hist()

Note that these histograms look very similar to the corresponding plots of sampled distributions above. However, these histograms show points sampled from a distribution sampled from a Dirichlet process, while the plots above showed approximate distributions sampled from the Dirichlet process. Of course, as the number of samples from each H grows large, we would expect the histogram to be a very good empirical approximation of H.
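
As a quick sanity check of that claim, we can compare the empirical frequency of each distinct value drawn from a single sampled H with the stick weight the sampler has allocated to that atom. The cell below is a small sketch using the class and variables defined above; the frequencies and weights should roughly agree for the heavily weighted atoms.

In [ ]:
dp = DirichletProcessSample(base_measure=base_measure, alpha=10)
draws = pd.Series([dp() for _ in range(n_samples)])
comparison = pd.DataFrame({
    "empirical frequency": draws.value_counts(normalize=True),
    "allocated stick weight": pd.Series(dp.weights, index=dp.cache),
}).sort_values("allocated stick weight", ascending=False)
print(comparison.head())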

In a future post, I will look at how this DirichletProcessSample class can be used to draw samples from a hierarchical Dirichlet process.

