Computing the entropy of a distribution, numerically

The informativeness of a prior distribution is linked to its entropy: the larger the entropy, the less informative the prior distribution. The (differential) entropy of a univariate continuous distribution with probability density function \(p(x)\) is defined as:

\[ H(p) = -\int_{-\infty}^{\infty} p(x) \log(p(x)) dx \]
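For reference, a Gaussian distribution with variance \(\sigma^2\) has the well-known closed-form entropy

\[ H = \frac{1}{2} \log(2 \pi e \sigma^2), \]

which we will use below as a sanity check on the numerical result.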

Here, we will compute the entropy of continuous univariate distributions numerically to compare how informative they are relative to each other. Let us start by defining a standard Gaussian distribution x:

from cuqi.distribution import Gaussian
import numpy as np

# Standard Gaussian with mean 0 and variance 1
x = Gaussian(0, 1)

Let us define a lambda function for the entropy integrand \(p(x)\log(p(x))\); the leading minus sign is applied after the integration:

entropy_integrand = lambda dist, val: dist.pdf(val)*dist.logd(val)
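
As a quick illustrative check (not part of the original walkthrough), the integrand can be evaluated at a single point before integrating; for the standard Gaussian, \(p(0)\log(p(0)) = \frac{1}{\sqrt{2\pi}}\left(-\frac{1}{2}\log(2\pi)\right) \approx -0.367\):

# Evaluate the integrand at a single point; for x = Gaussian(0, 1)
# this is approximately -0.367
print(entropy_integrand(x, 0.0))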

To compute the entropy, we can use scipy’s quad function to integrate the entropy integrand over the support of the distribution:

import scipy.integrate as sp_integrate

# Integrate the integrand over the whole real line and negate the result
x_entropy = -sp_integrate.quad(lambda val: entropy_integrand(x, val),
                               -np.inf, np.inf)[0]
print("Entropy of x: ")
print(x_entropy)
Entropy of x: 
1.4189385332046731
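
This matches the closed-form value \(\frac{1}{2}\log(2\pi e) \approx 1.4189\) for a unit-variance Gaussian, which can be verified directly:

# Closed-form entropy of a Gaussian with variance 1: 0.5*log(2*pi*e)
print(0.5*np.log(2*np.pi*np.e))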

Exercise:

  1. Compute the entropy of the distribution y = Gaussian(1, 0.1) using the same method as above. What is the effect of changing the variance of the distribution on the entropy?

  2. Similarly, compute the entropy of the distribution z = Uniform(-3, 3) and compare it with the entropy of the two Gaussian distributions. Hint: use -3 and 3 as the limits of integration.

  3. Which of the three distributions has the highest entropy? (One possible sketch for these exercises is shown after the code cell below.)

# your code here
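
Below is one possible sketch for these exercises. It reuses entropy_integrand, sp_integrate and np from above, and assumes that Uniform can be imported from cuqi.distribution with the lower and upper bounds as its two arguments, and that the second argument of Gaussian is the (co)variance; adjust if your CUQIpy version differs.

from cuqi.distribution import Gaussian, Uniform

# Exercise 1: a narrower Gaussian (mean 1, variance 0.1)
y = Gaussian(1, 0.1)
y_entropy = -sp_integrate.quad(lambda val: entropy_integrand(y, val),
                               -np.inf, np.inf)[0]
print("Entropy of y: ", y_entropy)

# Exercise 2: a uniform distribution on [-3, 3]; integrate only over its support
z = Uniform(-3, 3)
z_entropy = -sp_integrate.quad(lambda val: entropy_integrand(z, val),
                               -3, 3)[0]
print("Entropy of z: ", z_entropy)

# Exercise 3: analytically, the narrow Gaussian has entropy 0.5*log(2*pi*e*0.1) ≈ 0.27,
# while the uniform on [-3, 3] has entropy log(6) ≈ 1.79, the highest of the three.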