Here we demonstrate how to compute the entropy of a prior distribution numerically. The informativeness of a prior distribution is linked to its entropy: the larger the entropy, the less informative the prior distribution is. The (differential) entropy of a univariate continuous distribution with probability density function $p(x)$ is defined as

$$h(p) = -\int_{-\infty}^{\infty} p(x)\,\log p(x)\,dx.$$
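For reference, the Gaussian case has a well-known closed form, which we use below as a sanity check:

$$h\big(\mathcal{N}(\mu,\sigma^2)\big) = \frac{1}{2}\log\!\left(2\pi e\,\sigma^2\right).$$

This grows with $\sigma$, illustrating the link above: a wider, less informative Gaussian has a larger entropy.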
In the following, we compute the entropy of continuous univariate distributions numerically so that their informativeness can be compared. Let us first define a standard Gaussian distribution x:
from cuqi.distribution import Gaussian
import numpy as np

x = Gaussian(0, 1)

Let us define a lambda function for the entropy integrand $p(x)\log p(x)$:

entropy_integrand = lambda dist, val: dist.pdf(val)*dist.logd(val)

To compute the entropy, we can use SciPy's quad function to integrate the entropy integrand over the support of the distribution and negate the result:
import scipy.integrate as sp_integrate

x_entropy = -1*sp_integrate.quad(lambda val: entropy_integrand(x, val),
                                 -np.inf, np.inf)[0]
print("Entropy of x: ")
print(x_entropy)

Entropy of x:
1.4189385332046731
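As a sanity check, the numerical value can be compared against the closed form above: for the standard Gaussian, $\frac{1}{2}\log(2\pi e) \approx 1.4189$, matching the result. The sketch below also repeats the computation for a wider Gaussian (taking the second argument of Gaussian to be the variance, with y as a hypothetical example name) to confirm that a wider prior has a larger entropy and is thus less informative:

# Analytic entropy of the standard Gaussian, for comparison
print(0.5*np.log(2*np.pi*np.e))  # approximately 1.4189

# Hypothetical wider prior: mean 0, variance 4
y = Gaussian(0, 4)
y_entropy = -1*sp_integrate.quad(lambda val: entropy_integrand(y, val),
                                 -np.inf, np.inf)[0]
print("Entropy of y: ")
print(y_entropy)  # about 2.1121, i.e. 1.4189 + log(2): larger, hence less informative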