How to define a Posterior distribution#
The recommended way to define a posterior distribution in CUQIpy is to use the
JointDistribution
class to define the joint
distribution of the parameters and the data and then condition on observed
data to obtain the posterior distribution as shown in the examples below.
import cuqi
import numpy as np
A simple Bayesian inverse problem#
Consider a deconvolution inverse problem given by

\[\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e},\]

where \(\mathbf{A}\) is the convolution operator, \(\mathbf{x}\) is the unknown signal, \(\mathbf{y}\) is the observed data, and \(\mathbf{e}\) is additive noise. See Deconvolution1D
for more details.
A, y_obs, _ = cuqi.testproblem.Deconvolution1D().get_components()
Then consider the following Bayesian model

\[\begin{aligned}
\mathbf{x} &\sim \mathcal{N}(\mathbf{0},\, 0.1\,\mathbf{I}),\\
\mathbf{y} &\sim \mathcal{N}(\mathbf{A}\mathbf{x},\, 0.05^2\,\mathbf{I}),
\end{aligned}\]

which can be written in CUQIpy as
x = cuqi.distribution.Gaussian(np.zeros(A.domain_dim), 0.1)
y = cuqi.distribution.Gaussian(A(x), 0.05**2)
The joint distribution \(p(\mathbf{x}, \mathbf{y})\) is then obtained by
joint = cuqi.distribution.JointDistribution(x, y)
print(joint)
JointDistribution(
Equation:
p(x,y) = p(x)p(y|x)
Densities:
x ~ CUQI Gaussian.
y ~ CUQI Gaussian. Conditioning variables ['x'].
)
The posterior \(p(\mathbf{x}|\mathbf{y}=\mathbf{y}^\mathrm{obs})\) is obtained by conditioning on the observed data as follows.
posterior = joint(y=y_obs)
print(posterior)
Posterior(
Equation:
p(x|y) ∝ L(x|y)p(x)
Densities:
y ~ CUQI Gaussian Likelihood function. Parameters ['x'].
x ~ CUQI Gaussian.
)
Evaluating the posterior log density is then as simple as
posterior.logd(np.ones(A.domain_dim))
array([-21527.9713636])
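Conceptually, conditioning the joint on \(\mathbf{y}=\mathbf{y}^\mathrm{obs}\) turns the conditional density \(p(\mathbf{y}|\mathbf{x})\) into a likelihood, and the unnormalized posterior log-density is just the sum of the log-prior and log-likelihood terms. The following is a minimal pure-NumPy sketch of that sum (it does not use CUQIpy; the 2×2 operator and values are made up for illustration):

```python
import numpy as np

def gauss_logpdf(v, mean, var):
    # Log-density of an isotropic Gaussian with variance `var`,
    # summed over components.
    v = np.atleast_1d(v).astype(float)
    mean = np.broadcast_to(mean, v.shape)
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var))

# Tiny analogue of the model above: x ~ N(0, 0.1 I), y ~ N(Ax, 0.05^2 I)
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
x = np.array([0.3, -0.2])
y_obs = np.array([0.25, -0.15])

log_prior = gauss_logpdf(x, 0.0, 0.1)
log_like = gauss_logpdf(y_obs, A @ x, 0.05**2)

# Up to the constant log p(y_obs), this is the posterior log-density
# that posterior.logd evaluates.
log_post_unnorm = log_prior + log_like
```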
Posterior with two forward models#
Suppose we had two forward models \(\mathbf{A}\) and \(\mathbf{B}\):
# Both observations come from the same unknown x
A, y_obs, _ = cuqi.testproblem.Deconvolution1D().get_components()
B, d_obs, _ = cuqi.testproblem.Deconvolution1D(PSF="Defocus", noise_std=0.02).get_components()
Then consider the following Bayesian model

\[\begin{aligned}
\mathbf{x} &\sim \mathcal{N}(\mathbf{0},\, 0.1\,\mathbf{I}),\\
\mathbf{y} &\sim \mathcal{N}(\mathbf{A}\mathbf{x},\, 0.05^2\,\mathbf{I}),\\
\mathbf{d} &\sim \mathcal{N}(\mathbf{B}\mathbf{x},\, 0.01^2\,\mathbf{I}),
\end{aligned}\]
x = cuqi.distribution.Gaussian(np.zeros(A.domain_dim), 0.1)
y = cuqi.distribution.Gaussian(A(x), 0.05**2)
d = cuqi.distribution.Gaussian(B(x), 0.01**2)
The joint distribution \(p(\mathbf{x}, \mathbf{y}, \mathbf{d})\) is then obtained by
joint2 = cuqi.distribution.JointDistribution(x, y, d)
print(joint2)
JointDistribution(
Equation:
p(x,y,d) = p(x)p(y|x)p(d|x)
Densities:
x ~ CUQI Gaussian.
y ~ CUQI Gaussian. Conditioning variables ['x'].
d ~ CUQI Gaussian. Conditioning variables ['x'].
)
The posterior \(p(\mathbf{x}|\mathbf{y}=\mathbf{y}^\mathrm{obs},\mathbf{d}=\mathbf{d}^\mathrm{obs})\) is obtained by conditioning on the observed data as follows.
posterior2 = joint2(y=y_obs, d=d_obs)
print(posterior2)
MultipleLikelihoodPosterior(
Equation:
p(x|y,d) ∝ p(x)L(x|y)L(x|d)
Densities:
x ~ CUQI Gaussian.
y ~ CUQI Gaussian Likelihood function. Parameters ['x'].
d ~ CUQI Gaussian Likelihood function. Parameters ['x'].
)
Evaluating the posterior log density is then as simple as
posterior2.logd(np.ones(A.domain_dim))
array([-565491.2570644])
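The reason two forward models pose no difficulty is that each data set contributes its own likelihood term, and log-densities simply add. A pure-NumPy sketch of this additivity, with small random stand-ins for \(\mathbf{A}\) and \(\mathbf{B}\) (illustrative only, not CUQIpy code):

```python
import numpy as np

def gauss_logpdf(v, mean, var):
    # Isotropic Gaussian log-density, summed over components.
    v = np.atleast_1d(v).astype(float)
    mean = np.broadcast_to(mean, v.shape)
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var))

rng = np.random.default_rng(0)

# Two forward models observing the same unknown x:
A = rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2))
x = np.array([0.5, -1.0])
y_obs = A @ x + 0.05 * rng.standard_normal(3)
d_obs = B @ x + 0.01 * rng.standard_normal(3)

# Posterior log-density: one prior term plus one likelihood term per data set.
log_post = (gauss_logpdf(x, 0.0, 0.1)
            + gauss_logpdf(y_obs, A @ x, 0.05**2)
            + gauss_logpdf(d_obs, B @ x, 0.01**2))
```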
Arbitrarily complex posterior distributions#
The JointDistribution
class can be used to
construct arbitrarily complex posterior distributions. For example, suppose
we have the following three forward models

\[\mathbf{y} = \mathbf{A}\mathbf{x}, \qquad \mathbf{d} = \mathbf{B}\mathbf{x}, \qquad b = C(\mathbf{x}) = \|\mathbf{x}\|_2^2,\]

where \(C\) is a nonlinear function.
# Same x for all 3 observations
A, y_obs, _ = cuqi.testproblem.Deconvolution1D().get_components()
B, d_obs, _ = cuqi.testproblem.Deconvolution1D(PSF="Defocus", noise_std=0.02).get_components()
C = cuqi.model.Model(lambda x: np.linalg.norm(x)**2, 1, A.domain_dim)
b_obs = 16
Then consider the following Bayesian model

\[\begin{aligned}
q &\sim \mathrm{Uniform}(0.1,\, 10),\\
l &\sim \mathrm{Gamma}(1,\, 1),\\
s &\sim \mathrm{Gamma}(1,\, 10^{-2}),\\
\mathbf{x} &\sim \mathcal{N}(\mathbf{0},\, l^{-1}\mathbf{I}),\\
\mathbf{y} &\sim \mathcal{N}(\mathbf{A}\mathbf{x},\, s^{-1}\mathbf{I}),\\
\mathbf{d} &\sim \mathcal{N}(\mathbf{B}\mathbf{x},\, 0.01^2\,\mathbf{I}),\\
b &\sim \mathrm{Laplace}(C(\mathbf{x}),\, q),
\end{aligned}\]
q = cuqi.distribution.Uniform(0.1, 10)
l = cuqi.distribution.Gamma(1, 1)
s = cuqi.distribution.Gamma(1, 1e-2)
x = cuqi.distribution.Gaussian(np.zeros(A.domain_dim), lambda l: 1/l)
y = cuqi.distribution.Gaussian(A(x), lambda s: 1/s)
d = cuqi.distribution.Gaussian(B(x), 0.01**2)
b = cuqi.distribution.Laplace(C(x), lambda q: q)
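Note the callables in the model above: passing, say, `lambda l: 1/l` as the covariance of `x` is what creates the conditional dependence \(p(\mathbf{x}|l)\), with the conditioning variable read off from the callable's argument name. The helper below is an illustrative sketch of that idea (it is not part of the CUQIpy API) using standard-library introspection:

```python
import inspect

def conditioning_vars(spec):
    # Illustrative helper: if a distribution parameter is a callable,
    # treat its argument names as the variables the distribution is
    # conditioned on; a fixed numeric value conditions on nothing.
    if callable(spec):
        return list(inspect.signature(spec).parameters)
    return []

print(conditioning_vars(lambda l: 1 / l))  # x's covariance depends on l
print(conditioning_vars(lambda s: 1 / s))  # y's covariance depends on s
print(conditioning_vars(0.01**2))          # d's covariance is fixed
```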
The joint distribution \(p(q, l, s, \mathbf{x}, \mathbf{y}, \mathbf{d}, \mathbf{b})\) is then obtained by
joint3 = cuqi.distribution.JointDistribution(q, l, s, x, y, d, b)
print(joint3)
JointDistribution(
Equation:
p(q,l,s,x,y,d,b) = p(q)p(l)p(s)p(x|l)p(y|s,x)p(d|x)p(b|q,x)
Densities:
q ~ CUQI Uniform.
l ~ CUQI Gamma.
s ~ CUQI Gamma.
x ~ CUQI Gaussian. Conditioning variables ['l'].
y ~ CUQI Gaussian. Conditioning variables ['x', 's'].
d ~ CUQI Gaussian. Conditioning variables ['x'].
b ~ CUQI Laplace. Conditioning variables ['x', 'q'].
)
The posterior \(p(q, l, s, \mathbf{x}|\mathbf{y}=\mathbf{y}^\mathrm{obs},\mathbf{d}=\mathbf{d}^\mathrm{obs},\mathbf{b}=\mathbf{b}^\mathrm{obs})\) is obtained by conditioning on the observed data as follows.
posterior3 = joint3(y=y_obs, d=d_obs, b=b_obs)
print(posterior3)
JointDistribution(
Equation:
p(q,l,s,x|y,d,b) ∝ p(q)p(l)p(s)p(x|l)L(s,x|y)L(x|d)L(q,x|b)
Densities:
q ~ CUQI Uniform.
l ~ CUQI Gamma.
s ~ CUQI Gamma.
x ~ CUQI Gaussian. Conditioning variables ['l'].
y ~ CUQI Gaussian Likelihood function. Parameters ['x', 's'].
d ~ CUQI Gaussian Likelihood function. Parameters ['x'].
b ~ CUQI Laplace Likelihood function. Parameters ['x', 'q'].
)
Evaluating the posterior log density jointly over \(q\), \(l\), \(s\), and \(\mathbf{x}\) is then as simple as
posterior3.logd(q=1, l=1, s=1, x=np.ones(A.domain_dim))
array([-541824.04845684])
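The same additivity seen earlier carries over to the hierarchical case: the posterior log-density is the sum of one term per density in the factorization printed above. A reduced pure-NumPy sketch with just the \(l\), \(s\), \(\mathbf{x}\), and \(\mathbf{y}\) terms (made-up 2-dimensional data; a shape/rate Gamma parameterization is assumed for illustration):

```python
import numpy as np
from math import lgamma

def gauss_logpdf(v, mean, var):
    # Isotropic Gaussian log-density, summed over components.
    v = np.atleast_1d(v).astype(float)
    mean = np.broadcast_to(mean, v.shape)
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var))

def gamma_logpdf(v, shape, rate):
    # Gamma log-density in shape/rate form:
    # log[ rate^shape * v^(shape-1) * exp(-rate*v) / Gamma(shape) ]
    return shape * np.log(rate) + (shape - 1) * np.log(v) - rate * v - lgamma(shape)

# Toy hierarchical evaluation (the real example uses the deconvolution
# operators and full-size data):
A = np.array([[1.0, 0.2],
              [0.0, 1.0]])
x = np.array([0.4, -0.1])
y_obs = np.array([0.35, -0.05])
l, s = 2.0, 50.0

log_post = (gamma_logpdf(l, 1, 1)                 # prior p(l)
            + gamma_logpdf(s, 1, 1e-2)            # prior p(s)
            + gauss_logpdf(x, 0.0, 1 / l)         # p(x | l)
            + gauss_logpdf(y_obs, A @ x, 1 / s))  # likelihood L(s, x | y)
```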