{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "\n", "# Gibbs sampling\n", "\n", "In this notebook, we show how to use CUQIpy to sample hierarchical Bayesian models using Gibbs sampling.\n", "\n", "Gibbs sampling is a Markov chain Monte Carlo (MCMC) method for sampling a joint probability distribution of multiple random variables.\n", "Instead of sampling all variables simultaneously, Gibbs sampling samples the variables sequentially. The sampling of each variable is achieved by sampling from the conditional distribution of that variable given (fixed, previously sampled) values of the other variables.\n", "\n", "Gibbs sampling is often an efficient way of sampling from a joint distribution if the conditional distributions are easy to sample from. On the other hand, if the conditional distributions are highly correlated and/or are difficult to sample from, then Gibbs sampling can be very inefficient. For these reasons, Gibbs sampling is often a double-edged sword that needs to be used in the right context.\n", "\n", "## Learning objectives\n", "\n", "Going through this notebook you will see how to:\n", "\n", "- Describe the basic idea of Gibbs sampling.\n", "- Define a hierarchical Bayesian model using CUQIpy.\n", "- Define a Gibbs sampling scheme in CUQIpy.\n", "- Run the Gibbs sampler and analyze the results.\n", "\n", "## Table of contents\n", "\n", "1. [The basics of Gibbs sampling](#basics)\n", "2. [Gibbs for inverse problems](#inverse)\n", "3. [Exploring other priors](#exploring-other-priors)\n", "4. [Connection to BayesianProblem class](#BayesianProblem)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " ⚠️ Note: This notebook uses MCMC samplers from the new cuqi.experimental.mcmc module, which are expected to become the default soon. Check out the documentation for more details.\n", "
" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "We start by importing the necessary modules. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "import cuqi\n", "from cuqi.testproblem import Deconvolution1D\n", "from cuqi.distribution import Gaussian, Gamma, JointDistribution, GMRF, LMRF\n", "from cuqi.experimental.mcmc import HybridGibbs, LinearRTO, Conjugate, UGLA, ConjugateApprox, Direct\n", "from cuqi.problem import BayesianProblem" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# 1. The basics of Gibbs sampling \n", "\n", "To begin, let us consider a very simple Bayesian problem with two random variables, $d$ and $x$.\n", "\n", "$$\n", "\\begin{align*}\n", "d &\\sim \\mathrm{Gamma}(1, 1)\\\\\n", "x &\\sim \\mathrm{Gaussian}(0, d^{-1}).\n", "\\end{align*}\n", "$$\n", "\n", "As we have already seen in the previous notebooks, we can define the distributions in CUQIpy as follows:\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "d = Gamma(1, 1)\n", "x = Gaussian(0, cov=lambda d: 1/d)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Our goal is to sample from the joint distribution $p(d, x)$. We can think of this as our target distribution. \n", "\n", "**Note** In this contrived example, we can actually sample from the joint distribution directly. However, we will use this example to illustrate the basic idea of Gibbs sampling.\n", "\n", "We define the joint distribution in CUQIpy as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "target = JointDistribution(x, d)\n", "\n", "print(target)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "One idea for sampling $p(d,x)$ is to define the parameter $\\boldsymbol{\\theta} = [d; x]$ and sample from distribution $p(\\boldsymbol{\\theta})$ at the same time. However, this is only possible for simple models, where the joint distribution is easy to sample from.\n", "\n", "Gibbs sampling is an alternative approach, where we iteratively sample each variable from its conditional distribution, given the values of all other variables in the distribution. For this task with $p(d,x)$, we would sample from $p(d|x)$ and $p(x|d)$ iteratively. We can summarize this procedure as repeating the following steps:\n", "\n", "$$\n", "\\begin{align*}\n", "\\text{1. draw } d^{(i+1)} &\\sim p(d \\mid x^{(i)})\\\\\n", "\\text{2. draw } x^{(i+1)} &\\sim p(x \\mid d^{(i+1)})\n", "\\end{align*}\n", "$$\n", "\n", "where $x^{(i)}$ and $d^{(i)}$ are the values of $x$ and $d$ at iteration $i$. It can be shown that samples drawn from the above scheme will be distributed according to $p(d, x)$." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 1.1 Defining a sampling strategy\n", "\n", "When the conditional distributions are easy to sample from, Gibbs sampling can be very efficient. In the example above, we aim to sample from the following conditional distributions:\n", "\n", "1. Sample $x^{(i+1)}$ from $p(x \\mid d^{(i)})$,\n", "2. 
, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 1.1 Defining a sampling strategy\n", "\n", "When the conditional distributions are easy to sample from, Gibbs sampling can be very efficient. In the example above, we aim to sample from the following conditional distributions:\n", "\n", "1. Sample $x^{(i+1)}$ from $p(x \\mid d^{(i)})$,\n", "2. Sample $d^{(i+1)}$ from $p(d \\mid x^{(i+1)})\\propto L(d\\mid x^{(i+1)})p(d)$,\n", "\n", "where $L(d\\mid x):=p(x\\mid d)$ is the likelihood function with respect to $d$.\n", "\n", "**For step 1:** Note that we already know that $p(x\\mid d)=\\mathrm{Gaussian}(0, d^{-1})$. Hence, we can directly sample from $x \\mid d$ by sampling from $\\mathrm{Gaussian}(0, d^{-1})$.\n", "\n", "**For step 2:** For $p(d\\mid x)$ we could use any MCMC sampler. However, note that the Gaussian and Gamma distributions form a conjugate pair, so we can recognize $p(d\\mid x)$ as a Gamma distribution:\n", "\n", "$$\n", "\\begin{align*}\n", "p(d\\mid x) &\\propto L(d\\mid x)p(d)\\\\\n", "&\\propto d^{1/2}\\exp\\left(-\\frac{d}{2}x^2\\right)d^{\\alpha-1}\\exp(-\\beta d)\\\\\n", "&\\propto \\mathrm{Gamma}\\left(\\alpha+\\frac{1}{2}, \\beta+\\frac{1}{2}x^2\\right)\n", "\\end{align*}\n", "$$\n", "\n", "where the factor $d^{1/2}$ comes from the normalization constant of the Gaussian likelihood $L(d\\mid x)$, and $\\alpha=1$ and $\\beta=1$ are the parameters of the Gamma prior for $d$.\n", "\n", "Instead of manually deriving the Gamma distribution for the conjugate posterior, CUQIpy provides a `Conjugate` sampler, which can be used to automatically sample from any (supported) conjugate pair. Similarly, we can use the `Direct` sampler to sample directly from the Gaussian distribution.\n", "\n", "*The interface and design of the `Conjugate` and `Direct` samplers are still under development and may change in the future.*\n", "\n", "In summary, we define a Gibbs sampling scheme in CUQIpy as follows:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# Define the sampling strategy\n", "sampling_strategy = {\n", "    'x' : Direct(),\n", "    'd' : Conjugate(),\n", "}" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can then define the Gibbs sampler and sample from the joint distribution as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "scroll-output" ] }, "outputs": [], "source": [ "# Gibbs sampler\n", "sampler = HybridGibbs(target, sampling_strategy)\n", "\n", "# Sample\n", "sampler.warmup(1000)\n", "sampler.sample(1000)\n", "\n", "# Get samples removing the burn-in\n", "samples = sampler.get_samples().burnthin(1000)\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can then visualize, for example, the trace plots:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "samples[\"x\"].plot_trace()\n", "samples[\"d\"].plot_trace()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Gibbs for inverse problems <a id=\"inverse\"></a>\n", "\n", "The example above was deliberately simple, and for such simple problems more efficient sampling methods usually exist. In the following, we instead focus on inverse problems, where Gibbs sampling proves useful because efficient samplers are not readily available." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 2.1 Deterministic forward model and data\n", "Consider a Bayesian inverse problem\n", "\n", "$$\n", "\\mathbf{y}=\\mathbf{A}\\mathbf{x},\n", "$$\n", "\n", "where $\\mathbf{A}: \\mathbb{R}^n \\to \\mathbb{R}^m$ is the forward model of the inverse problem and $\\mathbf{y}$ and $\\mathbf{x}$ are random variables for the data and the unknown parameter, respectively.\n", "\n", "In this case, we assume that the forward model is a convolution operator. We can load this example from the testproblem library of CUQIpy and visualize the true solution (the sharp signal) and the data (the convolved signal)." ] }
, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Model and data\n", "A, y_data, probinfo = Deconvolution1D(phantom='sinc', noise_std=0.005, PSF_param=6).get_components()\n", "\n", "# Get dimension of signal\n", "n = A.domain_dim\n", "\n", "# Plot exact solution and observed data\n", "plt.subplot(121)\n", "probinfo.exactSolution.plot()\n", "plt.title('Exact solution')\n", "\n", "plt.subplot(122)\n", "y_data.plot()\n", "plt.title(\"Observed data\")\n", "\n", "# Print problem information\n", "print(probinfo)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2 Hierarchical Bayesian model\n", "\n", "We are now going to define a hierarchical Bayesian model for the inverse problem.\n", "\n", "First, we model the prior distribution of the parameter $\\mathbf{x}$ as a Gaussian Markov random field (GMRF), i.e.\n", "\n", "$$\n", "\\begin{align*}\n", "\\mathbf{x} \\mid d &\\sim \\mathrm{GMRF}(\\mathbf{0}, d).\n", "\\end{align*}\n", "$$\n", "\n", "This is a built-in distribution in CUQIpy, which models the differences between neighboring elements as Gaussian; see [GMRF](https://cuqi-dtu.github.io/CUQIpy/api/_autosummary/cuqi.distribution/cuqi.distribution.GMRF.html#cuqi.distribution.GMRF) for more details.\n", "\n", "We do not know a good choice of precision parameter $d$ for this prior, so we make use of `lambda` (anonymous) functions to make $d$ a conditioning variable of the distribution. The distribution is defined in CUQIpy as follows.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define prior\n", "x = GMRF(np.zeros(n), lambda d: d)\n", "\n", "print(x)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0f3aef5f", "metadata": {}, "source": [ "#### ★ Try yourself (optional): \n", "\n", "Try computing some samples from the GMRF distribution and visualizing them. You can use the `sample` method of the `GMRF` distribution and the `plot` method of the obtained samples.\n", "\n", "Hint: Since $d$ is a conditioning variable, you need to specify the value of $d$ when sampling." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# Your code here\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The observed data is corrupted by additive Gaussian noise, so for the random variable $\\mathbf{y}$ we have\n", "\n", "$$\\mathbf{y} \\mid \\mathbf{x}, s \\sim \\mathrm{Gaussian}(\\mathbf{A} \\mathbf{x}, s^{-1}\\mathbf{I}_m),$$\n", "\n", "where, for this example, we pretend that the noise precision $s$ is also unknown.\n", "\n", "Similar to the prior distribution, we can define this distribution as follows." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Data distribution (likelihood)\n", "y = Gaussian(A@x, cov=lambda s: 1/s)\n", "\n", "print(y)" ] }
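, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick aside (our own addition), note that $\\mathbf{x}$ and $s$ are conditioning variables of $\\mathbf{y}$: conditioning on specific values gives a fully specified Gaussian that we can sample from, e.g. to simulate a synthetic data set. A minimal sketch, assuming CUQIpy's usual conditioning-by-call syntax:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Condition the data distribution on the exact solution and the true\n", "# noise precision (1/0.005^2), then draw one synthetic realization.\n", "y_sim = y(x=probinfo.exactSolution, s=1/0.005**2).sample()\n", "y_sim.plot()\n", "plt.title('Simulated data from the data distribution');" ] }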
, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "To model the two *hyperparameters* $d$ and $s$, we can use weakly informative `Gamma` distributions, i.e. distributions that allow a wide range of possible values:\n", "\n", "$$\n", "\\begin{align*}\n", "    d &\\sim \\mathrm{Gamma}(1, 10^{-4}) \\\\\n", "    s &\\sim \\mathrm{Gamma}(1, 10^{-4})\n", "\\end{align*}\n", "$$\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "d = Gamma(1, 1e-4)\n", "s = Gamma(1, 1e-4)\n", "\n", "print(d)\n", "print(s)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "To see that the Gamma hyperprior is weakly informative, we can visualize its pdf. Notice that the pdf is mostly flat for values from $0$ to $10^4$. This means that the Gamma distribution is fairly uninformative over this range and assigns roughly equal probability to all values in it.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grid = np.linspace(1e-4, 1e+4, 1000)\n", "plt.plot(grid, [d.pdf(val) for val in grid])\n", "plt.ylim(0, 2*1e-4)\n", "plt.title(\"PDF of Gamma(1, 1e-4)\");" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "In total, we can summarize the hierarchical Bayesian problem as follows (without explicitly writing the conditioning variables):\n", "\n", "$$\n", "\\begin{align*}\n", "    d &\\sim \\mathrm{Gamma}(1, 10^{-4}) \\\\\n", "    s &\\sim \\mathrm{Gamma}(1, 10^{-4}) \\\\\n", "    \\mathbf{x} &\\sim \\mathrm{GMRF}(\\mathbf{0}, d) \\\\\n", "    \\mathbf{y} &\\sim \\mathrm{Gaussian}(\\mathbf{A} \\mathbf{x}, s^{-1} \\mathbf{I}_m)\n", "\\end{align*}\n", "$$\n", "\n", "which in CUQIpy matches line-by-line:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "d = Gamma(1, 1e-4)\n", "s = Gamma(1, 1e-4)\n", "x = GMRF(np.zeros(n), lambda d: d)\n", "y = Gaussian(A@x, cov=lambda s: 1/s)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3 Posterior distribution\n", "\n", "We now define the posterior distribution by first creating the joint distribution $p(\\mathbf{y}, \\mathbf{x}, d, s)$ and then conditioning on the observed data $\\mathbf{y}_\\mathrm{data}$ to obtain the posterior, which in density form reads\n", "\n", "$$\n", "\\begin{align*}\n", "p(\\mathbf{x}, d, s \\mid \\mathbf{y}_\\mathrm{data}) \\propto L(\\mathbf{x}, s \\mid \\mathbf{y}_\\mathrm{data}) p(\\mathbf{x} \\mid d) p(d) p(s).\n", "\\end{align*}\n", "$$\n", "\n", "This is done in CUQIpy as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Create joint distribution\n", "joint = JointDistribution(y, x, d, s)\n", "\n", "# Define posterior by conditioning on the data\n", "posterior = joint(y=y_data) # Automatically applies Bayes rule\n", "\n", "# View the posterior\n", "print(posterior)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Notice that after conditioning on the data, the distribution associated with $\\mathbf{y}$ became a likelihood function and the posterior is now a joint distribution of the variables $\\mathbf{x}$, $d$ and $s$. It matches the mathematical expression above.\n", "\n" ] }
, { "attachments": {}, "cell_type": "markdown", "id": "0f3aef5f", "metadata": {}, "source": [ "#### ★ Try yourself (optional): \n", "\n", "1. Try evaluating the un-normalized log probability density (logd) of the posterior distribution $p(\\mathbf{x}, d, s \\mid \\mathbf{y}_\\mathrm{data})$ at some points $\\mathbf{x}^*\\in \\mathbb{R}^n$, $d^*\\in \\mathbb{R}_+$ and $s^*\\in \\mathbb{R}_+$. Hint: The `logd` method of the joint distribution takes keyword arguments for the variables. For example, `logd(x=xstar, d=dstar, s=sstar)`.\n", "\n", "2. You can also try *conditioning* the posterior distribution on some of the variables, e.g. `posterior(x=xstar, d=dstar)`. This should give the conditional distribution of $s$ given $\\mathbf{x}^*$ and $d^*$, i.e. $p(s \\mid \\mathbf{y}_\\mathrm{data}, \\mathbf{x}^*, d^*)$.\n", "\n", "3. Experiment with sampling from a similar Bayesian inverse problem (BIP) that does not include the hyperparameters $d$ and $s$ (use the values $d=1$ and $s=10000$ to determine the precision of the prior and the likelihood). You can create the BIP as follows. Hint: use the `UQ` method of the `BayesianProblem` class to sample from the posterior distribution.\n", "\n", "```python\n", "x2 = GMRF(np.zeros(n), prec=1)\n", "y2 = Gaussian(A@x2, prec=10000)\n", "BP2 = BayesianProblem(x2, y2)\n", "BP2.set_data(y2=y_data)\n", "```" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "# Your code here\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 2.4 Gibbs Sampler\n", "\n", "The hierarchical Bayesian problem above has some important properties that we can exploit to make the sampling more efficient.\n", "\n", "First, note that a Gaussian distribution together with a Gamma distribution on its precision forms a conjugate pair (as we saw earlier). The conditional posterior of each precision parameter is therefore again a Gamma distribution, so we can efficiently sample $d$ and $s$ conditional on the other variables by exploiting conjugacy.\n", "\n", "Second, note that the prior distribution of $\\mathbf{x}$ is a Gaussian Markov random field (GMRF) and that the distribution for $\\mathbf{y}$ is also Gaussian, with a linear operator acting on $\\mathbf{x}$ as its mean. This means that we can efficiently sample from $\\mathbf{x}$ conditional on the other variables using the ``LinearRTO`` sampler.\n", "\n", "Taking these two facts into account, we can define a Gibbs sampler that uses the ``Conjugate`` sampler for $d$ and $s$ and the ``LinearRTO`` sampler for $\\mathbf{x}$.\n", "\n", "This is done in CUQIpy as follows:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "tags": [ "scroll-output" ] }, "outputs": [], "source": [ "# Define sampling strategy\n", "sampling_strategy = {\n", "    'x': LinearRTO(),\n", "    'd': Conjugate(),\n", "    's': Conjugate()\n", "}\n", "\n", "# Define Gibbs sampler\n", "sampler = HybridGibbs(posterior, sampling_strategy)\n", "\n", "# Run sampler\n", "sampler.warmup(200)\n", "sampler.sample(1000)\n", "\n", "# Get samples removing the burn-in\n", "samples = sampler.get_samples().burnthin(200)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 2.5 Analyze results\n", "\n", "After sampling we can inspect the results. The samples are stored in a dictionary with the variable names as keys, and the samples for each variable are stored as a CUQIpy `Samples` object, which provides many convenience methods for diagnostics and plotting of MCMC samples.\n", "\n" ] }
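, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance (our own aside, assuming the `compute_ess` and `plot_mean` methods of the `Samples` class behave as described in the CUQIpy documentation), we can check the effective sample size (ESS) of the hyperparameter chains and compare the posterior mean of $\\mathbf{x}$ with the exact solution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Effective sample size of the hyperparameter chains\n", "print('ESS of d:', samples['d'].compute_ess())\n", "print('ESS of s:', samples['s'].compute_ess())\n", "\n", "# Posterior mean of x compared with the exact solution\n", "samples['x'].plot_mean(label='posterior mean')\n", "probinfo.exactSolution.plot(label='exact')\n", "plt.legend();" ] }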
, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Plot credible intervals for x\n", "samples['x'].plot_ci(exact=probinfo.exactSolution)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Trace plot for $d$:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "samples['d'].plot_trace(figsize=(8,2))" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Trace plot for $s$:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "samples['s'].plot_trace(figsize=(8,2), exact=1/0.005**2)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the true noise standard deviation was $0.005$, which translates to a precision of $s=1/0.005^2=40000$. The trace plot for $s$ shows that the sampler has converged close to the true value." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# 3. Exploring other priors <a id=\"exploring-other-priors\"></a>\n", "\n", "Suppose the true solution is not modelled well by a GMRF prior. This would, for example, be the case for a piecewise constant \"square\" signal. We can quickly try that experiment by generating new data from the test problem with a `square` phantom.\n", "\n", "For illustration, we repeat all the code steps from above in a single cell.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "scroll-output" ] }, "outputs": [], "source": [ "# Deterministic forward model + data\n", "A, y_data, probinfo = Deconvolution1D(phantom='square').get_components()\n", "\n", "# Bayesian model\n", "d = Gamma(1, 1e-4)\n", "s = Gamma(1, 1e-4)\n", "x = GMRF(np.zeros(n), lambda d: d)\n", "y = Gaussian(A@x, cov=lambda s: 1/s)\n", "\n", "# Joint distribution\n", "joint = JointDistribution(y, x, d, s)\n", "\n", "# Posterior\n", "posterior = joint(y=y_data)\n", "\n", "# Sampling strategy\n", "sampling_strategy = {\n", "    'x': LinearRTO(),\n", "    'd': Conjugate(),\n", "    's': Conjugate()\n", "}\n", "\n", "# Gibbs sampler\n", "sampler = HybridGibbs(posterior, sampling_strategy)\n", "\n", "# Run sampler\n", "sampler.warmup(200)\n", "sampler.sample(1000)\n", "\n", "# Get samples removing the burn-in\n", "samples = sampler.get_samples().burnthin(200)\n", "\n", "# Plot credible intervals for x\n", "samples['x'].plot_ci(exact=probinfo.exactSolution)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Notice that while the sampling still ran, the posterior distribution does not match the characteristics of the square phantom. We can improve this result by switching to a prior that better matches this kind of signal.\n", "\n", "One choice is the [LMRF](https://cuqi-dtu.github.io/CUQIpy/api/_autosummary/cuqi.distribution/cuqi.distribution.LMRF.html) prior, which assumes a Laplace distribution for the differences between neighboring elements of $\\mathbf{x}$ instead of a Gaussian. That is, we assume\n",
"\n", "$$\n", "\\begin{align*}\n", "\\mathbf{x} \\sim \\text{LMRF}(\\mathbf{0}, d^{-1}),\n", "\\end{align*}\n", "$$\n", "\n", "which translates to the assumption that $x_i-x_{i-1} \\sim \\mathrm{Laplace}(0, d^{-1})$ for $i=1, \\ldots, n$ (excluding the boundaries).\n", "\n", "This prior is implemented in CUQIpy as the ``LMRF`` distribution. To update our model we simply need to replace the ``GMRF`` distribution with the ``LMRF`` distribution. Note that the Laplace distribution is defined via a scale parameter, so we invert the parameter $d$.\n", "\n", "This Laplace prior and the new posterior can be defined as follows:\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Define new distribution for x\n", "x = LMRF(np.zeros(n), lambda d: 1/d)\n", "\n", "# Define new joint distribution with piecewise constant prior\n", "joint_Ld = JointDistribution(y, x, d, s)\n", "\n", "# Define new posterior by conditioning on the data\n", "posterior_Ld = joint_Ld(y=y_data)\n", "\n", "print(posterior_Ld)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 3.1 Gibbs Sampler (with Laplace prior)\n", "\n", "Using the same approach as earlier, we can define a Gibbs sampler for this new hierarchical model. The only difference is that we now need a different sampler for $\\mathbf{x}$, because the ``LinearRTO`` sampler only works for Gaussian distributions.\n", "\n", "In this case we use the [UGLA](https://cuqi-dtu.github.io/CUQIpy/api/_autosummary/cuqi.sampler/cuqi.sampler.UGLA.html) sampler for $\\mathbf{x}$. For $d$ we use the ``ConjugateApprox`` sampler, which draws approximate samples from the conditional posterior of $d$ in an efficient manner. For more details see e.g. [Uribe et al. (2021)](https://arxiv.org/abs/2104.06919).\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "tags": [ "scroll-output" ] }, "outputs": [], "source": [ "# Define sampling strategy\n", "sampling_strategy = {\n", "    'x': UGLA(),\n", "    'd': ConjugateApprox(),\n", "    's': Conjugate()\n", "}\n", "\n", "# Define Gibbs sampler\n", "sampler_Ld = HybridGibbs(posterior_Ld, sampling_strategy)\n", "\n", "# Run sampler\n", "sampler_Ld.warmup(200)\n", "sampler_Ld.sample(1000)\n", "\n", "# Get samples removing the burn-in\n", "samples_Ld = sampler_Ld.get_samples().burnthin(200)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 3.2 Analyze results\n", "\n", "Again, we can inspect the results.\n", "Notice that the posterior distribution now matches the exact solution much better than before.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Plot credible intervals for the signal\n", "samples_Ld['x'].plot_ci(exact=probinfo.exactSolution)"
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "samples_Ld['d'].plot_trace(figsize=(8,2))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "samples_Ld['s'].plot_trace(figsize=(8,2), exact=1/0.01**2)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can also view some posterior samples." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "samples_Ld['x'].plot()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Connection to BayesianProblem class \n", "\n", "Finally, let us connect back the non-expert interface through the `BayesianProblem` class.\n", "\n", "Instead of explicitly defining the Gibbs sampler, we can also (for supported cases) simply provide the distributions for each of the parameters and the observed data to the `BayesianProblem` class. The class will then automatically construct a Gibbs sampler for the problem. This is done as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "scroll-output" ] }, "outputs": [], "source": [ "# Provide distributions for parameters and set data\n", "BP = BayesianProblem(y,x,d,s).set_data(y=y_data)\n", "\n", "# DO UQ and compare with exact solution and noise precision\n", "samples_BP = BP.UQ(exact={\"x\":probinfo.exactSolution, \"s\":1/0.01**2}, experimental=True)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0f3aef5f", "metadata": {}, "source": [ "#### ★ Try yourself (optional): \n", "\n", "Try building your own Bayesian problem and sampling from it using the `BayesianProblem` class. You can use the code from the previous sections as a starting point. If the automatic sampler selection fails, you can also try to defining the Gibbs sampler manually.\n", "\n", "You can also try the same Bayesian problem but for [2D deconvolution](https://cuqi-dtu.github.io/CUQIpy/api/_autosummary/cuqi.testproblem/cuqi.testproblem.Deconvolution2D.html#cuqi.testproblem.Deconvolution2D)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "# Your code here\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### ★ Exercise \n", "\n", "Numerical verification of the conjugacy of a Gaussian likelihood a Gaussian prior. Consider the following simple BIP:\n", "```python\n", "# Prior\n", "cov1 = np.array([[1, 0.5], [0.5, 1]])\n", "x1 = Gaussian(np.zeros(2), cov1)\n", "I = np.eye(2)\n", "cov_obs = 0.1*I\n", "\n", "# Forward model and likelihood\n", "A = LinearModel(forward=I)\n", "x2 = Gaussian(A@x1, cov_obs)\n", "\n", "# Set the BIP\n", "BP2 = BayesianProblem(x1, x2)\n", "data = np.array([0.1, 0.2])\n", "BP2.set_data(x2=data)\n", "\n", "# Solve the BIP\n", "samples_post = BP2.sample_posterior(10000, experimental=True)\n", "samples_post.plot_pair()\n", "```\n", "\n", "Create a `CUQIpy` Gaussian distribution `x3` that is equivalent to the posterior distribution of this BIP. Hint: recall conjugacy of Gaussian likelihood and Gaussian prior. Sample `x3` and use the `plot_pair` method to visualize the samples. Compare the results with the samples from the `BayesianProblem` class." 
] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.5" }, "vscode": { "interpreter": { "hash": "4ff4ac6af9578637e0e623c40bf41129eb04e2c9abec3a9480d43324f3a3fec8" } } }, "nbformat": 4, "nbformat_minor": 4 }