{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Hamiltonian Monte Carlo with CUQIpy-PyTorch\n", "\n", "\n", "\n", "In this notebook, we use [CUQIpy-PyTorch](https://github.com/CUQI-DTU/CUQIpy-PyTorch) to extend CUQIpy by adding the ability to use PyTorch as a backend for array operations. PyTorch enables two main things: 1) GPU acceleration and 2) automatic differentiation. GPU acceleration is self-explanatory, but automatic differentiation deserves some explanation.\n", "\n", "[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) enables computing the gradient of a function with respect to its input variables automatically using repeated application of the chain rule. This is useful for many machine learning algorithms, but also in the context of Bayesian inference. In particular, it means that we can automatically compute the gradient of a log-posterior, which could be arbitrarily complex! This provides a huge advantage because we can then sample from the posterior distribution using Hamiltonian Monte Carlo (HMC) and other gradient-based methods.\n", "\n", "Hamiltonian Monte Carlo and in particular the [No-U-Turn Sampler](https://arxiv.org/abs/1111.4246) (NUTS) variant is a general, but still very efficient sampler for sampling high-dimensional distributions that only requires gradient information. This is useful when it is not possible to exploit the structure of the posterior distribution using e.g. 
conjugacy relations, linearity of forward models or other tricks, which in large part is what the main CUQIpy package is all about.\n", "\n", "In this way, CUQIpy-PyTorch complements the main CUQIpy package by adding the option of an efficient sampling technique that works for arbitrary posterior distributions, using automatic differentiation to compute the gradient of the log-posterior.\n", "\n", "**Make sure you have installed the CUQIpy-PyTorch plugin (link in first paragraph) before starting this exercise.**\n", "\n", "## Learning objectives of this notebook:\n", "\n", "Going through this notebook, you will learn:\n", "\n", "- Why Hamiltonian Monte Carlo is useful for sampling distributions\n", "- The basics of PyTorch tensors\n", "- The basics of CUQIpy-PyTorch distributions\n", "- How to use Hamiltonian Monte Carlo to sample from distributions\n", "- How to use Hamiltonian Monte Carlo to sample from Bayesian inference problems\n", "\n", "## Table of contents: \n", "* [1. Why Hamiltonian Monte Carlo?](#why-hmc?)\n", "* [2. PyTorch basics and CUQIpy-PyTorch](#pytorch-basics)\n", "* [3. Hamiltonian Monte Carlo in CUQIpy-PyTorch](#hmc-cuqipy-pytorch)\n", "* [4. Bayesian inverse problems with CUQIpy-PyTorch](#bayesian-inverse-problems)\n", "* [5. Open-ended exploration](#open-ended-exploration)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
**Note:** Samplers in this notebook are drawn from the `cuqi.experimental.mcmc` module, which are expected to become the default soon. Check out the documentation for more details.\n",
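As a minimal sketch of the automatic differentiation discussed above (plain PyTorch here, not the CUQIpy-PyTorch API), we can let `torch` compute the gradient of a log-density for us via the chain rule; the function name `log_density` is illustrative, not part of any library:

```python
import torch

# Log-density of a standard 2D Gaussian (up to an additive constant)
def log_density(x):
    return -0.5 * torch.sum(x**2)

# Mark x as requiring gradients so torch tracks operations on it
x = torch.tensor([1.0, -2.0], requires_grad=True)

logp = log_density(x)
logp.backward()  # automatic differentiation: fills x.grad

# For this density the gradient is -x, i.e. [-1.0, 2.0]
print(x.grad)
```

This is exactly the mechanism that lets gradient-based samplers such as NUTS work with arbitrarily complex log-posteriors: no gradient needs to be derived or coded by hand.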
"\n",
"\n",
"
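To make concrete why HMC only needs gradient information, here is a hedged sketch (not the CUQIpy-PyTorch implementation) of the leapfrog integrator at the heart of HMC: it simulates Hamiltonian dynamics for a position `x` and momentum `p`, where the only problem-specific input is the gradient of the log-density:

```python
import torch

def grad_logp(x, log_density):
    # Fresh leaf tensor each call so gradients do not accumulate
    x = x.detach().requires_grad_(True)
    log_density(x).backward()
    return x.grad

def leapfrog(x, p, step_size, n_steps, log_density):
    # Leapfrog integrator: half momentum kick, alternating full
    # drifts/kicks, final half kick. Only gradient evaluations of
    # the log-density are needed (the "force" term in HMC).
    p = p + 0.5 * step_size * grad_logp(x, log_density)
    for _ in range(n_steps - 1):
        x = x + step_size * p
        p = p + step_size * grad_logp(x, log_density)
    x = x + step_size * p
    p = p + 0.5 * step_size * grad_logp(x, log_density)
    return x, p

# Standard Gaussian target; trajectory approximately conserves the
# Hamiltonian H = 0.5*|p|^2 - log_density(x)
log_density = lambda x: -0.5 * torch.sum(x**2)
x0 = torch.tensor([1.0, 0.0])
p0 = torch.tensor([0.0, 1.0])
x1, p1 = leapfrog(x0, p0, step_size=0.1, n_steps=10, log_density=log_density)
```

A full HMC/NUTS sampler adds a momentum resampling step and a Metropolis accept/reject correction on top of this integrator; in practice one uses the tuned NUTS sampler provided by CUQIpy-PyTorch rather than hand-rolling this.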