mcmc

Module path: cuqi.experimental.mcmc

Re-implementation of the sampler module in a more object-oriented way.

Main changes for users

  1. Previously, one would call the .sample or .sample_adapt method of a sampler instance from cuqi.sampler to sample from a target distribution and store the resulting samples as follows:

    from cuqi.sampler import MH
    from cuqi.distribution import DistributionGallery
    
    # Target distribution
    target = DistributionGallery("donut")
    
    # Set up sampler
    sampler = MH(target)
    
    # Sample from the target distribution (alternatively, call sample with an explicit scale parameter set on the sampler)
    samples = sampler.sample_adapt(Ns=100, Nb=100) # Burn-in (Nb) removed by default
    

    This has now changed to a more object-oriented API, which provides more flexibility and control over the sampling process.

    For example, one can now explicitly control when the sampler is tuned (warmup) and when it samples with fixed parameters.

    from cuqi.experimental.mcmc import MH
    from cuqi.distribution import DistributionGallery
    
    # Target distribution
    target = DistributionGallery("donut")
    
    # Set up sampler
    sampler = MH(target)
    
    # Sample from the target distribution
    sampler.warmup(Nb=100)  # Explicit warmup (tuning) of sampler
    sampler.sample(Ns=100)  # Sampling with fixed parameters
    samples = sampler.get_samples().burnthin(Nb=100) # Getting samples and removing burn-in from warmup
    

    Importantly, removing burn-in (e.g. the states from warmup) is now a separate step, performed after the sampling process is complete.
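    To make the bookkeeping explicit: the chain returned by get_samples contains both the warmup and the sampling states, which is why burn-in removal is an explicit call. Continuing the example above (a minimal sketch; the assumption that the stored chain is exactly the Nb warmup states followed by the Ns sampling states is ours, inferred from the comments above):

    all_samples = sampler.get_samples()     # assumed to hold Nb + Ns = 200 states
    samples = all_samples.burnthin(Nb=100)  # drop the first Nb states from warmup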

  2. cuqi.problem.BayesianProblem retains the same API for the sample_posterior and UQ methods.

    There is now a flag, experimental, that can be set to True to use the new MCMC samplers.

    By default, the flag is set to False and the old samplers are used.

    For this higher-level interface, burn-in is automatically removed from the samples, as was the case before.
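    As a minimal sketch of this interface (assuming, hypothetically, that the flag is passed as a keyword argument to sample_posterior; the actual mechanism may differ):

    import numpy as np
    from cuqi.distribution import Gaussian, GMRF
    from cuqi.problem import BayesianProblem
    from cuqi.testproblem import Deconvolution1D

    # Forward problem
    A, y_data, info = Deconvolution1D(dim=128).get_components()

    # Bayesian problem
    x = GMRF(np.zeros(A.domain_dim), 50)
    y = Gaussian(A @ x, 0.001**2)
    BP = BayesianProblem(y, x).set_data(y=y_data)

    # Sample the posterior with the new MCMC samplers
    # (hypothetical keyword usage; burn-in is removed automatically)
    samples = BP.sample_posterior(200, experimental=True)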

  3. There are now more options for Gibbs sampling. Previously, Gibbs sampling was only possible with the samplers cuqi.sampler.LinearRTO, cuqi.sampler.RegularizedLinearRTO, cuqi.sampler.Conjugate, and cuqi.sampler.ConjugateApprox.

    Now, it is possible to define a Gibbs sampling scheme using any sampler from the cuqi.experimental.mcmc module.

    Example using a NUTS-within-Gibbs scheme for a 1D deconvolution problem:

    import numpy as np
    from cuqi.distribution import Gamma, Gaussian, GMRF, JointDistribution
    from cuqi.experimental.mcmc import NUTS, HybridGibbs, Conjugate
    from cuqi.testproblem import Deconvolution1D
    
    # Forward problem
    A, y_data, info = Deconvolution1D(dim=128, phantom='sinc', noise_std=0.001).get_components()
    
    # Bayesian Inverse Problem
    s = Gamma(1, 1e-4)
    x = GMRF(np.zeros(A.domain_dim), 50)
    y = Gaussian(A @ x, lambda s: 1 / s)
    
    # Posterior
    target = JointDistribution(y, x, s)(y=y_data)
    
    # Gibbs sampling strategy. Note that we can set the initial_point and other parameters for each sampler
    sampling_strategy = {
        "x": NUTS(max_depth=10, initial_point=np.zeros(A.domain_dim)),
        "s": Conjugate()
    }
    
    # Here we do 10 internal steps with NUTS for each Gibbs step
    num_sampling_steps = {
        "x": 10,
        "s": 1
    }
    
    sampler = HybridGibbs(target, sampling_strategy, num_sampling_steps)
    
    sampler.warmup(50)
    sampler.sample(200)
    samples = sampler.get_samples().burnthin(Nb=50)
    
    samples["x"].plot_ci(exact=info.exactSolution)
    

Functions

find_valid_samplers

Finds all samplers in the cuqi.experimental.mcmc module that accept the provided target.
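For example (a minimal sketch; the return value is assumed here to be a list of sampler names, which may differ from the actual format):

    from cuqi.distribution import DistributionGallery
    from cuqi.experimental.mcmc import find_valid_samplers

    target = DistributionGallery("donut")

    # List all samplers compatible with this target
    print(find_valid_samplers(target))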

Classes

CWMH

Component-wise Metropolis-Hastings sampler.

Conjugate

Conjugate sampler.

ConjugateApprox

Approximate Conjugate sampler.

Direct

Direct sampler.

HybridGibbs

Hybrid Gibbs sampler for sampling a joint distribution.

LinearRTO

Linear RTO (Randomize-Then-Optimize) sampler.

MALA

Metropolis-adjusted Langevin algorithm (MALA) (Roberts and Tweedie, 1996).

MH

Metropolis-Hastings (MH) sampler.

MYULA

Moreau-Yosida Unadjusted Langevin algorithm (MYULA) (Durmus et al., 2018).

NUTS

No-U-Turn Sampler (Hoffman and Gelman, 2014).

PCN

Preconditioned Crank-Nicolson (pCN) sampler.

PnPULA

Plug-and-Play Unadjusted Langevin algorithm (PnP-ULA) (Laumont et al., 2022).

ProposalBasedSampler

Abstract base class for samplers that use a proposal distribution.

RegularizedLinearRTO

Regularized Linear RTO (Randomize-Then-Optimize) sampler.

Sampler

Abstract base class for all samplers.

UGLA

Unadjusted (Gaussian) Laplace Approximation sampler.

ULA

Unadjusted Langevin algorithm (ULA) (Roberts and Tweedie, 1996).