TweediePrior#

class cuqi.implicitprior.TweediePrior(prior, smoothing_strength=0.1, **kwargs)#

Alias for MoreauYoshidaPrior, following Tweedie’s formula framework. TweediePrior defines priors whose gradients are computed via Tweedie’s identity, which links MMSE (Minimum Mean Square Error) denoisers to the underlying smoothed prior; see the section below.

Tweedie’s Formula#

In the context of denoising, Tweedie’s identity states that for a signal x corrupted by Gaussian noise with variance ε:

∇_x log p_ε(x) = (D_ε(x) - x) / ε

where D_ε(x) is the output of the MMSE denoiser and ε is the noise variance. This enables gradient-based sampling with algorithms such as ULA (the Unadjusted Langevin Algorithm).
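
As a quick illustration (a minimal NumPy sketch, not part of CUQIpy), the identity can be checked in closed form for a standard Gaussian prior x ~ N(0, 1), where both the MMSE denoiser and the score of the smoothed density are known exactly:

import numpy as np

eps = 0.1                   # noise variance (the smoothing_strength below)
y = np.linspace(-3, 3, 7)   # noisy observations y = x + sqrt(eps) * n

# Closed-form MMSE denoiser for a N(0, 1) prior: E[x | y] = y / (1 + eps)
D = y / (1 + eps)

# Exact score of the smoothed density p_eps = N(0, 1 + eps)
score_exact = -y / (1 + eps)

# Tweedie's identity: score = (D_eps(y) - y) / eps
score_tweedie = (D - y) / eps

assert np.allclose(score_exact, score_tweedie)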

At the implementation level, TweediePrior is an alias of MoreauYoshidaPrior: all methods, properties, and behavior are identical. The separate name provides clarity when working specifically with Tweedie’s formula-based approaches.
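
To illustrate the gradient-based sampling mentioned above, the following self-contained NumPy sketch (not CUQIpy code) runs ULA driven by a Tweedie-based score, reusing the closed-form Gaussian denoiser from the previous example; the step size and iteration count are arbitrary choices for the sketch:

import numpy as np

rng = np.random.default_rng(0)
eps = 0.1       # smoothing strength / noise variance in Tweedie's formula
step = 0.05     # ULA step size (arbitrary for this sketch)
n_steps = 20000

def tweedie_score(x):
    # Score of the smoothed prior via Tweedie's identity, using the closed-form
    # MMSE denoiser D_eps(x) = x / (1 + eps) for a N(0, 1) prior.
    D = x / (1.0 + eps)
    return (D - x) / eps

x = 0.0
samples = np.empty(n_steps)
for k in range(n_steps):
    # Unadjusted Langevin step: x <- x + (step / 2) * score(x) + sqrt(step) * noise
    x = x + 0.5 * step * tweedie_score(x) + np.sqrt(step) * rng.standard_normal()
    samples[k] = x

# The chain targets the smoothed prior N(0, 1 + eps); the empirical variance of the
# post-burn-in samples should be close to 1 + eps = 1.1 (up to discretization bias).
print(samples[5000:].var())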

param prior:

Prior of the RestorationPrior type containing a denoiser/restorator.

type prior:

RestorationPrior

param smoothing_strength:

Corresponds to the noise variance ε in Tweedie’s formula.

type smoothing_strength:

float, default=0.1

See MoreauYoshidaPrior for the underlying implementation and complete documentation.
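
A usage sketch (assumptions flagged, not taken from the CUQIpy documentation): the snippet below wraps a simple closed-form denoiser in a RestorationPrior and passes it to TweediePrior. The TweediePrior(prior, smoothing_strength=...) call follows the signature documented above; the RestorationPrior constructor arguments, the restorator’s call signature and return value, and the geometry are assumptions that should be checked against the RestorationPrior documentation:

import numpy as np
import cuqi

n = 16  # assumed dimension for this sketch

def denoiser(x, restoration_strength):
    # Hypothetical MMSE denoiser for a N(0, 1) prior: D_eps(x) = x / (1 + eps).
    # The (solution, info) return form is an assumption.
    return x / (1.0 + restoration_strength), {}

# Assumed RestorationPrior construction; check its documentation for the exact interface.
restoration_prior = cuqi.implicitprior.RestorationPrior(
    denoiser,
    geometry=cuqi.geometry.Continuous1D(n),
)

# Documented signature: TweediePrior(prior, smoothing_strength=0.1, **kwargs)
x = cuqi.implicitprior.TweediePrior(restoration_prior, smoothing_strength=0.1)

# gradient(v) evaluates (D_eps(v) - v) / eps through the wrapped denoiser
g = x.gradient(np.ones(n))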

__init__(prior, smoothing_strength=0.1, **kwargs)#

Initialize the core properties of the distribution.

Parameters:
  • name (str, default None) – Name of distribution.

  • geometry (Geometry, default _DefaultGeometry (or None)) – Geometry of distribution.

  • is_symmetric (bool, default None) – Indicator if distribution is symmetric.

Methods

__init__(prior[, smoothing_strength])

Initialize the core properties of the distribution.

disable_FD()

Disable finite difference approximation for logd gradient.

enable_FD([epsilon])

Enable finite difference approximation for logd gradient.

get_conditioning_variables()

Returns the conditioning variables of the distribution.

get_mutable_variables()

Return any public variable that is mutable (attribute or property) except those in the ignore_vars list.

get_parameter_names()

Returns the names of the parameters that the density can be evaluated at or conditioned on.

gradient(x)

Gradient of the regularizer, i.e., the gradient of the negative logpdf of the implicit prior.

logd(*args, **kwargs)

Evaluate the un-normalized log density function of the distribution.

logpdf(x)

The logpdf function.

pdf(x)

Evaluate the probability density function of the distribution.

sample([N])

Sample from the distribution.

to_likelihood(data)

Convert conditional distribution to a likelihood function given observed data.

Attributes

FD_enabled

Returns True if finite difference approximation of the logd gradient is enabled.

FD_epsilon

Spacing for the finite difference approximation of the logd gradient.

dim

Return the dimension of the distribution based on the geometry.

geometry

Return the geometry of the distribution.

is_cond

Returns True if instance (self) is a conditional distribution.

name

Name of the random variable associated with the density.

prior

Getter for the Moreau-Yoshida prior.

rv

Return a random variable object representing the distribution.

smoothing_strength

The smoothing_strength of the distribution.