TweediePrior
- class cuqi.implicitprior.TweediePrior(prior, smoothing_strength=0.1, **kwargs)
Alias for MoreauYoshidaPrior following Tweedie’s formula framework. TweediePrior defines priors whose gradients are computed via Tweedie’s identity, which links MMSE (Minimum Mean Square Error) denoisers to the underlying smoothed prior; see:
Laumont et al., “Bayesian Imaging Using Plug & Play Priors: When Langevin Meets Tweedie”, https://arxiv.org/abs/2103.04715 or https://doi.org/10.1137/21M1406349
Tweedie’s Formula
In the context of denoising, Tweedie’s identity states that for a signal x corrupted by Gaussian noise with variance ε:
∇_x log p_ε(x) = (D_ε(x) - x) / ε
where D_ε(x) is the MMSE denoiser output and ε is the noise variance. This enables gradient-based sampling with algorithms such as ULA.
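As a concrete check of the identity, the sketch below (plain NumPy, independent of the cuqi API) uses the one setting where the MMSE denoiser is available in closed form: a zero-mean Gaussian prior x ~ N(0, σ²) smoothed by Gaussian noise of variance ε. Then p_ε = N(0, σ² + ε), the MMSE denoiser is D_ε(y) = σ²/(σ² + ε)·y, and both sides of Tweedie’s identity equal -y/(σ² + ε). All names in the snippet are illustrative.

```python
import numpy as np

sigma2 = 2.0   # prior variance of x ~ N(0, sigma2)
eps = 0.1      # noise variance / smoothing strength in Tweedie's formula

def mmse_denoiser(y):
    # Closed-form MMSE denoiser for a Gaussian prior under Gaussian noise:
    # E[x | y] = sigma2 / (sigma2 + eps) * y
    return sigma2 / (sigma2 + eps) * y

def grad_log_smoothed_prior(y):
    # Direct gradient of log p_eps(y), where p_eps = N(0, sigma2 + eps)
    return -y / (sigma2 + eps)

y = np.linspace(-3.0, 3.0, 7)

# Right-hand side of Tweedie's identity: (D_eps(y) - y) / eps
tweedie_gradient = (mmse_denoiser(y) - y) / eps

assert np.allclose(tweedie_gradient, grad_log_smoothed_prior(y))
```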
At the implementation level, TweediePrior shares identical functionality with MoreauYoshidaPrior and is therefore implemented as an alias of it: all methods, properties, and behavior are identical. The separate name provides clarity when working specifically with approaches based on Tweedie’s formula.
- Parameters:
prior (RestorationPrior) – Prior of the RestorationPrior type containing a denoiser/restorator.
smoothing_strength (float, default=0.1) – Corresponds to the noise variance ε in Tweedie’s formula.
See MoreauYoshidaPrior for the underlying implementation and complete documentation. A construction sketch follows below.
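As referenced above, here is a minimal construction sketch. It is not taken from the cuqi documentation: the RestorationPrior call signature, the restorator interface (a callable returning the restored point together with an info object), and the geometry choice are assumptions that should be checked against the RestorationPrior documentation for your cuqi version.

```python
import numpy as np
import cuqi

def my_denoiser(x, restoration_strength):
    # Hypothetical denoiser standing in for an MMSE denoiser D_eps.
    # Simple shrinkage towards zero; the (restored, info) return pair and the
    # restoration_strength argument are assumed, not documented here.
    restored = x / (1.0 + restoration_strength)
    return restored, None

geometry = cuqi.geometry.Continuous1D(128)

# Wrap the denoiser in a RestorationPrior, then build the TweediePrior on top of it.
restoration_prior = cuqi.implicitprior.RestorationPrior(my_denoiser, geometry=geometry)
prior = cuqi.implicitprior.TweediePrior(restoration_prior, smoothing_strength=0.1)

# smoothing_strength plays the role of the noise variance eps in Tweedie's formula.
g = prior.gradient(np.zeros(128))
```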
- __init__(prior, smoothing_strength=0.1, **kwargs)
Initialize the core properties of the distribution.
- Parameters:
name (str, default None) – Name of distribution.
geometry (Geometry, default _DefaultGeometry (or None)) – Geometry of distribution.
is_symmetric (bool, default None) – Indicator if distribution is symmetric.
Methods
__init__(prior[, smoothing_strength]) – Initialize the core properties of the distribution.
disable_FD() – Disable finite difference approximation for logd gradient.
enable_FD([epsilon]) – Enable finite difference approximation for logd gradient.
get_conditioning_variables() – Returns the conditioning variables of the distribution.
get_mutable_variables() – Return any public variable that is mutable (attribute or property) except those in the ignore_vars list.
get_parameter_names() – Returns the names of the parameters that the density can be evaluated at or conditioned on.
gradient(x) – Gradient of the regularizer, i.e. the gradient of the negative logpdf of the implicit prior (see the ULA sketch after this list).
logd(*args, **kwargs) – Evaluate the un-normalized log density function of the distribution.
logpdf(x) – The logpdf function.
pdf(x) – Evaluate the probability density function of the distribution.
sample([N]) – Sample from the distribution.
to_likelihood(data) – Convert the conditional distribution to a likelihood function given observed data.
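To illustrate how gradient (and the sign convention stated for it above) would typically be used, the following is a hand-rolled unadjusted Langevin (ULA) sketch; it is not cuqi’s own sampler API. Since the method description says gradient(x) returns the gradient of the negative logpdf (the regularizer), the Langevin drift below negates it; drop the minus sign if your cuqi version returns the gradient of the log-density directly.

```python
import numpy as np

def ula_samples(prior, x0, step_size=1e-3, n_samples=1000, rng=None):
    # Unadjusted Langevin algorithm driven by the smoothed prior's gradient.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_samples):
        drift = -prior.gradient(x)  # ascend log p_eps (gradient(x) is -grad log p_eps here)
        noise = np.sqrt(2.0 * step_size) * rng.standard_normal(x.shape)
        x = x + step_size * drift + noise
        samples.append(x.copy())
    return np.array(samples)
```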
Attributes
FD_enabled – Returns True if finite difference approximation of the logd gradient is enabled.
FD_epsilon – Spacing for the finite difference approximation of the logd gradient.
dim – Return the dimension of the distribution based on the geometry.
geometry – Return the geometry of the distribution.
is_cond – Returns True if instance (self) is a conditional distribution.
name – Name of the random variable associated with the density.
prior – Getter for the MoreauYoshida prior.
rv – Return a random variable object representing the distribution.
smoothing_strength – Smoothing strength of the distribution.