API: Turing.Inference

Turing.Inference.CSMC — Type
CSMC(...)
Equivalent to PG.

Turing.Inference.ESS — Type
ESS
Elliptical slice sampling algorithm.
Examples
julia> @model function gdemo(x)
           m ~ Normal()
           x ~ Normal(m, 0.5)
       end;

julia> sample(gdemo(1.0), ESS(), 1_000) |> mean
Mean

│ Row │ parameters │ mean     │
│     │ Symbol     │ Float64  │
├─────┼────────────┼──────────┤
│ 1   │ m          │ 0.824853 │

Turing.Inference.Emcee — Type
Emcee(n_walkers::Int, stretch_length=2.0)
Affine-invariant ensemble sampling algorithm.
Reference
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer. Publications of the Astronomical Society of the Pacific, 125(925), 306. https://doi.org/10.1086/670067
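
A minimal usage sketch (not part of the original docstring; assumes the single-observation gdemo model from the ESS example above):

# Run an ensemble of 10 walkers, 1_000 iterations each.
chain = sample(gdemo(1.0), Emcee(10, 2.0), 1_000)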

Turing.Inference.ExternalSampler — Type
ExternalSampler{S<:AbstractSampler,AD<:ADTypes.AbstractADType,Unconstrained}
Represents a sampler that is not an implementation of InferenceAlgorithm.
The Unconstrained type parameter indicates whether the sampler requires unconstrained space.
Fields
sampler::AbstractMCMC.AbstractSampler: the sampler to wrap
adtype::ADTypes.AbstractADType: the automatic differentiation (AD) backend to use

Turing.Inference.Gibbs — Type
Gibbs
A type representing a Gibbs sampler.
Fields
varnames::Any: varnames representing variables for each sampler
samplers::Any: samplers for each entry in varnames
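
An illustrative sketch (not from the original docstring, and the constructor form varies across Turing versions; the pairing of variables to samplers is assumed here):

# Hypothetical two-block scheme for the two-parameter gdemo model used
# elsewhere on this page: update m with HMC and s² with particle Gibbs.
alg = Gibbs(:m => HMC(0.2, 3), :s² => PG(20))
chain = sample(gdemo([1.5, 2]), alg, 1_000)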

Turing.Inference.GibbsContext — Type
GibbsContext(target_varnames, global_varinfo, context)
A context used in the implementation of the Turing.jl Gibbs sampler.
There will be one GibbsContext for each iteration of a component sampler.

Turing.Inference.HMC — Type
HMC(ϵ::Float64, n_leapfrog::Int; adtype::ADTypes.AbstractADType = AutoForwardDiff())
Hamiltonian Monte Carlo sampler with static trajectory.
Arguments
ϵ: The leapfrog step size to use.
n_leapfrog: The number of leapfrog steps to use.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Usage
HMC(0.05, 10)
Tips
If you are receiving gradient errors when using HMC, try reducing the leapfrog step size ϵ, e.g.
# Original step size
sample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)
# Reduced step size
sample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)

Turing.Inference.HMCDA — Type
HMCDA(
    n_adapts::Int, δ::Float64, λ::Float64; ϵ::Float64 = 0.0,
    adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Hamiltonian Monte Carlo sampler with Dual Averaging algorithm.
Usage
HMCDA(200, 0.65, 0.3)
Arguments
n_adapts: Number of samples to use for adaptation.
δ: Target acceptance rate. 65% is often recommended.
λ: Target leapfrog length.
ϵ: Initial step size; 0 means the step size is searched for automatically by Turing.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Reference
For more information, please view the following paper (arXiv link):
Hoffman, Matthew D., and Andrew Gelman. "The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research 15, no. 1 (2014): 1593-1623.
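
A usage sketch (not from the docstring; assumes the two-parameter gdemo model used elsewhere on this page):

# 200 adaptation steps, target acceptance rate 0.65, target leapfrog length 0.3.
chain = sample(gdemo([1.5, 2]), HMCDA(200, 0.65, 0.3), 1_000)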

Turing.Inference.IS — Type
IS()
Importance sampling algorithm.
Usage:
IS()
Example:
# Define a simple Normal model with unknown mean and variance.
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x[1] ~ Normal(m, sqrt(s²))
    x[2] ~ Normal(m, sqrt(s²))
    return s², m
end

sample(gdemo([1.5, 2]), IS(), 1000)

Turing.Inference.MH — Method
MH(space...)
Construct a Metropolis-Hastings algorithm.
The arguments space can be
- Blank (i.e. MH()), in which case MH defaults to using the prior for each parameter as the proposal distribution.
- An iterable of pairs or tuples mapping a Symbol to an AdvancedMH.Proposal, Distribution, or Function that returns a conditional proposal distribution.
- A covariance matrix to use for mean-zero multivariate normal proposals.
Examples
The default MH will draw proposal samples from the prior distribution using AdvancedMH.StaticProposal.
@model function gdemo(x, y)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x ~ Normal(m, sqrt(s²))
    y ~ Normal(m, sqrt(s²))
end

# Sample with the default proposals (the priors).
chain = sample(
    gdemo(1.5, 2.0),
    MH(),
    1_000
)
mean(chain)

Turing.Inference.MHLogDensityFunction — Type
MHLogDensityFunction
A log density function for the MH sampler.
This variant uses the set_namedtuple! function to update the VarInfo.

Turing.Inference.NUTS — Type
NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0, adtype::ADTypes.AbstractADType=AutoForwardDiff())
No-U-Turn Sampler (NUTS) sampler.
Usage:
NUTS() # Use default NUTS configuration.
NUTS(1000, 0.65) # Use 1000 adaptation steps, and target acceptance ratio 0.65.
Arguments:
n_adapts::Int: The number of samples to use with adaptation.
δ::Float64: Target acceptance rate for dual averaging.
max_depth::Int: Maximum doubling tree depth.
Δ_max::Float64: Maximum divergence during doubling tree.
init_ϵ::Float64: Initial step size; 0 means it is searched automatically using a heuristic procedure.
adtype::ADTypes.AbstractADType: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
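
A sampling sketch (not from the docstring; assumes the gdemo model used elsewhere on this page):

# 1_000 adaptation steps and a target acceptance rate of 0.65.
chain = sample(gdemo([1.5, 2]), NUTS(1_000, 0.65), 2_000)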

Turing.Inference.PG — Type
PG(n, space...)
PG(n, [resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
PG(n, [resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])
Create a Particle Gibbs sampler of type PG with n particles for the variables in space.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.

Turing.Inference.PG — Type
struct PG{space, R} <: Turing.Inference.ParticleInference
Particle Gibbs sampler.
Fields
nparticles::Int64: Number of particles.
resampler::Any: Resampling algorithm.
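
A usage sketch (not from the docstring; assumes the gdemo model used elsewhere on this page):

# Particle Gibbs with 20 particles and the default resampling scheme.
chain = sample(gdemo([1.5, 2]), PG(20), 1_000)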

Turing.Inference.PolynomialStepsize — Method
PolynomialStepsize(a[, b=0, γ=0.55])
Create a polynomially decaying stepsize function.
At iteration t, the step size is
\[a (b + t)^{-γ}.\]
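
The returned object is callable on the iteration number; a quick sketch (values rounded, not from the docstring):

f = PolynomialStepsize(0.01)  # a = 0.01, b = 0, γ = 0.55
f(1)    # 0.01 * (0 + 1)^-0.55 = 0.01
f(100)  # 0.01 * (0 + 100)^-0.55 ≈ 0.00079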

Turing.Inference.Prior — Type
Prior()
Algorithm for sampling from the prior.
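
For example (a sketch, assuming the gdemo model used elsewhere on this page):

# Draw 1_000 samples from the prior distribution of the model.
chain = sample(gdemo([1.5, 2]), Prior(), 1_000)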

Turing.Inference.SGHMC — Type
SGHMC{AD,space}
Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
Fields
learning_rate::Real
momentum_decay::Real
adtype::Any
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).

Turing.Inference.SGHMC — Method
SGHMC(
space::Symbol...;
learning_rate::Real,
momentum_decay::Real,
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Create a Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).
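
A usage sketch (not from the docstring; the hyperparameter values are illustrative only):

# Both keyword arguments are required; SGHMC typically needs a small learning rate.
alg = SGHMC(; learning_rate = 1e-4, momentum_decay = 0.1)
chain = sample(gdemo([1.5, 2]), alg, 1_000)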

Turing.Inference.SGLD — Type
SGLD
Stochastic gradient Langevin dynamics (SGLD) sampler.
Fields
stepsize::Any: Step size function.
adtype::Any
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).

Turing.Inference.SGLD — Method
SGLD(
space::Symbol...;
stepsize = PolynomialStepsize(0.01),
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Stochastic gradient Langevin dynamics (SGLD) sampler.
By default, a polynomially decaying stepsize is used.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
See also: PolynomialStepsize
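
A usage sketch (not from the docstring; the stepsize value is illustrative only):

# SGLD with a polynomially decaying stepsize a(b + t)^-γ, here with a = 0.01.
alg = SGLD(; stepsize = PolynomialStepsize(0.01))
chain = sample(gdemo([1.5, 2]), alg, 1_000)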

Turing.Inference.SMC — Type
SMC(space...)
SMC([resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
SMC([resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])
Create a sequential Monte Carlo sampler of type SMC for the variables in space.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.

Turing.Inference.SMC — Type
struct SMC{space, R} <: Turing.Inference.ParticleInference
Sequential Monte Carlo sampler.
Fields
resampler::Any
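
A usage sketch (not from the docstring; assumes the gdemo model used elsewhere on this page):

# SMC with the default resampler (systematic resampling, ESS threshold 0.5).
chain = sample(gdemo([1.5, 2]), SMC(), 1_000)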

StatsAPI.predict — Method
predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)
Execute model conditioned on each sample in chain, and return the resulting Chains.
If include_all is false, the returned Chains will contain only those variables that are sampled by the model but not present in chain.
Details
Internally calls Turing.Inference.transitions_from_chain to obtain the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.
Example
julia> using Turing; Turing.setprogress!(false);
[ Info: [Turing]: progress logging is disabled globally
julia> @model function linear_reg(x, y, σ = 0.1)
⋮
y[1] 20.0342 20.1188 20.2135 20.2588 20.4188
y[2] 20.1870 20.3178 20.3839 20.4466 20.5895

julia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));

julia> sum(abs2, ys_test - ys_pred) ≤ 0.1
true

Turing.Inference.dist_val_tuple — Method
dist_val_tuple(spl::Sampler{<:MH}, vi::VarInfo)
Return two NamedTuples.
The first NamedTuple has symbols as keys and distributions as values. The second NamedTuple has model symbols as keys and their stored values as values.

Turing.Inference.externalsampler — Method
externalsampler(sampler::AbstractSampler; adtype=AutoForwardDiff(), unconstrained=true)
Wrap a sampler so it can be used as an inference algorithm.
Arguments
sampler::AbstractSampler: The sampler to wrap.
Keyword Arguments
adtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff(): The automatic differentiation (AD) backend to use.
unconstrained::Bool=true: Whether the sampler requires unconstrained space.
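
For example, wrapping a random-walk Metropolis-Hastings sampler from AdvancedMH (a sketch, not from the docstring; assumes a model with two continuous parameters, such as gdemo above):

using AdvancedMH, Distributions, LinearAlgebra

# Random-walk proposal over the two (unconstrained) parameters.
rwmh = AdvancedMH.RWMH(MvNormal(zeros(2), I))
chain = sample(gdemo([1.5, 2]), externalsampler(rwmh), 1_000)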

Turing.Inference.getparams — Method
getparams(model, t)
Return a named tuple of parameters.

Turing.Inference.gibbs_requires_recompute_logprob — Method
gibbs_requires_recompute_logprob(model_dst, sampler_dst, sampler_src, state_dst, state_src)
Check if the log-probability of the destination model needs to be recomputed.
Defaults to true.

Turing.Inference.group_varnames_by_symbol — Method
group_varnames_by_symbol(vns)
Group the varnames by their symbol.
Arguments
vns: Iterable of VarName.
Returns
OrderedDict{Symbol, Vector{VarName}}: A dictionary mapping each symbol to a vector of varnames.
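
An illustrative sketch (hypothetical input, not from the docstring):

using DynamicPPL: @varname

vns = [@varname(m), @varname(x[1]), @varname(x[2])]
group_varnames_by_symbol(vns)
# OrderedDict(:m => [m], :x => [x[1], x[2]])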

Turing.Inference.make_conditional — Method
make_conditional(model, target_variables, varinfo)
Return a new, conditioned model for a component of a Gibbs sampler.
Arguments
model::DynamicPPL.Model: The model to condition.
target_variables::AbstractVector{<:VarName}: The target variables of the component sampler. These will not be conditioned.
varinfo::DynamicPPL.AbstractVarInfo: Values for all variables in the model. All the values in varinfo but not in target_variables will be conditioned to the values they have in varinfo.
Returns
- A new model with the variables not in target_variables conditioned.
- The GibbsContext object that will be used to condition the variables. This is necessary because evaluation can mutate its global_varinfo field, which we need to access later.

Turing.Inference.mh_accept — Method
mh_accept(logp_current::Real, logp_proposal::Real, log_proposal_ratio::Real)
Decide if a proposal $x'$ with log probability $\log p(x') = logp_proposal$ and log proposal ratio $\log k(x', x) - \log k(x, x') = log_proposal_ratio$ in a Metropolis-Hastings algorithm with Markov kernel $k(x_t, x_{t+1})$ and current state $x$ with log probability $\log p(x) = logp_current$ is accepted by evaluating the Metropolis-Hastings acceptance criterion
\[\log U \leq \log p(x') - \log p(x) + \log k(x', x) - \log k(x, x')\]
for a uniform random number $U \in [0, 1)$.
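
Equivalently, the check can be restated as a one-liner (a sketch of the criterion, not the internal implementation):

# Accept iff log(U) ≤ logp_proposal - logp_current + log_proposal_ratio, U ~ Uniform[0, 1).
accept(logp_current, logp_proposal, log_proposal_ratio) =
    log(rand()) ≤ logp_proposal - logp_current + log_proposal_ratio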

Turing.Inference.recompute_logprob!! — Method
recompute_logprob!!(rng, model, sampler, state)
Recompute the log-probability of the model based on the given state and return the resulting state.

Turing.Inference.requires_unconstrained_space — Method
requires_unconstrained_space(sampler::ExternalSampler)
Return true if the sampler requires unconstrained space, and false otherwise.

Turing.Inference.set_namedtuple! — Method
set_namedtuple!(vi::VarInfo, nt::NamedTuple)
Places the values of a NamedTuple into the relevant places of a VarInfo.

Turing.Inference.setparams_varinfo!! — Method
setparams_varinfo!!(model, sampler::Sampler, state, params::AbstractVarInfo)
A lot like AbstractMCMC.setparams!!, but instead of taking a vector of parameters, takes an AbstractVarInfo object. Also takes the sampler as an argument. By default, falls back to AbstractMCMC.setparams!!(model, state, params[:]).
model is typically a DynamicPPL.Model, but can also be e.g. an AbstractMCMC.LogDensityModel.

Turing.Inference.transitions_from_chain — Method
transitions_from_chain(
    [rng::AbstractRNG,]
    model::Model,
    chain::MCMCChains.Chains;
⋮
julia> [first(t.θ.x) for t in transitions] # extract samples for `x`
2-element Array{Array{Float64,1},1}:
[-2.0844148956440796]
[-1.704630494695469]
This document was generated with Documenter.jl version 1.7.0 on Tuesday 5 November 2024. Using Julia version 1.11.1.