Inference · Turing

API: Turing.Inference

Turing.Inference.ESSType
ESS

Elliptical slice sampling algorithm.

Examples

julia> @model function gdemo(x)
            m ~ Normal()
            x ~ Normal(m, 0.5)
        end
gdemo (generic function with 2 methods)

julia> sample(gdemo(1.0), ESS(), 1_000) |> mean
Mean

 │ Row │ parameters │ mean     │
 │     │ Symbol     │ Float64  │
 ├─────┼────────────┼──────────┤
│ 1   │ m          │ 0.824853 │
source
Turing.Inference.EmceeType
Emcee(n_walkers::Int, stretch_length=2.0)

Affine-invariant ensemble sampling algorithm.

Reference

Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer. Publications of the Astronomical Society of the Pacific, 125 (925), 306. https://doi.org/10.1086/670067

source
Turing.Inference.ExternalSamplerType
ExternalSampler{S<:AbstractSampler,AD<:ADTypes.AbstractADType,Unconstrained}

Represents a sampler that is not an implementation of InferenceAlgorithm.

The Unconstrained type parameter indicates whether the sampler requires unconstrained space.

Fields

  • sampler::AbstractMCMC.AbstractSampler: the sampler to wrap

  • adtype::ADTypes.AbstractADType: the automatic differentiation (AD) backend to use

source
Turing.Inference.GibbsType
Gibbs

A type representing a Gibbs sampler.

Fields

  • varnames::Any: varnames representing variables for each sampler

  • samplers::Any: samplers for each entry in varnames

source
Turing.Inference.GibbsContextType
GibbsContext(target_varnames, global_varinfo, context)

A context used in the implementation of the Turing.jl Gibbs sampler.

There will be one GibbsContext for each iteration of a component sampler.

source
Turing.Inference.HMCType
HMC(ϵ::Float64, n_leapfrog::Int; adtype::ADTypes.AbstractADType = AutoForwardDiff())

Hamiltonian Monte Carlo sampler with static trajectory.

Arguments

  • ϵ: The leapfrog step size to use.
  • n_leapfrog: The number of leapfrog steps to use.
  • adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.

Usage

HMC(0.05, 10)

Tips

If you are receiving gradient errors when using HMC, try reducing the leapfrog step size ϵ, e.g.

# Original step size
sample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)

# Reduced step size
sample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)
source
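As an illustrative sketch (not part of the original docstring), the adtype keyword described above can be passed when constructing the sampler. This assumes the gdemo model from the examples above and that ReverseDiff.jl is available so AutoReverseDiff can be used:

# Use a ReverseDiff AD backend instead of the ForwardDiff default
sample(gdemo([1.5, 2]), HMC(0.05, 10; adtype=AutoReverseDiff()), 1000)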
Turing.Inference.HMCDAType
HMCDA(
     n_adapts::Int, δ::Float64, λ::Float64; ϵ::Float64 = 0.0;
     adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)

Hamiltonian Monte Carlo sampler with Dual Averaging algorithm.

Usage

HMCDA(200, 0.65, 0.3)

Arguments

  • n_adapts: Number of samples to use for adaptation.
  • δ: Target acceptance rate. 65% is often recommended.
  • λ: Target leapfrog length.
  • ϵ: Initial step size; 0 means Turing will search for it automatically.
  • adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.

Reference

For more information, please see the following paper on arXiv:

Hoffman, Matthew D., and Andrew Gelman. "The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research 15, no. 1 (2014): 1593-1623.

source
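A minimal usage sketch, assuming the gdemo model defined in the earlier examples; the settings mirror the Usage line above (200 adaptation steps, target acceptance 0.65, leapfrog length 0.3):

sample(gdemo([1.5, 2]), HMCDA(200, 0.65, 0.3), 1000)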
Turing.Inference.ISType
IS()

Importance sampling algorithm.

Usage:

IS()

Example:

# Define a simple Normal model with unknown mean and variance.
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x[1] ~ Normal(m, sqrt(s²))
    x[2] ~ Normal(m, sqrt(s²))
    return s², m
end

sample(gdemo([1.5, 2]), IS(), 1000)
source
Turing.Inference.MHMethod
MH(space...)

Construct a Metropolis-Hastings algorithm.

The arguments space can be

  • Blank (i.e. MH()), in which case MH defaults to using the prior for each parameter as the proposal distribution.
  • An iterable of pairs or tuples mapping a Symbol to an AdvancedMH.Proposal, Distribution, or Function that returns a conditional proposal distribution.
  • A covariance matrix to use for mean-zero multivariate normal proposals.

Examples

The default MH will draw proposal samples from the prior distribution using AdvancedMH.StaticProposal.

@model function gdemo(x, y)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x ~ Normal(m, sqrt(s²))
    y ~ Normal(m, sqrt(s²))
end

chain = sample(gdemo(1.5, 2.0), MH(), 1_000)
mean(chain)
source
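A hedged sketch of the two non-default forms described above (per-variable proposal distributions and a covariance matrix); the particular proposal values are illustrative only and assume the gdemo model from the example:

# Symbol => Distribution pairs used as conditional proposals
chain = sample(gdemo(1.5, 2.0), MH(:m => Normal(0, 1), :s² => InverseGamma(2, 3)), 1_000)

# A covariance matrix gives mean-zero multivariate normal proposals
chain = sample(gdemo(1.5, 2.0), MH([0.25 0.05; 0.05 0.25]), 1_000)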
Turing.Inference.NUTSType
NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0, adtype::ADTypes.AbstractADType=AutoForwardDiff())

No-U-Turn Sampler (NUTS) sampler.

Usage:

NUTS()            # Use default NUTS configuration.
NUTS(1000, 0.65)  # Use 1000 adaptation steps and a target acceptance ratio of 0.65.

Arguments:

  • n_adapts::Int : The number of samples to use with adaptation.
  • δ::Float64 : Target acceptance rate for dual averaging.
  • max_depth::Int : Maximum doubling tree depth.
  • Δ_max::Float64 : Maximum divergence during doubling tree.
  • init_ϵ::Float64 : Initial step size; 0 means automatically searching using a heuristic procedure.
  • adtype::ADTypes.AbstractADType : The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
source
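A minimal sketch combining the arguments listed above (adaptation steps, target acceptance rate, and an explicit AD backend); the exact settings are illustrative and assume the gdemo model from the earlier examples:

sample(gdemo([1.5, 2]), NUTS(1000, 0.65; adtype=AutoForwardDiff()), 2_000)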
Turing.Inference.PGType
PG(n, space...)
 PG(n, [resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
PG(n, [resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])

Create a Particle Gibbs sampler of type PG with n particles for the variables in space.

If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.

source
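An illustrative sketch, assuming the gdemo model from the earlier examples: a Particle Gibbs sampler with 20 particles.

sample(gdemo([1.5, 2]), PG(20), 1_000)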
Turing.Inference.PGType
struct PG{space, R} <: Turing.Inference.ParticleInference

Particle Gibbs sampler.

Fields

  • nparticles::Int64: Number of particles.

  • resampler::Any: Resampling algorithm.

source
Turing.Inference.SGHMCType
SGHMC{AD,space}

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.

Fields

  • learning_rate::Real

  • momentum_decay::Real

  • adtype::Any

Reference

Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).

source
Turing.Inference.SGHMCMethod
SGHMC(
     space::Symbol...;
     learning_rate::Real,
     momentum_decay::Real,
     adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)

Create a Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.

If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.

Reference

Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).

source
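A minimal sketch using the keyword arguments listed above; the learning-rate and momentum-decay values are illustrative only, and the gdemo model is assumed from the earlier examples:

sample(gdemo([1.5, 2]), SGHMC(; learning_rate=0.01, momentum_decay=0.1), 1_000)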
Turing.Inference.SGLDType
SGLD

Stochastic gradient Langevin dynamics (SGLD) sampler.

Fields

  • stepsize::Any: Step size function.

  • adtype::Any

Reference

Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).

source
Turing.Inference.SGLDMethod
SGLD(
     space::Symbol...;
     stepsize = PolynomialStepsize(0.01),
     adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)

Stochastic gradient Langevin dynamics (SGLD) sampler.

By default, a polynomially decaying stepsize is used.

If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.

Reference

Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).

See also: PolynomialStepsize

source
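A minimal sketch with an explicit polynomially decaying stepsize, as described above; the stepsize constant is illustrative and the gdemo model is assumed from the earlier examples:

sample(gdemo([1.5, 2]), SGLD(; stepsize=PolynomialStepsize(0.01)), 1_000)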
Turing.Inference.SMCType
SMC(space...)
 SMC([resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])
SMC([resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])

Create a sequential Monte Carlo sampler of type SMC for the variables in space.

If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.

source
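An illustrative sketch using the default resampler described above, with the gdemo model assumed from the earlier examples:

sample(gdemo([1.5, 2]), SMC(), 1_000)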
Turing.Inference.SMCType
struct SMC{space, R} <: Turing.Inference.ParticleInference

Sequential Monte Carlo sampler.

Fields

  • resampler::Any
source
StatsAPI.predictMethod
predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)

Execute model conditioned on each sample in chain, and return the resulting Chains.

If include_all is false, the returned Chains will contain only the newly predicted variables, i.e. those not already present in chain.

Details

Internally calls Turing.Inference.transitions_from_chain to obtain the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.

Example

julia> using Turing; Turing.setprogress!(false);
 [ Info: [Turing]: progress logging is disabled globally
 
 julia> @model function linear_reg(x, y, σ = 0.1)
@@ -595,10 +137,11 @@
         y[1]  20.0342  20.1188  20.2135  20.2588  20.4188
         y[2]  20.1870  20.3178  20.3839  20.4466  20.5895
 
 julia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));
 
 julia> sum(abs2, ys_test - ys_pred) ≤ 0.1
true
source
Turing.Inference.dist_val_tupleMethod
dist_val_tuple(spl::Sampler{<:MH}, vi::VarInfo)

Return two NamedTuples.

The first NamedTuple has symbols as keys and distributions as values. The second NamedTuple has model symbols as keys and their stored values as values.

source
Turing.Inference.externalsamplerMethod
externalsampler(sampler::AbstractSampler; adtype=AutoForwardDiff(), unconstrained=true)

Wrap a sampler so it can be used as an inference algorithm.

Arguments

  • sampler::AbstractSampler: The sampler to wrap.

Keyword Arguments

  • adtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff(): The automatic differentiation (AD) backend to use.
  • unconstrained::Bool=true: Whether the sampler requires unconstrained space.
source
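A hedged sketch of wrapping a third-party sampler, assuming AdvancedMH.jl is installed; a random-walk Metropolis sampler with a multivariate-normal proposal is used purely for illustration, and the gdemo model (two parameters) is assumed from the earlier examples:

using AdvancedMH, LinearAlgebra

rwmh = AdvancedMH.RWMH(MvNormal(zeros(2), I))
sample(gdemo([1.5, 2]), externalsampler(rwmh), 1_000)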
Turing.Inference.group_varnames_by_symbolMethod
group_varnames_by_symbol(vns)

Group the varnames by their symbol.

Arguments

  • vns: Iterable of VarName.

Returns

  • OrderedDict{Symbol, Vector{VarName}}: A dictionary mapping symbol to a vector of varnames.
source
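An illustrative sketch of the grouping behaviour, with varnames constructed using the exported @varname macro (output shown as a comment, schematically):

vns = [@varname(x[1]), @varname(x[2]), @varname(y)]
Turing.Inference.group_varnames_by_symbol(vns)
# => OrderedDict(:x => [x[1], x[2]], :y => [y])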
Turing.Inference.make_conditionalMethod
make_conditional(model, target_variables, varinfo)

Return a new, conditioned model for a component of a Gibbs sampler.

Arguments

  • model::DynamicPPL.Model: The model to condition.
  • target_variables::AbstractVector{<:VarName}: The target variables of the component

sampler. These will not be conditioned.

  • varinfo::DynamicPPL.AbstractVarInfo: Values for all variables in the model. All the

values in varinfo but not in target_variables will be conditioned to the values they have in varinfo.

Returns

  • A new model with the variables not in target_variables conditioned.
  • The GibbsContext object that will be used to condition the variables. This is necessary

because evaluation can mutate its global_varinfo field, which we need to access later.

source
Turing.Inference.mh_acceptMethod
mh_accept(logp_current::Real, logp_proposal::Real, log_proposal_ratio::Real)

Decide if a proposal $x'$ with log probability $\log p(x') = logp_proposal$ and log proposal ratio $\log k(x', x) - \log k(x, x') = log_proposal_ratio$ in a Metropolis-Hastings algorithm with Markov kernel $k(x_t, x_{t+1})$ and current state $x$ with log probability $\log p(x) = logp_current$ is accepted by evaluating the Metropolis-Hastings acceptance criterion

\[\log U \leq \log p(x') - \log p(x) + \log k(x', x) - \log k(x, x')\]

for a uniform random number $U \in [0, 1)$.

source
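The criterion above can be written as a one-line check; this is a hedged sketch of the rule, not the internal implementation:

mh_accept_sketch(logp_current, logp_proposal, log_proposal_ratio) =
    log(rand()) <= logp_proposal - logp_current + log_proposal_ratio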
Turing.Inference.setparams_varinfo!!Method
setparams_varinfo!!(model, sampler::Sampler, state, params::AbstractVarInfo)

A lot like AbstractMCMC.setparams!!, but instead of taking a vector of parameters, takes an AbstractVarInfo object. Also takes the sampler as an argument. By default, falls back to AbstractMCMC.setparams!!(model, state, params[:]).

model is typically a DynamicPPL.Model, but can also be e.g. an AbstractMCMC.LogDensityModel.

source
Turing.Inference.transitions_from_chainMethod
transitions_from_chain(
     [rng::AbstractRNG,]
     model::Model,
     chain::MCMCChains.Chains;
@@ -624,5 +167,4 @@
 julia> [first(t.θ.x) for t in transitions] # extract samples for `x`
 2-element Array{Array{Float64,1},1}:
  [-2.0844148956440796]
 [-1.704630494695469]
source
Optimisation · Turing

API: Turing.Optimisation

SciMLBase.OptimizationProblemMethod
OptimizationProblem(log_density::OptimLogDensity, adtype, constraints)

Create an OptimizationProblem for the objective function defined by log_density.

source
Turing.Optimisation.MAPType
MAP <: ModeEstimator

Concrete type for maximum a posteriori estimation. Only used for the Optim.jl interface.

source
Turing.Optimisation.ModeEstimationConstraintsType
ModeEstimationConstraints

A struct that holds constraints for mode estimation problems.

The fields are the same as the possible constraints supported by Optimization.jl: lb and ub specify lower and upper bounds of box constraints. cons is a function that takes the parameters of the model and returns a list of derived quantities, which are then constrained by the lower and upper bounds set by lcons and ucons. We refer to these as generic constraints. Please see the documentation of Optimization.jl for more details.

Any of the fields can be nothing, disabling the corresponding constraints.

source
Turing.Optimisation.ModeEstimatorType
ModeEstimator

An abstract type to mark whether mode estimation is to be done with maximum a posteriori (MAP) or maximum likelihood estimation (MLE). This is only needed for the Optim.jl interface.

source
Turing.Optimisation.ModeResultType
ModeResult{
     V<:NamedArrays.NamedArray,
     M<:NamedArrays.NamedArray,
     O<:Optim.MultivariateOptimizationResults,
     S<:NamedArrays.NamedArray
}

A wrapper struct to store various results from a MAP or MLE estimation.

source
Turing.Optimisation.ModeResultMethod
ModeResult(log_density::OptimLogDensity, solution::SciMLBase.OptimizationSolution)

Create a ModeResult for a given log_density objective and a solution given by solve.

Optimization.solve returns its own result type. This function converts that into the richer format of ModeResult. It also takes care of transforming them back to the original parameter space in case the optimization was done in a transformed space.

source
Turing.Optimisation.OptimLogDensityMethod
(f::OptimLogDensity)(z)
(f::OptimLogDensity)(z, _)

Evaluate the negative log joint or log likelihood at the array z. Which one is evaluated depends on the context of f.

Any second argument is ignored. The two-argument method only exists to match the interface required by Optimization.jl.

source
Turing.Optimisation.OptimLogDensityMethod
OptimLogDensity(model::DynamicPPL.Model, context::OptimizationContext)

Create a callable OptimLogDensity struct that evaluates a model using the given context.

source
Turing.Optimisation.OptimizationContextType
OptimizationContext{C<:AbstractContext} <: AbstractContext

The OptimizationContext transforms variables to their constrained space, but does not use the density with respect to the transformation. This context is intended to allow an optimizer to sample in R^n freely.

source
Base.getMethod
Base.get(m::ModeResult, var_symbol::Symbol)
Base.get(m::ModeResult, var_symbols::AbstractVector{Symbol})

Return the values of all the variables with the symbol(s) var_symbol in the mode result m. The return value is a NamedTuple with var_symbols as the key(s). The second argument should be either a Symbol or a vector of Symbols.

source
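An illustrative sketch, assuming a mode estimate obtained from the gdemo-style model used in the Inference examples (the exact model is an assumption here):

m_est = maximum_a_posteriori(gdemo([1.5, 2]))
get(m_est, :m)          # NamedTuple with the value of m
get(m_est, [:m, :s²])   # NamedTuple keyed by both symbols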
Turing.Optimisation.estimate_modeFunction
estimate_mode(
     model::DynamicPPL.Model,
     estimator::ModeEstimator,
     [solver];
     kwargs...
-)

Find the mode of the probability distribution of a model.

Under the hood this function calls Optimization.solve.

Arguments

  • model::DynamicPPL.Model: The model for which to estimate the mode.
  • estimator::ModeEstimator: Can be either MLE() for maximum likelihood estimation or MAP() for maximum a posteriori estimation.
  • solver=nothing. The optimization algorithm to use. Optional. Can be any solver recognised by Optimization.jl. If omitted a default solver is used: LBFGS, or IPNewton if non-box constraints are present.

Keyword arguments

  • initial_params::Union{AbstractVector,Nothing}=nothing: Initial value for the optimization. Optional, unless non-box constraints are specified. If omitted it is generated by either sampling from the prior distribution or uniformly from the box constraints, if any.
  • adtype::AbstractADType=AutoForwardDiff(): The automatic differentiation type to use.
  • Keyword arguments lb, ub, cons, lcons, and ucons define constraints for the optimization problem. Please see ModeEstimationConstraints for more details.
  • Any extra keyword arguments are passed to Optimization.solve.
source
Turing.Optimisation.generate_initial_paramsMethod
generate_initial_params(model::DynamicPPL.Model, initial_params, constraints)

Generate an initial value for the optimization problem.

If initial_params is not nothing, a copy of it is returned. Otherwise initial parameter values are generated either by sampling from the prior (if no constraints are present) or uniformly from the box constraints. If generic constraints are set, an error is thrown.

source
Turing.Optimisation.maximum_a_posterioriMethod
maximum_a_posteriori(
    model::DynamicPPL.Model,
    [solver];
    kwargs...
)

Find the maximum a posteriori estimate of a model.

This is a convenience function that calls estimate_mode with MAP() as the estimator. Please see the documentation of Turing.Optimisation.estimate_mode for more details.

source
Turing.Optimisation.maximum_likelihoodMethod
maximum_likelihood(
    model::DynamicPPL.Model,
    [solver];
    kwargs...
)

Find the maximum likelihood estimate of a model.

This is a convenience function that calls estimate_mode with MLE() as the estimator. Please see the documentation of Turing.Optimisation.estimate_mode for more details.

source
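A hedged end-to-end sketch of the mode-estimation API described above; the model, data, and bounds are illustrative only:

using Turing

@model function demo(x)
    m ~ Normal(0, 1)
    for i in eachindex(x)
        x[i] ~ Normal(m, 1)
    end
end

model = demo([0.5, 1.5])
mle = maximum_likelihood(model)
map_estimate = maximum_a_posteriori(model; lb=[-2.0], ub=[2.0])  # box constraints via lb/ub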
API · Turing

API

Module-wide re-exports

Turing.jl directly re-exports the entire public API of the following packages:

Please see the individual packages for their documentation.

Individual exports and re-exports

All of the following symbols are exported unqualified by Turing, even though the documentation suggests that many of them are qualified. That means, for example, you can just write

using Turing
 
@model function my_model() end

sample(my_model(), Prior(), 100)

instead of

DynamicPPL.@model function my_model() end
 
sample(my_model(), Turing.Inference.Prior(), 100)


even though Prior() is actually defined in the Turing.Inference module and @model in the DynamicPPL package.

Modelling

Exported symbol | Documentation | Description
@model | DynamicPPL.@model | Define a probabilistic model
@varname | AbstractPPL.@varname | Generate a VarName from a Julia expression
@submodel | DynamicPPL.@submodel | Define a submodel

Inference

Exported symbol | Documentation | Description
sample | StatsBase.sample | Sample from a model

Samplers

Exported symbol | Documentation | Description
Prior | Turing.Inference.Prior | Sample from the prior distribution
MH | Turing.Inference.MH | Metropolis–Hastings
Emcee | Turing.Inference.Emcee | Affine-invariant ensemble sampler
ESS | Turing.Inference.ESS | Elliptical slice sampling
Gibbs | Turing.Inference.Gibbs | Gibbs sampling
GibbsConditional | Turing.Inference.GibbsConditional | A "pseudo-sampler" to provide analytical conditionals to Gibbs
HMC | Turing.Inference.HMC | Hamiltonian Monte Carlo
SGLD | Turing.Inference.SGLD | Stochastic gradient Langevin dynamics
SGHMC | Turing.Inference.SGHMC | Stochastic gradient Hamiltonian Monte Carlo
PolynomialStepsize | Turing.Inference.PolynomialStepsize | Returns a function which generates polynomially decaying step sizes
HMCDA | Turing.Inference.HMCDA | Hamiltonian Monte Carlo with dual averaging
NUTS | Turing.Inference.NUTS | No-U-Turn Sampler
IS | Turing.Inference.IS | Importance sampling
SMC | Turing.Inference.SMC | Sequential Monte Carlo
PG | Turing.Inference.PG | Particle Gibbs
CSMC | Turing.Inference.CSMC | The same as PG
externalsampler | Turing.Inference.externalsampler | Wrap an external sampler for use in Turing

Variational inference

See the variational inference tutorial for a walkthrough on how to use these.

Exported symbol | Documentation | Description
vi | AdvancedVI.vi | Perform variational inference
ADVI | AdvancedVI.ADVI | Construct an instance of a VI algorithm

Automatic differentiation types

These are used to specify the automatic differentiation backend to use. See the AD guide for more information.

Exported symbol | Documentation | Description
AutoForwardDiff | ADTypes.AutoForwardDiff | ForwardDiff.jl backend
AutoReverseDiff | ADTypes.AutoReverseDiff | ReverseDiff.jl backend
AutoZygote | ADTypes.AutoZygote | Zygote.jl backend
AutoMooncake | ADTypes.AutoMooncake | Mooncake.jl backend
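As a hedged sketch, an AD type is typically passed to a gradient-based sampler via its adtype keyword (see the individual sampler docstrings for exact signatures); my_model() here stands in for any Turing model and is only a placeholder:

sample(my_model(), NUTS(; adtype=AutoReverseDiff()), 1_000)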

Debugging

Turing.setprogress!Function
setprogress!(progress::Bool)

Enable progress logging in Turing if progress is true, and disable it otherwise.

source

Distributions

These distributions are defined in Turing.jl, but not in Distributions.jl.

Turing.FlatType
Flat()

The flat distribution is the improper distribution of real numbers that has the improper probability density function

\[f(x) = 1.\]

source
Turing.FlatPosType
FlatPos(l::Real)

The positive flat distribution with real-valued parameter l is the improper distribution of real numbers that has the improper probability density function

\[f(x) = \begin{cases}
0 & \text{if } x \leq l, \\
1 & \text{otherwise}.
\end{cases}\]


source
Turing.BinomialLogitType
BinomialLogit(n, logitp)

The Binomial distribution with logit parameterization characterizes the number of successes in a sequence of independent trials.

It has two parameters: n, the number of trials, and logitp, the logit of the probability of success in an individual trial, with the distribution

\[P(X = k) = {n \choose k}{(\text{logistic}(logitp))}^k (1 - \text{logistic}(logitp))^{n-k}, \quad \text{ for } k = 0,1,2, \ldots, n.\]

See also: Binomial

source
Turing.OrderedLogisticType
OrderedLogistic(η, c::AbstractVector)

The ordered logistic distribution with real-valued parameter η and cutpoints c has the probability mass function

\[P(X = k) = \begin{cases}
1 - \text{logistic}(\eta - c_1) & \text{if } k = 1, \\
\text{logistic}(\eta - c_{k-1}) - \text{logistic}(\eta - c_k) & \text{if } 1 < k < K, \\
\text{logistic}(\eta - c_{K-1}) & \text{if } k = K,
\end{cases}\]


where K = length(c) + 1.

source
Turing.LogPoissonType
LogPoisson(logλ)

The Poisson distribution with logarithmic parameterization of the rate parameter describes the number of independent events occurring within a unit time interval, given the average rate of occurrence $\exp(\log\lambda)$.

The distribution has the probability mass function

\[P(X = k) = \frac{e^{k \cdot \log\lambda}}{k!} e^{-e^{\log\lambda}}, \quad \text{ for } k = 0,1,2,\ldots.\]

See also: Poisson

source

BernoulliLogit is part of Distributions.jl since version 0.25.77. If you are using an older version of Distributions where this isn't defined, Turing will export the same distribution.

Distributions.BernoulliLogitType
BernoulliLogit(logitp=0.0)

A Bernoulli distribution that is parameterized by the logit logitp = logit(p) = log(p/(1-p)) of its success rate p.

\[P(X = k) = \begin{cases}
\operatorname{logistic}(-logitp) = \frac{1}{1 + \exp{(logitp)}} & \quad \text{for } k = 0, \\
\operatorname{logistic}(logitp) = \frac{1}{1 + \exp{(-logitp)}} & \quad \text{for } k = 1.
\end{cases}\]

External links:

See also Bernoulli

source
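A brief hedged sketch of how these log/logit-parameterized distributions are typically used inside a model; the model name, data, and priors are illustrative only:

@model function poisson_reg(y, x)
    α ~ Normal(0, 1)
    β ~ Normal(0, 1)
    for i in eachindex(y)
        y[i] ~ LogPoisson(α + β * x[i])   # rate exp(α + β*x[i]), specified on the log scale
    end
end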

Tools to work with distributions

Exported symbol | Documentation | Description
filldist | DistributionsAD.filldist | Create a product distribution from a distribution and integers
arraydist | DistributionsAD.arraydist | Create a product distribution from an array of distributions
NamedDist | DynamicPPL.NamedDist | A distribution that carries the name of the variable

Predictions

StatsAPI.predictFunction
predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)

Execute model conditioned on each sample in chain, and return the resulting Chains.

If include_all is false, the returned Chains will contain only the newly predicted variables, i.e. those not already present in chain.

Details

Internally calls Turing.Inference.transitions_from_chain to obtain the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.

Example

julia> using Turing; Turing.setprogress!(false);
@@ -521,8 +63,8 @@
         y[1]  20.0342  20.1188  20.2135  20.2588  20.4188
         y[2]  20.1870  20.3178  20.3839  20.4466  20.5895
 
 julia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));
 
 julia> sum(abs2, ys_test - ys_pred) ≤ 0.1
true
source

Querying model probabilities and quantities

Please see the generated quantities and probability interface guides for more information.

Exported symbol | Documentation | Description
generated_quantities | DynamicPPL.generated_quantities | Calculate additional quantities defined in a model
pointwise_loglikelihoods | DynamicPPL.pointwise_loglikelihoods | Compute log likelihoods for each sample in a chain
logprior | DynamicPPL.logprior | Compute log prior probability
logjoint | DynamicPPL.logjoint | Compute log joint probability
LogDensityFunction | DynamicPPL.LogDensityFunction | Wrap a Turing model to satisfy LogDensityFunctions.jl interface
condition | AbstractPPL.condition | Condition a model on data
decondition | AbstractPPL.decondition | Remove conditioning on data
conditioned | DynamicPPL.conditioned | Return the conditioned values of a model
fix | DynamicPPL.fix | Fix the value of a variable
unfix | DynamicPPL.unfix | Unfix the value of a variable
OrderedDict | OrderedCollections.OrderedDict | An ordered dictionary

Extra re-exports from Bijectors

Note that Bijectors itself does not export ordered.

Bijectors.orderedFunction
ordered(d::Distribution)

Return a Distribution whose support consists of ordered vectors, i.e., vectors with increasingly ordered elements.

Specifically, d is restricted to the subspace of its domain containing only ordered elements.

Warning

rand is implemented using rejection sampling, which can be slow for high-dimensional distributions. In such cases, consider using MCMC methods to sample from the distribution instead.

Warning

The resulting ordered distribution is un-normalized, which can cause issues in some contexts, e.g. in hierarchical models where the parameters of the ordered distribution are themselves sampled. See the notes below for a more detailed discussion.

Notes on ordered being un-normalized

The resulting ordered distribution is un-normalized. This is not a problem if used in a context where the normalizing factor is irrelevant, but if the value of the normalizing factor impacts the resulting computation, the results may be inaccurate.

For example, if the distribution is used in sampling a posterior distribution with MCMC and the parameters of the ordered distribution are themselves sampled, then the normalizing factor would in general be needed for accurate sampling, and ordered should not be used. However, if the parameters are fixed, then since MCMC does not require distributions be normalized, ordered may be used without problems.

A common case is where the distribution being ordered is a joint distribution of n identical univariate distributions. In this case the normalization factor works out to be the constant n!, and ordered can again be used without problems even if the parameters of the univariate distribution are sampled.

source
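A hedged sketch of using ordered to define an increasing cutpoint vector with fixed hyperparameters (the safe case discussed in the notes above); the dimension and base distribution are illustrative:

using LinearAlgebra

@model function cutpoint_model()
    c ~ ordered(MvNormal(zeros(3), I))   # c[1] ≤ c[2] ≤ c[3]
end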

Point estimates

See the mode estimation tutorial for more information.

Exported symbol | Documentation | Description
maximum_a_posteriori | Turing.Optimisation.maximum_a_posteriori | Find a MAP estimate for a model
maximum_likelihood | Turing.Optimisation.maximum_likelihood | Find a MLE estimate for a model
MAP | Turing.Optimisation.MAP | Type to use with Optim.jl for MAP estimation
MLE | Turing.Optimisation.MLE | Type to use with Optim.jl for MLE estimation
Home · Turing

Turing.jl

This site contains the API documentation for the identifiers exported by Turing.jl.

If you are looking for usage examples and guides, please visit https://turinglang.org/docs.

If not specified, ForwardDiff is used, with its chunksize automatically determined.\n\nUsage\n\nHMC(0.05, 10)\n\nTips\n\nIf you are receiving gradient errors when using HMC, try reducing the leapfrog step size ϵ, e.g.\n\n# Original step size\nsample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)\n\n# Reduced step size\nsample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.HMCDA","page":"Inference","title":"Turing.Inference.HMCDA","text":"HMCDA(\n n_adapts::Int, δ::Float64, λ::Float64; ϵ::Float64 = 0.0;\n adtype::ADTypes.AbstractADType = AutoForwardDiff(),\n)\n\nHamiltonian Monte Carlo sampler with Dual Averaging algorithm.\n\nUsage\n\nHMCDA(200, 0.65, 0.3)\n\nArguments\n\nn_adapts: Numbers of samples to use for adaptation.\nδ: Target acceptance rate. 65% is often recommended.\nλ: Target leapfrog length.\nϵ: Initial step size; 0 means automatically search by Turing.\nadtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.\n\nReference\n\nFor more information, please view the following paper (arXiv link):\n\nHoffman, Matthew D., and Andrew Gelman. \"The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo.\" Journal of Machine Learning Research 15, no. 1 (2014): 1593-1623.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.IS","page":"Inference","title":"Turing.Inference.IS","text":"IS()\n\nImportance sampling algorithm.\n\nUsage:\n\nIS()\n\nExample:\n\n# Define a simple Normal model with unknown mean and variance.\n@model function gdemo(x)\n s² ~ InverseGamma(2,3)\n m ~ Normal(0,sqrt.(s))\n x[1] ~ Normal(m, sqrt.(s))\n x[2] ~ Normal(m, sqrt.(s))\n return s², m\nend\n\nsample(gdemo([1.5, 2]), IS(), 1000)\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.MH-Tuple","page":"Inference","title":"Turing.Inference.MH","text":"MH(space...)\n\nConstruct a Metropolis-Hastings algorithm.\n\nThe arguments space can be\n\nBlank (i.e. 
MH()), in which case MH defaults to using the prior for each parameter as the proposal distribution.\nAn iterable of pairs or tuples mapping a Symbol to a AdvancedMH.Proposal, Distribution, or Function that generates returns a conditional proposal distribution.\nA covariance matrix to use as for mean-zero multivariate normal proposals.\n\nExamples\n\nThe default MH will draw proposal samples from the prior distribution using AdvancedMH.StaticProposal.\n\n@model function gdemo(x, y)\n s² ~ InverseGamma(2,3)\n m ~ Normal(0, sqrt(s²))\n x ~ Normal(m, sqrt(s²))\n y ~ Normal(m, sqrt(s²))\nend\n\nchain = sample(gdemo(1.5, 2.0), MH(), 1_000)\nmean(chain)\n\nSpecifying a single distribution implies the use of static MH:\n\n# Use a static proposal for s² (which happens to be the same\n# as the prior) and a static proposal for m (note that this\n# isn't a random walk proposal).\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n :s² => InverseGamma(2, 3),\n :m => Normal(0, 1)\n ),\n 1_000\n)\nmean(chain)\n\nSpecifying explicit proposals using the AdvancedMH interface:\n\n# Use a static proposal for s² and random walk with proposal\n# standard deviation of 0.25 for m.\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n :s² => AdvancedMH.StaticProposal(InverseGamma(2,3)),\n :m => AdvancedMH.RandomWalkProposal(Normal(0, 0.25))\n ),\n 1_000\n)\nmean(chain)\n\nUsing a custom function to specify a conditional distribution:\n\n# Use a static proposal for s and and a conditional proposal for m,\n# where the proposal is centered around the current sample.\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n :s² => InverseGamma(2, 3),\n :m => x -> Normal(x, 1)\n ),\n 1_000\n)\nmean(chain)\n\nProviding a covariance matrix will cause MH to perform random-walk sampling in the transformed space with proposals drawn from a multivariate normal distribution. The provided matrix must be positive semi-definite and square:\n\n# Providing a custom variance-covariance matrix\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n [0.25 0.05;\n 0.05 0.50]\n ),\n 1_000\n)\nmean(chain)\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.MHLogDensityFunction","page":"Inference","title":"Turing.Inference.MHLogDensityFunction","text":"MHLogDensityFunction\n\nA log density function for the MH sampler.\n\nThis variant uses the set_namedtuple! function to update the VarInfo.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.NUTS","page":"Inference","title":"Turing.Inference.NUTS","text":"NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0; adtype::ADTypes.AbstractADType=AutoForwardDiff()\n\nNo-U-Turn Sampler (NUTS) sampler.\n\nUsage:\n\nNUTS() # Use default NUTS configuration.\nNUTS(1000, 0.65) # Use 1000 adaption steps, and target accept ratio 0.65.\n\nArguments:\n\nn_adapts::Int : The number of samples to use with adaptation.\nδ::Float64 : Target acceptance rate for dual averaging.\nmax_depth::Int : Maximum doubling tree depth.\nΔ_max::Float64 : Maximum divergence during doubling tree.\ninit_ϵ::Float64 : Initial step size; 0 means automatically searching using a heuristic procedure.\nadtype::ADTypes.AbstractADType : The automatic differentiation (AD) backend. 
If not specified, ForwardDiff is used, with its chunksize automatically determined.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.PG","page":"Inference","title":"Turing.Inference.PG","text":"PG(n, space...)\nPG(n, [resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])\nPG(n, [resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])\n\nCreate a Particle Gibbs sampler of type PG with n particles for the variables in space.\n\nIf the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.PG-2","page":"Inference","title":"Turing.Inference.PG","text":"struct PG{space, R} <: Turing.Inference.ParticleInference\n\nParticle Gibbs sampler.\n\nFields\n\nnparticles::Int64: Number of particles.\nresampler::Any: Resampling algorithm.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.PolynomialStepsize-Union{Tuple{T}, Tuple{T, T, T}} where T<:Real","page":"Inference","title":"Turing.Inference.PolynomialStepsize","text":"PolynomialStepsize(a[, b=0, γ=0.55])\n\nCreate a polynomially decaying stepsize function.\n\nAt iteration t, the step size is\n\na (b + t)^-γ\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.Prior","page":"Inference","title":"Turing.Inference.Prior","text":"Prior()\n\nAlgorithm for sampling from the prior.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SGHMC","page":"Inference","title":"Turing.Inference.SGHMC","text":"SGHMC{AD,space}\n\nStochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.e\n\nFields\n\nlearning_rate::Real\nmomentum_decay::Real\nadtype::Any\n\nReference\n\nTianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SGHMC-Tuple{Vararg{Symbol}}","page":"Inference","title":"Turing.Inference.SGHMC","text":"SGHMC(\n space::Symbol...;\n learning_rate::Real,\n momentum_decay::Real,\n adtype::ADTypes.AbstractADType = AutoForwardDiff(),\n)\n\nCreate a Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.\n\nIf the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.\n\nReference\n\nTianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.SGLD","page":"Inference","title":"Turing.Inference.SGLD","text":"SGLD\n\nStochastic gradient Langevin dynamics (SGLD) sampler.\n\nFields\n\nstepsize::Any: Step size function.\nadtype::Any\n\nReference\n\nMax Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 
681–688).\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SGLD-Tuple{Vararg{Symbol}}","page":"Inference","title":"Turing.Inference.SGLD","text":"SGLD(\n space::Symbol...;\n stepsize = PolynomialStepsize(0.01),\n adtype::ADTypes.AbstractADType = AutoForwardDiff(),\n)\n\nStochastic gradient Langevin dynamics (SGLD) sampler.\n\nBy default, a polynomially decaying stepsize is used.\n\nIf the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.\n\nReference\n\nMax Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).\n\nSee also: PolynomialStepsize\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.SMC","page":"Inference","title":"Turing.Inference.SMC","text":"SMC(space...)\nSMC([resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])\nSMC([resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])\n\nCreate a sequential Monte Carlo sampler of type SMC for the variables in space.\n\nIf the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SMC-2","page":"Inference","title":"Turing.Inference.SMC","text":"struct SMC{space, R} <: Turing.Inference.ParticleInference\n\nSequential Monte Carlo sampler.\n\nFields\n\nresampler::Any\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#StatsAPI.predict-Tuple{DynamicPPL.Model, Chains}","page":"Inference","title":"StatsAPI.predict","text":"predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)\n\nExecute model conditioned on each sample in chain, and return the resulting Chains.\n\nIf include_all is false, the returned Chains will contain only those variables sampled/not present in chain.\n\nDetails\n\nInternally calls Turing.Inference.transitions_from_chain to obtained the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.\n\nExample\n\njulia> using Turing; Turing.setprogress!(false);\n[ Info: [Turing]: progress logging is disabled globally\n\njulia> @model function linear_reg(x, y, σ = 0.1)\n β ~ Normal(0, 1)\n\n for i ∈ eachindex(y)\n y[i] ~ Normal(β * x[i], σ)\n end\n end;\n\njulia> σ = 0.1; f(x) = 2 * x + 0.1 * randn();\n\njulia> Δ = 0.1; xs_train = 0:Δ:10; ys_train = f.(xs_train);\n\njulia> xs_test = [10 + Δ, 10 + 2 * Δ]; ys_test = f.(xs_test);\n\njulia> m_train = linear_reg(xs_train, ys_train, σ);\n\njulia> chain_lin_reg = sample(m_train, NUTS(100, 0.65), 200);\n┌ Info: Found initial step size\n└ ϵ = 0.003125\n\njulia> m_test = linear_reg(xs_test, Vector{Union{Missing, Float64}}(undef, length(ys_test)), σ);\n\njulia> predictions = predict(m_test, chain_lin_reg)\nObject of type Chains, with data of type 100×2×1 Array{Float64,3}\n\nIterations = 1:100\nThinning interval = 1\nChains = 1\nSamples per chain = 100\nparameters = y[1], y[2]\n\n2-element Array{ChainDataFrame,1}\n\nSummary Statistics\n parameters mean std naive_se mcse ess r_hat\n ────────── ─────── ────── ──────── ─────── ──────── ──────\n y[1] 20.1974 0.1007 0.0101 missing 101.0711 0.9922\n y[2] 20.3867 0.1062 0.0106 missing 101.4889 0.9903\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n ────────── ─────── ─────── ─────── ─────── ───────\n y[1] 20.0342 
20.1188 20.2135 20.2588 20.4188\n y[2] 20.1870 20.3178 20.3839 20.4466 20.5895\n\n\njulia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));\n\njulia> sum(abs2, ys_test - ys_pred) ≤ 0.1\ntrue\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.dist_val_tuple-Tuple{DynamicPPL.Sampler{<:MH}, Union{DynamicPPL.ThreadSafeVarInfo{<:DynamicPPL.VarInfo{Tmeta}}, DynamicPPL.VarInfo{Tmeta}} where Tmeta}","page":"Inference","title":"Turing.Inference.dist_val_tuple","text":"dist_val_tuple(spl::Sampler{<:MH}, vi::VarInfo)\n\nReturn two NamedTuples.\n\nThe first NamedTuple has symbols as keys and distributions as values. The second NamedTuple has model symbols as keys and their stored values as values.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.externalsampler-Tuple{AbstractMCMC.AbstractSampler}","page":"Inference","title":"Turing.Inference.externalsampler","text":"externalsampler(sampler::AbstractSampler; adtype=AutoForwardDiff(), unconstrained=true)\n\nWrap a sampler so it can be used as an inference algorithm.\n\nArguments\n\nsampler::AbstractSampler: The sampler to wrap.\n\nKeyword Arguments\n\nadtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff(): The automatic differentiation (AD) backend to use.\nunconstrained::Bool=true: Whether the sampler requires unconstrained space.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.getparams-Tuple{Any, Any}","page":"Inference","title":"Turing.Inference.getparams","text":"getparams(model, t)\n\nReturn a named tuple of parameters.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.gibbs_requires_recompute_logprob-NTuple{5, Any}","page":"Inference","title":"Turing.Inference.gibbs_requires_recompute_logprob","text":"gibbs_requires_recompute_logprob(model_dst, sampler_dst, sampler_src, state_dst, state_src)\n\nCheck if the log-probability of the destination model needs to be recomputed.\n\nDefaults to true\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.group_varnames_by_symbol-Tuple{Any}","page":"Inference","title":"Turing.Inference.group_varnames_by_symbol","text":"group_varnames_by_symbol(vns)\n\nGroup the varnames by their symbol.\n\nArguments\n\nvns: Iterable of VarName.\n\nReturns\n\nOrderedDict{Symbol, Vector{VarName}}: A dictionary mapping symbol to a vector of varnames.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.make_conditional-Tuple{DynamicPPL.Model, AbstractVector{<:AbstractPPL.VarName}, Any}","page":"Inference","title":"Turing.Inference.make_conditional","text":"make_conditional(model, target_variables, varinfo)\n\nReturn a new, conditioned model for a component of a Gibbs sampler.\n\nArguments\n\nmodel::DynamicPPL.Model: The model to condition.\ntarget_variables::AbstractVector{<:VarName}: The target variables of the component\n\nsampler. These will not be conditioned.\n\nvarinfo::DynamicPPL.AbstractVarInfo: Values for all variables in the model. All the\n\nvalues in varinfo but not in target_variables will be conditioned to the values they have in varinfo.\n\nReturns\n\nA new model with the variables not in target_variables conditioned.\nThe GibbsContext object that will be used to condition the variables. 
This is necessary\n\nbecause evaluation can mutate its global_varinfo field, which we need to access later.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.mh_accept-Tuple{Real, Real, Real}","page":"Inference","title":"Turing.Inference.mh_accept","text":"mh_accept(logp_current::Real, logp_proposal::Real, log_proposal_ratio::Real)\n\nDecide if a proposal x with log probability log p(x) = logp_proposal and log proposal ratio log k(x x) - log k(x x) = log_proposal_ratio in a Metropolis-Hastings algorithm with Markov kernel k(x_t x_t+1) and current state x with log probability log p(x) = logp_current is accepted by evaluating the Metropolis-Hastings acceptance criterion\n\nlog U leq log p(x) - log p(x) + log k(x x) - log k(x x)\n\nfor a uniform random number U in 0 1).\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.recompute_logprob!!-Tuple{Random.AbstractRNG, DynamicPPL.Model, DynamicPPL.Sampler{<:Turing.Inference.ExternalSampler}, Any}","page":"Inference","title":"Turing.Inference.recompute_logprob!!","text":"recompute_logprob!!(rng, model, sampler, state)\n\nRecompute the log-probability of the model based on the given state and return the resulting state.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.requires_unconstrained_space-Union{Tuple{Turing.Inference.ExternalSampler{<:Any, <:Any, Unconstrained}}, Tuple{Unconstrained}} where Unconstrained","page":"Inference","title":"Turing.Inference.requires_unconstrained_space","text":"requires_unconstrained_space(sampler::ExternalSampler)\n\nReturn true if the sampler requires unconstrained space, and false otherwise.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.set_namedtuple!-Tuple{Union{DynamicPPL.ThreadSafeVarInfo{<:DynamicPPL.VarInfo{Tmeta}}, DynamicPPL.VarInfo{Tmeta}} where Tmeta, NamedTuple}","page":"Inference","title":"Turing.Inference.set_namedtuple!","text":"set_namedtuple!(vi::VarInfo, nt::NamedTuple)\n\nPlaces the values of a NamedTuple into the relevant places of a VarInfo.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.setparams_varinfo!!-Tuple{Any, DynamicPPL.Sampler, Any, DynamicPPL.AbstractVarInfo}","page":"Inference","title":"Turing.Inference.setparams_varinfo!!","text":"setparams_varinfo!!(model, sampler::Sampler, state, params::AbstractVarInfo)\n\nA lot like AbstractMCMC.setparams!!, but instead of taking a vector of parameters, takes an AbstractVarInfo object. Also takes the sampler as an argument. By default, falls back to AbstractMCMC.setparams!!(model, state, params[:]).\n\nmodel is typically a DynamicPPL.Model, but can also be e.g. 
an AbstractMCMC.LogDensityModel.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.transitions_from_chain-Tuple{DynamicPPL.Model, Chains}","page":"Inference","title":"Turing.Inference.transitions_from_chain","text":"transitions_from_chain(\n [rng::AbstractRNG,]\n model::Model,\n chain::MCMCChains.Chains;\n sampler = DynamicPPL.SampleFromPrior()\n)\n\nExecute model conditioned on each sample in chain, and return resulting transitions.\n\nThe returned transitions are represented in a Vector{<:Turing.Inference.Transition}.\n\nDetails\n\nIn a bit more detail, the process is as follows:\n\nFor every sample in chain\nFor every variable in sample\nSet variable in model to its value in sample\nExecute model with variables fixed as above, sampling variables NOT present in chain using SampleFromPrior\nReturn sampled variables and log-joint\n\nExample\n\njulia> using Turing\n\njulia> @model function demo()\n m ~ Normal(0, 1)\n x ~ Normal(m, 1)\n end;\n\njulia> m = demo();\n\njulia> chain = Chains(randn(2, 1, 1), [\"m\"]); # 2 samples of `m`\n\njulia> transitions = Turing.Inference.transitions_from_chain(m, chain);\n\njulia> [Turing.Inference.getlogp(t) for t in transitions] # extract the logjoints\n2-element Array{Float64,1}:\n -3.6294991938628374\n -2.5697948166987845\n\njulia> [first(t.θ.x) for t in transitions] # extract samples for `x`\n2-element Array{Array{Float64,1},1}:\n [-2.0844148956440796]\n [-1.704630494695469]\n\n\n\n\n\n","category":"method"},{"location":"api/#API","page":"API","title":"API","text":"","category":"section"},{"location":"api/#Module-wide-re-exports","page":"API","title":"Module-wide re-exports","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Turing.jl directly re-exports the entire public API of the following packages:","category":"page"},{"location":"api/","page":"API","title":"API","text":"Distributions.jl\nMCMCChains.jl\nAbstractMCMC.jl\nBijectors.jl\nLibtask.jl","category":"page"},{"location":"api/","page":"API","title":"API","text":"Please see the individual packages for their documentation.","category":"page"},{"location":"api/#Individual-exports-and-re-exports","page":"API","title":"Individual exports and re-exports","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"All of the following symbols are exported unqualified by Turing, even though the documentation suggests that many of them are qualified. 
That means, for example, you can just write","category":"page"},{"location":"api/","page":"API","title":"API","text":"using Turing\n\n@model function my_model() end\n\nsample(my_model(), Prior(), 100)","category":"page"},{"location":"api/","page":"API","title":"API","text":"instead of","category":"page"},{"location":"api/","page":"API","title":"API","text":"DynamicPPL.@model function my_model() end\n\nsample(my_model(), Turing.Inference.Prior(), 100)","category":"page"},{"location":"api/","page":"API","title":"API","text":"even though Prior() is actually defined in the Turing.Inference module and @model in the DynamicPPL package.","category":"page"},{"location":"api/#Modelling","page":"API","title":"Modelling","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\n@model DynamicPPL.@model Define a probabilistic model\n@varname AbstractPPL.@varname Generate a VarName from a Julia expression\n@submodel DynamicPPL.@submodel Define a submodel","category":"page"},{"location":"api/#Inference","page":"API","title":"Inference","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nsample StatsBase.sample Sample from a model","category":"page"},{"location":"api/#Samplers","page":"API","title":"Samplers","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nPrior Turing.Inference.Prior Sample from the prior distribution\nMH Turing.Inference.MH Metropolis–Hastings\nEmcee Turing.Inference.Emcee Affine-invariant ensemble sampler\nESS Turing.Inference.ESS Elliptical slice sampling\nGibbs Turing.Inference.Gibbs Gibbs sampling\nGibbsConditional Turing.Inference.GibbsConditional A \"pseudo-sampler\" to provide analytical conditionals to Gibbs\nHMC Turing.Inference.HMC Hamiltonian Monte Carlo\nSGLD Turing.Inference.SGLD Stochastic gradient Langevin dynamics\nSGHMC Turing.Inference.SGHMC Stochastic gradient Hamiltonian Monte Carlo\nPolynomialStepsize Turing.Inference.PolynomialStepsize Returns a function which generates polynomially decaying step sizes\nHMCDA Turing.Inference.HMCDA Hamiltonian Monte Carlo with dual averaging\nNUTS Turing.Inference.NUTS No-U-Turn Sampler\nIS Turing.Inference.IS Importance sampling\nSMC Turing.Inference.SMC Sequential Monte Carlo\nPG Turing.Inference.PG Particle Gibbs\nCSMC Turing.Inference.CSMC The same as PG\nexternalsampler Turing.Inference.externalsampler Wrap an external sampler for use in Turing","category":"page"},{"location":"api/#Variational-inference","page":"API","title":"Variational inference","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"See the variational inference tutorial for a walkthrough on how to use these.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nvi AdvancedVI.vi Perform variational inference\nADVI AdvancedVI.ADVI Construct an instance of a VI algorithm","category":"page"},{"location":"api/#Automatic-differentiation-types","page":"API","title":"Automatic differentiation types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"These are used to specify the automatic differentiation backend to use. 
See the AD guide for more information.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nAutoForwardDiff ADTypes.AutoForwardDiff ForwardDiff.jl backend\nAutoReverseDiff ADTypes.AutoReverseDiff ReverseDiff.jl backend\nAutoZygote ADTypes.AutoZygote Zygote.jl backend\nAutoMooncake ADTypes.AutoMooncake Mooncake.jl backend","category":"page"},{"location":"api/#Debugging","page":"API","title":"Debugging","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"setprogress!","category":"page"},{"location":"api/#Turing.setprogress!","page":"API","title":"Turing.setprogress!","text":"setprogress!(progress::Bool)\n\nEnable progress logging in Turing if progress is true, and disable it otherwise.\n\n\n\n\n\n","category":"function"},{"location":"api/#Distributions","page":"API","title":"Distributions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"These distributions are defined in Turing.jl, but not in Distributions.jl.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Flat\nFlatPos\nBinomialLogit\nOrderedLogistic\nLogPoisson","category":"page"},{"location":"api/#Turing.Flat","page":"API","title":"Turing.Flat","text":"Flat()\n\nThe flat distribution is the improper distribution of real numbers that has the improper probability density function\n\nf(x) = 1\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.FlatPos","page":"API","title":"Turing.FlatPos","text":"FlatPos(l::Real)\n\nThe positive flat distribution with real-valued parameter l is the improper distribution of real numbers that has the improper probability density function\n\nf(x) = begincases\n0 textif x leq l \n1 textotherwise\nendcases\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.BinomialLogit","page":"API","title":"Turing.BinomialLogit","text":"BinomialLogit(n, logitp)\n\nThe Binomial distribution with logit parameterization characterizes the number of successes in a sequence of independent trials.\n\nIt has two parameters: n, the number of trials, and logitp, the logit of the probability of success in an individual trial, with the distribution\n\nP(X = k) = n choose k(textlogistic(logitp))^k (1 - textlogistic(logitp))^n-k quad text for k = 012 ldots n\n\nSee also: Binomial\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.OrderedLogistic","page":"API","title":"Turing.OrderedLogistic","text":"OrderedLogistic(η, c::AbstractVector)\n\nThe ordered logistic distribution with real-valued parameter η and cutpoints c has the probability mass function\n\nP(X = k) = begincases\n 1 - textlogistic(eta - c_1) textif k = 1 \n textlogistic(eta - c_k-1) - textlogistic(eta - c_k) textif 1 k K \n textlogistic(eta - c_K-1) textif k = K\nendcases\n\nwhere K = length(c) + 1.\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.LogPoisson","page":"API","title":"Turing.LogPoisson","text":"LogPoisson(logλ)\n\nThe Poisson distribution with logarithmic parameterization of the rate parameter describes the number of independent events occurring within a unit time interval, given the average rate of occurrence exp(loglambda).\n\nThe distribution has the probability mass function\n\nP(X = k) = frace^k cdot loglambdak e^-e^loglambda quad text for k = 012ldots\n\nSee also: Poisson\n\n\n\n\n\n","category":"type"},{"location":"api/","page":"API","title":"API","text":"BernoulliLogit is part of Distributions.jl since version 0.25.77. 
If you are using an older version of Distributions where this isn't defined, Turing will export the same distribution.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Distributions.BernoulliLogit","category":"page"},{"location":"api/#Distributions.BernoulliLogit","page":"API","title":"Distributions.BernoulliLogit","text":"BernoulliLogit(logitp=0.0)\n\nA Bernoulli distribution that is parameterized by the logit logitp = logit(p) = log(p/(1-p)) of its success rate p.\n\nP(X = k) = begincases\noperatornamelogistic(-logitp) = frac11 + exp(logitp) quad textfor k = 0 \noperatornamelogistic(logitp) = frac11 + exp(-logitp) quad textfor k = 1\nendcases\n\nExternal links:\n\nBernoulli distribution on Wikipedia\n\nSee also Bernoulli\n\n\n\n\n\n","category":"type"},{"location":"api/#Tools-to-work-with-distributions","page":"API","title":"Tools to work with distributions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nfilldist DistributionsAD.filldist Create a product distribution from a distribution and integers\narraydist DistributionsAD.arraydist Create a product distribution from an array of distributions\nNamedDist DynamicPPL.NamedDist A distribution that carries the name of the variable","category":"page"},{"location":"api/#Predictions","page":"API","title":"Predictions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"predict","category":"page"},{"location":"api/#StatsAPI.predict","page":"API","title":"StatsAPI.predict","text":"predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)\n\nExecute model conditioned on each sample in chain, and return the resulting Chains.\n\nIf include_all is false, the returned Chains will contain only those variables sampled/not present in chain.\n\nDetails\n\nInternally calls Turing.Inference.transitions_from_chain to obtained the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.\n\nExample\n\njulia> using Turing; Turing.setprogress!(false);\n[ Info: [Turing]: progress logging is disabled globally\n\njulia> @model function linear_reg(x, y, σ = 0.1)\n β ~ Normal(0, 1)\n\n for i ∈ eachindex(y)\n y[i] ~ Normal(β * x[i], σ)\n end\n end;\n\njulia> σ = 0.1; f(x) = 2 * x + 0.1 * randn();\n\njulia> Δ = 0.1; xs_train = 0:Δ:10; ys_train = f.(xs_train);\n\njulia> xs_test = [10 + Δ, 10 + 2 * Δ]; ys_test = f.(xs_test);\n\njulia> m_train = linear_reg(xs_train, ys_train, σ);\n\njulia> chain_lin_reg = sample(m_train, NUTS(100, 0.65), 200);\n┌ Info: Found initial step size\n└ ϵ = 0.003125\n\njulia> m_test = linear_reg(xs_test, Vector{Union{Missing, Float64}}(undef, length(ys_test)), σ);\n\njulia> predictions = predict(m_test, chain_lin_reg)\nObject of type Chains, with data of type 100×2×1 Array{Float64,3}\n\nIterations = 1:100\nThinning interval = 1\nChains = 1\nSamples per chain = 100\nparameters = y[1], y[2]\n\n2-element Array{ChainDataFrame,1}\n\nSummary Statistics\n parameters mean std naive_se mcse ess r_hat\n ────────── ─────── ────── ──────── ─────── ──────── ──────\n y[1] 20.1974 0.1007 0.0101 missing 101.0711 0.9922\n y[2] 20.3867 0.1062 0.0106 missing 101.4889 0.9903\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n ────────── ─────── ─────── ─────── ─────── ───────\n y[1] 20.0342 20.1188 20.2135 20.2588 20.4188\n y[2] 20.1870 20.3178 20.3839 20.4466 20.5895\n\n\njulia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));\n\njulia> sum(abs2, 
ys_test - ys_pred) ≤ 0.1\ntrue\n\n\n\n\n\n","category":"function"},{"location":"api/#Querying-model-probabilities-and-quantities","page":"API","title":"Querying model probabilities and quantities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Please see the generated quantities and probability interface guides for more information.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\ngenerated_quantities DynamicPPL.generated_quantities Calculate additional quantities defined in a model\npointwise_loglikelihoods DynamicPPL.pointwise_loglikelihoods Compute log likelihoods for each sample in a chain\nlogprior DynamicPPL.logprior Compute log prior probability\nlogjoint DynamicPPL.logjoint Compute log joint probability\nLogDensityFunction DynamicPPL.LogDensityFunction Wrap a Turing model to satisfy LogDensityFunctions.jl interface\ncondition AbstractPPL.condition Condition a model on data\ndecondition AbstractPPL.decondition Remove conditioning on data\nconditioned DynamicPPL.conditioned Return the conditioned values of a model\nfix DynamicPPL.fix Fix the value of a variable\nunfix DynamicPPL.unfix Unfix the value of a variable\nOrderedDict OrderedCollections.OrderedDict An ordered dictionary","category":"page"},{"location":"api/#Extra-re-exports-from-Bijectors","page":"API","title":"Extra re-exports from Bijectors","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Note that Bijectors itself does not export ordered.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Bijectors.ordered","category":"page"},{"location":"api/#Bijectors.ordered","page":"API","title":"Bijectors.ordered","text":"ordered(d::Distribution)\n\nReturn a Distribution whose support are ordered vectors, i.e., vectors with increasingly ordered elements.\n\nSpecifically, d is restricted to the subspace of its domain containing only ordered elements.\n\nwarning: Warning\nrand is implemented using rejection sampling, which can be slow for high-dimensional distributions. In such cases, consider using MCMC methods to sample from the distribution instead.\n\nwarning: Warning\nThe resulting ordered distribution is un-normalized, which can cause issues in some contexts, e.g. in hierarchical models where the parameters of the ordered distribution are themselves sampled. See the notes below for a more detailed discussion.\n\nNotes on ordered being un-normalized\n\nThe resulting ordered distribution is un-normalized. This is not a problem if used in a context where the normalizing factor is irrelevant, but if the value of the normalizing factor impacts the resulting computation, the results may be inaccurate.\n\nFor example, if the distribution is used in sampling a posterior distribution with MCMC and the parameters of the ordered distribution are themselves sampled, then the normalizing factor would in general be needed for accurate sampling, and ordered should not be used. However, if the parameters are fixed, then since MCMC does not require distributions be normalized, ordered may be used without problems.\n\nA common case is where the distribution being ordered is a joint distribution of n identical univariate distributions. 
In this case the normalization factor works out to be the constant n!, and ordered can again be used without problems even if the parameters of the univariate distribution are sampled.\n\n\n\n\n\n","category":"function"},{"location":"api/#Point-estimates","page":"API","title":"Point estimates","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"See the mode estimation tutorial for more information.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nmaximum_a_posteriori Turing.Optimisation.maximum_a_posteriori Find a MAP estimate for a model\nmaximum_likelihood Turing.Optimisation.maximum_likelihood Find a MLE estimate for a model\nMAP Turing.Optimisation.MAP Type to use with Optim.jl for MAP estimation\nMLE Turing.Optimisation.MLE Type to use with Optim.jl for MLE estimation","category":"page"},{"location":"#Turing.jl","page":"Home","title":"Turing.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This site contains the API documentation for the identifiers exported by Turing.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you are looking for usage examples and guides, please visit https://turinglang.org/docs.","category":"page"}] +[{"location":"api/Optimisation/#API:-Turing.Optimisation","page":"Optimisation","title":"API: Turing.Optimisation","text":"","category":"section"},{"location":"api/Optimisation/","page":"Optimisation","title":"Optimisation","text":"Modules = [Turing.Optimisation]\nOrder = [:type, :function]","category":"page"},{"location":"api/Optimisation/#SciMLBase.OptimizationProblem-Tuple{LogDensityFunction{V, M, C} where {M<:DynamicPPL.Model, C<:Turing.Optimisation.OptimizationContext, V<:DynamicPPL.VarInfo}, Any, Any}","page":"Optimisation","title":"SciMLBase.OptimizationProblem","text":"OptimizationProblem(log_density::OptimLogDensity, adtype, constraints)\n\nCreate an OptimizationProblem for the objective function defined by log_density.\n\n\n\n\n\n","category":"method"},{"location":"api/Optimisation/#Turing.Optimisation.MAP","page":"Optimisation","title":"Turing.Optimisation.MAP","text":"MAP <: ModeEstimator\n\nConcrete type for maximum a posteriori estimation. Only used for the Optim.jl interface.\n\n\n\n\n\n","category":"type"},{"location":"api/Optimisation/#Turing.Optimisation.MLE","page":"Optimisation","title":"Turing.Optimisation.MLE","text":"MLE <: ModeEstimator\n\nConcrete type for maximum likelihood estimation. Only used for the Optim.jl interface.\n\n\n\n\n\n","category":"type"},{"location":"api/Optimisation/#Turing.Optimisation.ModeEstimationConstraints","page":"Optimisation","title":"Turing.Optimisation.ModeEstimationConstraints","text":"ModeEstimationConstraints\n\nA struct that holds constraints for mode estimation problems.\n\nThe fields are the same as possible constraints supported by the Optimization.jl: ub and lb specify lower and upper bounds of box constraints. cons is a function that takes the parameters of the model and returns a list of derived quantities, which are then constrained by the lower and upper bounds set by lcons and ucons. We refer to these as generic constraints. 
Please see the documentation of Optimization.jl for more details.\n\nAny of the fields can be nothing, disabling the corresponding constraints.\n\n\n\n\n\n","category":"type"},{"location":"api/Optimisation/#Turing.Optimisation.ModeEstimator","page":"Optimisation","title":"Turing.Optimisation.ModeEstimator","text":"ModeEstimator\n\nAn abstract type to mark whether mode estimation is to be done with maximum a posteriori (MAP) or maximum likelihood estimation (MLE). This is only needed for the Optim.jl interface.\n\n\n\n\n\n","category":"type"},{"location":"api/Optimisation/#Turing.Optimisation.ModeResult","page":"Optimisation","title":"Turing.Optimisation.ModeResult","text":"ModeResult{\n V<:NamedArrays.NamedArray,\n M<:NamedArrays.NamedArray,\n O<:Optim.MultivariateOptimizationResults,\n S<:NamedArrays.NamedArray\n}\n\nA wrapper struct to store various results from a MAP or MLE estimation.\n\n\n\n\n\n","category":"type"},{"location":"api/Optimisation/#Turing.Optimisation.ModeResult-Tuple{LogDensityFunction{V, M, C} where {M<:DynamicPPL.Model, C<:Turing.Optimisation.OptimizationContext, V<:DynamicPPL.VarInfo}, SciMLBase.OptimizationSolution}","page":"Optimisation","title":"Turing.Optimisation.ModeResult","text":"ModeResult(log_density::OptimLogDensity, solution::SciMLBase.OptimizationSolution)\n\nCreate a ModeResult for a given log_density objective and a solution given by solve.\n\nOptimization.solve returns its own result type. This function converts that into the richer format of ModeResult. It also takes care of transforming them back to the original parameter space in case the optimization was done in a transformed space.\n\n\n\n\n\n","category":"method"},{"location":"api/Optimisation/#Turing.Optimisation.OptimLogDensity","page":"Optimisation","title":"Turing.Optimisation.OptimLogDensity","text":"OptimLogDensity{M<:DynamicPPL.Model,C<:Context,V<:DynamicPPL.VarInfo}\n\nA struct that stores the negative log density function of a DynamicPPL model.\n\n\n\n\n\n","category":"type"},{"location":"api/Optimisation/#Turing.Optimisation.OptimLogDensity-Tuple{AbstractVector}","page":"Optimisation","title":"Turing.Optimisation.OptimLogDensity","text":"(f::OptimLogDensity)(z)\n(f::OptimLogDensity)(z, _)\n\nEvaluate the negative log joint or log likelihood at the array z. Which one is evaluated depends on the context of f.\n\nAny second argument is ignored. The two-argument method only exists to match interface the required by Optimization.jl.\n\n\n\n\n\n","category":"method"},{"location":"api/Optimisation/#Turing.Optimisation.OptimLogDensity-Tuple{DynamicPPL.Model, Turing.Optimisation.OptimizationContext}","page":"Optimisation","title":"Turing.Optimisation.OptimLogDensity","text":"OptimLogDensity(model::DynamicPPL.Model, context::OptimizationContext)\n\nCreate a callable OptimLogDensity struct that evaluates a model using the given context.\n\n\n\n\n\n","category":"method"},{"location":"api/Optimisation/#Turing.Optimisation.OptimizationContext","page":"Optimisation","title":"Turing.Optimisation.OptimizationContext","text":"OptimizationContext{C<:AbstractContext} <: AbstractContext\n\nThe OptimizationContext transforms variables to their constrained space, but does not use the density with respect to the transformation. 
This context is intended to allow an optimizer to sample in R^n freely.\n\n\n\n\n\n","category":"type"},{"location":"api/Optimisation/#Base.get-Tuple{Turing.Optimisation.ModeResult, AbstractVector{Symbol}}","page":"Optimisation","title":"Base.get","text":"Base.get(m::ModeResult, var_symbol::Symbol)\nBase.get(m::ModeResult, var_symbols::AbstractVector{Symbol})\n\nReturn the values of all the variables with the symbol(s) var_symbol in the mode result m. The return value is a NamedTuple with var_symbols as the key(s). The second argument should be either a Symbol or a vector of Symbols.\n\n\n\n\n\n","category":"method"},{"location":"api/Optimisation/#Turing.Optimisation.estimate_mode","page":"Optimisation","title":"Turing.Optimisation.estimate_mode","text":"estimate_mode(\n model::DynamicPPL.Model,\n estimator::ModeEstimator,\n [solver];\n kwargs...\n)\n\nFind the mode of the probability distribution of a model.\n\nUnder the hood this function calls Optimization.solve.\n\nArguments\n\nmodel::DynamicPPL.Model: The model for which to estimate the mode.\nestimator::ModeEstimator: Can be either MLE() for maximum likelihood estimation or MAP() for maximum a posteriori estimation.\nsolver=nothing. The optimization algorithm to use. Optional. Can be any solver recognised by Optimization.jl. If omitted a default solver is used: LBFGS, or IPNewton if non-box constraints are present.\n\nKeyword arguments\n\ninitial_params::Union{AbstractVector,Nothing}=nothing: Initial value for the optimization. Optional, unless non-box constraints are specified. If omitted it is generated by either sampling from the prior distribution or uniformly from the box constraints, if any.\nadtype::AbstractADType=AutoForwardDiff(): The automatic differentiation type to use.\nKeyword arguments lb, ub, cons, lcons, and ucons define constraints for the optimization problem. Please see ModeEstimationConstraints for more details.\nAny extra keyword arguments are passed to Optimization.solve.\n\n\n\n\n\n","category":"function"},{"location":"api/Optimisation/#Turing.Optimisation.generate_initial_params-Tuple{DynamicPPL.Model, Any, Any}","page":"Optimisation","title":"Turing.Optimisation.generate_initial_params","text":"generate_initial_params(model::DynamicPPL.Model, initial_params, constraints)\n\nGenerate an initial value for the optimization problem.\n\nIf initial_params is not nothing, a copy of it is returned. Otherwise initial parameter values are generated either by sampling from the prior (if no constraints are present) or uniformly from the box constraints. If generic constraints are set, an error is thrown.\n\n\n\n\n\n","category":"method"},{"location":"api/Optimisation/#Turing.Optimisation.maximum_a_posteriori-Tuple{DynamicPPL.Model, Vararg{Any}}","page":"Optimisation","title":"Turing.Optimisation.maximum_a_posteriori","text":"maximum_a_posteriori(\n model::DynamicPPL.Model,\n [solver];\n kwargs...\n)\n\nFind the maximum a posteriori estimate of a model.\n\nThis is a convenience function that calls estimate_mode with MAP() as the estimator. 
Please see the documentation of Turing.Optimisation.estimate_mode for more details.\n\n\n\n\n\n","category":"method"},{"location":"api/Optimisation/#Turing.Optimisation.maximum_likelihood-Tuple{DynamicPPL.Model, Vararg{Any}}","page":"Optimisation","title":"Turing.Optimisation.maximum_likelihood","text":"maximum_likelihood(\n model::DynamicPPL.Model,\n [solver];\n kwargs...\n)\n\nFind the maximum likelihood estimate of a model.\n\nThis is a convenience function that calls estimate_mode with MLE() as the estimator. Please see the documentation of Turing.Optimisation.estimate_mode for more details.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#API:-Turing.Inference","page":"Inference","title":"API: Turing.Inference","text":"","category":"section"},{"location":"api/Inference/","page":"Inference","title":"Inference","text":"Modules = [Turing.Inference]\nOrder = [:type, :function]","category":"page"},{"location":"api/Inference/#Turing.Inference.CSMC","page":"Inference","title":"Turing.Inference.CSMC","text":"CSMC(...)\n\nEquivalent to PG.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.ESS","page":"Inference","title":"Turing.Inference.ESS","text":"ESS\n\nElliptical slice sampling algorithm.\n\nExamples\n\njulia> @model function gdemo(x)\n m ~ Normal()\n x ~ Normal(m, 0.5)\n end\ngdemo (generic function with 2 methods)\n\njulia> sample(gdemo(1.0), ESS(), 1_000) |> mean\nMean\n\n│ Row │ parameters │ mean │\n│ │ Symbol │ Float64 │\n├─────┼────────────┼──────────┤\n│ 1 │ m │ 0.824853 │\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.Emcee","page":"Inference","title":"Turing.Inference.Emcee","text":"Emcee(n_walkers::Int, stretch_length=2.0)\n\nAffine-invariant ensemble sampling algorithm.\n\nReference\n\nForeman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer. Publications of the Astronomical Society of the Pacific, 125 (925), 306. 
https://doi.org/10.1086/670067\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.ExternalSampler","page":"Inference","title":"Turing.Inference.ExternalSampler","text":"ExternalSampler{S<:AbstractSampler,AD<:ADTypes.AbstractADType,Unconstrained}\n\nRepresents a sampler that is not an implementation of InferenceAlgorithm.\n\nThe Unconstrained type-parameter is to indicate whether the sampler requires unconstrained space.\n\nFields\n\nsampler::AbstractMCMC.AbstractSampler: the sampler to wrap\nadtype::ADTypes.AbstractADType: the automatic differentiation (AD) backend to use\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.Gibbs","page":"Inference","title":"Turing.Inference.Gibbs","text":"Gibbs\n\nA type representing a Gibbs sampler.\n\nFields\n\nvarnames::Any: varnames representing variables for each sampler\nsamplers::Any: samplers for each entry in varnames\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.GibbsContext","page":"Inference","title":"Turing.Inference.GibbsContext","text":"GibbsContext(target_varnames, global_varinfo, context)\n\nA context used in the implementation of the Turing.jl Gibbs sampler.\n\nThere will be one GibbsContext for each iteration of a component sampler.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.HMC","page":"Inference","title":"Turing.Inference.HMC","text":"HMC(ϵ::Float64, n_leapfrog::Int; adtype::ADTypes.AbstractADType = AutoForwardDiff())\n\nHamiltonian Monte Carlo sampler with static trajectory.\n\nArguments\n\nϵ: The leapfrog step size to use.\nn_leapfrog: The number of leapfrog steps to use.\nadtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.\n\nUsage\n\nHMC(0.05, 10)\n\nTips\n\nIf you are receiving gradient errors when using HMC, try reducing the leapfrog step size ϵ, e.g.\n\n# Original step size\nsample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)\n\n# Reduced step size\nsample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.HMCDA","page":"Inference","title":"Turing.Inference.HMCDA","text":"HMCDA(\n n_adapts::Int, δ::Float64, λ::Float64; ϵ::Float64 = 0.0;\n adtype::ADTypes.AbstractADType = AutoForwardDiff(),\n)\n\nHamiltonian Monte Carlo sampler with Dual Averaging algorithm.\n\nUsage\n\nHMCDA(200, 0.65, 0.3)\n\nArguments\n\nn_adapts: Numbers of samples to use for adaptation.\nδ: Target acceptance rate. 65% is often recommended.\nλ: Target leapfrog length.\nϵ: Initial step size; 0 means automatically search by Turing.\nadtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.\n\nReference\n\nFor more information, please view the following paper (arXiv link):\n\nHoffman, Matthew D., and Andrew Gelman. \"The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo.\" Journal of Machine Learning Research 15, no. 
1 (2014): 1593-1623.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.IS","page":"Inference","title":"Turing.Inference.IS","text":"IS()\n\nImportance sampling algorithm.\n\nUsage:\n\nIS()\n\nExample:\n\n# Define a simple Normal model with unknown mean and variance.\n@model function gdemo(x)\n s² ~ InverseGamma(2,3)\n m ~ Normal(0,sqrt.(s))\n x[1] ~ Normal(m, sqrt.(s))\n x[2] ~ Normal(m, sqrt.(s))\n return s², m\nend\n\nsample(gdemo([1.5, 2]), IS(), 1000)\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.MH-Tuple","page":"Inference","title":"Turing.Inference.MH","text":"MH(space...)\n\nConstruct a Metropolis-Hastings algorithm.\n\nThe arguments space can be\n\nBlank (i.e. MH()), in which case MH defaults to using the prior for each parameter as the proposal distribution.\nAn iterable of pairs or tuples mapping a Symbol to a AdvancedMH.Proposal, Distribution, or Function that generates returns a conditional proposal distribution.\nA covariance matrix to use as for mean-zero multivariate normal proposals.\n\nExamples\n\nThe default MH will draw proposal samples from the prior distribution using AdvancedMH.StaticProposal.\n\n@model function gdemo(x, y)\n s² ~ InverseGamma(2,3)\n m ~ Normal(0, sqrt(s²))\n x ~ Normal(m, sqrt(s²))\n y ~ Normal(m, sqrt(s²))\nend\n\nchain = sample(gdemo(1.5, 2.0), MH(), 1_000)\nmean(chain)\n\nSpecifying a single distribution implies the use of static MH:\n\n# Use a static proposal for s² (which happens to be the same\n# as the prior) and a static proposal for m (note that this\n# isn't a random walk proposal).\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n :s² => InverseGamma(2, 3),\n :m => Normal(0, 1)\n ),\n 1_000\n)\nmean(chain)\n\nSpecifying explicit proposals using the AdvancedMH interface:\n\n# Use a static proposal for s² and random walk with proposal\n# standard deviation of 0.25 for m.\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n :s² => AdvancedMH.StaticProposal(InverseGamma(2,3)),\n :m => AdvancedMH.RandomWalkProposal(Normal(0, 0.25))\n ),\n 1_000\n)\nmean(chain)\n\nUsing a custom function to specify a conditional distribution:\n\n# Use a static proposal for s and and a conditional proposal for m,\n# where the proposal is centered around the current sample.\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n :s² => InverseGamma(2, 3),\n :m => x -> Normal(x, 1)\n ),\n 1_000\n)\nmean(chain)\n\nProviding a covariance matrix will cause MH to perform random-walk sampling in the transformed space with proposals drawn from a multivariate normal distribution. The provided matrix must be positive semi-definite and square:\n\n# Providing a custom variance-covariance matrix\nchain = sample(\n gdemo(1.5, 2.0),\n MH(\n [0.25 0.05;\n 0.05 0.50]\n ),\n 1_000\n)\nmean(chain)\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.MHLogDensityFunction","page":"Inference","title":"Turing.Inference.MHLogDensityFunction","text":"MHLogDensityFunction\n\nA log density function for the MH sampler.\n\nThis variant uses the set_namedtuple! 
function to update the VarInfo.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.NUTS","page":"Inference","title":"Turing.Inference.NUTS","text":"NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0, adtype::ADTypes.AbstractADType=AutoForwardDiff())\n\nNo-U-Turn Sampler (NUTS) sampler.\n\nUsage:\n\nNUTS()            # Use default NUTS configuration.\nNUTS(1000, 0.65)  # Use 1000 adaptation steps, and target accept ratio 0.65.\n\nArguments:\n\nn_adapts::Int : The number of samples to use with adaptation.\nδ::Float64 : Target acceptance rate for dual averaging.\nmax_depth::Int : Maximum doubling tree depth.\nΔ_max::Float64 : Maximum divergence during doubling tree.\ninit_ϵ::Float64 : Initial step size; 0 means automatically searching using a heuristic procedure.\nadtype::ADTypes.AbstractADType : The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.PG","page":"Inference","title":"Turing.Inference.PG","text":"PG(n, space...)\nPG(n, [resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])\nPG(n, [resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])\n\nCreate a Particle Gibbs sampler of type PG with n particles for the variables in space.\n\nIf the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.PG-2","page":"Inference","title":"Turing.Inference.PG","text":"struct PG{space, R} <: Turing.Inference.ParticleInference\n\nParticle Gibbs sampler.\n\nFields\n\nnparticles::Int64: Number of particles.\nresampler::Any: Resampling algorithm.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.PolynomialStepsize-Union{Tuple{T}, Tuple{T, T, T}} where T<:Real","page":"Inference","title":"Turing.Inference.PolynomialStepsize","text":"PolynomialStepsize(a[, b=0, γ=0.55])\n\nCreate a polynomially decaying stepsize function.\n\nAt iteration t, the step size is\n\na · (b + t)^(-γ)\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.Prior","page":"Inference","title":"Turing.Inference.Prior","text":"Prior()\n\nAlgorithm for sampling from the prior.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SGHMC","page":"Inference","title":"Turing.Inference.SGHMC","text":"SGHMC{AD,space}\n\nStochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.\n\nFields\n\nlearning_rate::Real\nmomentum_decay::Real\nadtype::Any\n\nReference\n\nTianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SGHMC-Tuple{Vararg{Symbol}}","page":"Inference","title":"Turing.Inference.SGHMC","text":"SGHMC(\n    space::Symbol...;\n    learning_rate::Real,\n    momentum_decay::Real,\n    adtype::ADTypes.AbstractADType = AutoForwardDiff(),\n)\n\nCreate a Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.\n\nIf the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.\n\nReference\n\nTianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. 
In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.SGLD","page":"Inference","title":"Turing.Inference.SGLD","text":"SGLD\n\nStochastic gradient Langevin dynamics (SGLD) sampler.\n\nFields\n\nstepsize::Any: Step size function.\nadtype::Any\n\nReference\n\nMax Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SGLD-Tuple{Vararg{Symbol}}","page":"Inference","title":"Turing.Inference.SGLD","text":"SGLD(\n    space::Symbol...;\n    stepsize = PolynomialStepsize(0.01),\n    adtype::ADTypes.AbstractADType = AutoForwardDiff(),\n)\n\nStochastic gradient Langevin dynamics (SGLD) sampler.\n\nBy default, a polynomially decaying stepsize is used.\n\nIf the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.\n\nReference\n\nMax Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).\n\nSee also: PolynomialStepsize\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.SMC","page":"Inference","title":"Turing.Inference.SMC","text":"SMC(space...)\nSMC([resampler = AdvancedPS.ResampleWithESSThreshold(), space = ()])\nSMC([resampler = AdvancedPS.resample_systematic, ]threshold[, space = ()])\n\nCreate a sequential Monte Carlo sampler of type SMC for the variables in space.\n\nIf the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#Turing.Inference.SMC-2","page":"Inference","title":"Turing.Inference.SMC","text":"struct SMC{space, R} <: Turing.Inference.ParticleInference\n\nSequential Monte Carlo sampler.\n\nFields\n\nresampler::Any\n\n\n\n\n\n","category":"type"},{"location":"api/Inference/#StatsAPI.predict-Tuple{DynamicPPL.Model, Chains}","page":"Inference","title":"StatsAPI.predict","text":"predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)\n\nExecute model conditioned on each sample in chain, and return the resulting Chains.\n\nIf include_all is false, the returned Chains will contain only the newly predicted variables, i.e. those not present in chain.\n\nDetails\n\nInternally calls Turing.Inference.transitions_from_chain to obtain the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.\n\nExample\n\njulia> using Turing; Turing.setprogress!(false);\n[ Info: [Turing]: progress logging is disabled globally\n\njulia> @model function linear_reg(x, y, σ = 0.1)\n           β ~ Normal(0, 1)\n\n           for i ∈ eachindex(y)\n               y[i] ~ Normal(β * x[i], σ)\n           end\n       end;\n\njulia> σ = 0.1; f(x) = 2 * x + 0.1 * randn();\n\njulia> Δ = 0.1; xs_train = 0:Δ:10; ys_train = f.(xs_train);\n\njulia> xs_test = [10 + Δ, 10 + 2 * Δ]; ys_test = f.(xs_test);\n\njulia> m_train = linear_reg(xs_train, ys_train, σ);\n\njulia> chain_lin_reg = sample(m_train, NUTS(100, 0.65), 200);\n┌ Info: Found initial step size\n└   ϵ = 0.003125\n\njulia> m_test = linear_reg(xs_test, Vector{Union{Missing, Float64}}(undef, length(ys_test)), σ);\n\njulia> predictions = predict(m_test, chain_lin_reg)\nObject of type 
Chains, with data of type 100×2×1 Array{Float64,3}\n\nIterations = 1:100\nThinning interval = 1\nChains = 1\nSamples per chain = 100\nparameters = y[1], y[2]\n\n2-element Array{ChainDataFrame,1}\n\nSummary Statistics\n parameters mean std naive_se mcse ess r_hat\n ────────── ─────── ────── ──────── ─────── ──────── ──────\n y[1] 20.1974 0.1007 0.0101 missing 101.0711 0.9922\n y[2] 20.3867 0.1062 0.0106 missing 101.4889 0.9903\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n ────────── ─────── ─────── ─────── ─────── ───────\n y[1] 20.0342 20.1188 20.2135 20.2588 20.4188\n y[2] 20.1870 20.3178 20.3839 20.4466 20.5895\n\n\njulia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));\n\njulia> sum(abs2, ys_test - ys_pred) ≤ 0.1\ntrue\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.dist_val_tuple-Tuple{DynamicPPL.Sampler{<:MH}, Union{DynamicPPL.ThreadSafeVarInfo{<:DynamicPPL.VarInfo{Tmeta}}, DynamicPPL.VarInfo{Tmeta}} where Tmeta}","page":"Inference","title":"Turing.Inference.dist_val_tuple","text":"dist_val_tuple(spl::Sampler{<:MH}, vi::VarInfo)\n\nReturn two NamedTuples.\n\nThe first NamedTuple has symbols as keys and distributions as values. The second NamedTuple has model symbols as keys and their stored values as values.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.externalsampler-Tuple{AbstractMCMC.AbstractSampler}","page":"Inference","title":"Turing.Inference.externalsampler","text":"externalsampler(sampler::AbstractSampler; adtype=AutoForwardDiff(), unconstrained=true)\n\nWrap a sampler so it can be used as an inference algorithm.\n\nArguments\n\nsampler::AbstractSampler: The sampler to wrap.\n\nKeyword Arguments\n\nadtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff(): The automatic differentiation (AD) backend to use.\nunconstrained::Bool=true: Whether the sampler requires unconstrained space.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.getparams-Tuple{Any, Any}","page":"Inference","title":"Turing.Inference.getparams","text":"getparams(model, t)\n\nReturn a named tuple of parameters.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.group_varnames_by_symbol-Tuple{Any}","page":"Inference","title":"Turing.Inference.group_varnames_by_symbol","text":"group_varnames_by_symbol(vns)\n\nGroup the varnames by their symbol.\n\nArguments\n\nvns: Iterable of VarName.\n\nReturns\n\nOrderedDict{Symbol, Vector{VarName}}: A dictionary mapping symbol to a vector of varnames.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.make_conditional-Tuple{DynamicPPL.Model, AbstractVector{<:AbstractPPL.VarName}, Any}","page":"Inference","title":"Turing.Inference.make_conditional","text":"make_conditional(model, target_variables, varinfo)\n\nReturn a new, conditioned model for a component of a Gibbs sampler.\n\nArguments\n\nmodel::DynamicPPL.Model: The model to condition.\ntarget_variables::AbstractVector{<:VarName}: The target variables of the component\n\nsampler. These will not be conditioned.\n\nvarinfo::DynamicPPL.AbstractVarInfo: Values for all variables in the model. All the\n\nvalues in varinfo but not in target_variables will be conditioned to the values they have in varinfo.\n\nReturns\n\nA new model with the variables not in target_variables conditioned.\nThe GibbsContext object that will be used to condition the variables. 
This is necessary because evaluation can mutate its global_varinfo field, which we need to access later.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.mh_accept-Tuple{Real, Real, Real}","page":"Inference","title":"Turing.Inference.mh_accept","text":"mh_accept(logp_current::Real, logp_proposal::Real, log_proposal_ratio::Real)\n\nDecide if a proposal x′ with log probability log p(x′) = logp_proposal and log proposal ratio log k(x′, x) - log k(x, x′) = log_proposal_ratio in a Metropolis-Hastings algorithm with Markov kernel k(x_t, x_{t+1}) and current state x with log probability log p(x) = logp_current is accepted by evaluating the Metropolis-Hastings acceptance criterion\n\nlog U ≤ log p(x′) - log p(x) + log k(x′, x) - log k(x, x′)\n\nfor a uniform random number U in [0, 1).\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.requires_unconstrained_space-Union{Tuple{Turing.Inference.ExternalSampler{<:Any, <:Any, Unconstrained}}, Tuple{Unconstrained}} where Unconstrained","page":"Inference","title":"Turing.Inference.requires_unconstrained_space","text":"requires_unconstrained_space(sampler::ExternalSampler)\n\nReturn true if the sampler requires unconstrained space, and false otherwise.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.set_namedtuple!-Tuple{Union{DynamicPPL.ThreadSafeVarInfo{<:DynamicPPL.VarInfo{Tmeta}}, DynamicPPL.VarInfo{Tmeta}} where Tmeta, NamedTuple}","page":"Inference","title":"Turing.Inference.set_namedtuple!","text":"set_namedtuple!(vi::VarInfo, nt::NamedTuple)\n\nPlaces the values of a NamedTuple into the relevant places of a VarInfo.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.setparams_varinfo!!-Tuple{Any, DynamicPPL.Sampler, Any, DynamicPPL.AbstractVarInfo}","page":"Inference","title":"Turing.Inference.setparams_varinfo!!","text":"setparams_varinfo!!(model, sampler::Sampler, state, params::AbstractVarInfo)\n\nA lot like AbstractMCMC.setparams!!, but instead of taking a vector of parameters, takes an AbstractVarInfo object. Also takes the sampler as an argument. By default, falls back to AbstractMCMC.setparams!!(model, state, params[:]).\n\nmodel is typically a DynamicPPL.Model, but can also be e.g. 
an AbstractMCMC.LogDensityModel.\n\n\n\n\n\n","category":"method"},{"location":"api/Inference/#Turing.Inference.transitions_from_chain-Tuple{DynamicPPL.Model, Chains}","page":"Inference","title":"Turing.Inference.transitions_from_chain","text":"transitions_from_chain(\n [rng::AbstractRNG,]\n model::Model,\n chain::MCMCChains.Chains;\n sampler = DynamicPPL.SampleFromPrior()\n)\n\nExecute model conditioned on each sample in chain, and return resulting transitions.\n\nThe returned transitions are represented in a Vector{<:Turing.Inference.Transition}.\n\nDetails\n\nIn a bit more detail, the process is as follows:\n\nFor every sample in chain\nFor every variable in sample\nSet variable in model to its value in sample\nExecute model with variables fixed as above, sampling variables NOT present in chain using SampleFromPrior\nReturn sampled variables and log-joint\n\nExample\n\njulia> using Turing\n\njulia> @model function demo()\n m ~ Normal(0, 1)\n x ~ Normal(m, 1)\n end;\n\njulia> m = demo();\n\njulia> chain = Chains(randn(2, 1, 1), [\"m\"]); # 2 samples of `m`\n\njulia> transitions = Turing.Inference.transitions_from_chain(m, chain);\n\njulia> [Turing.Inference.getlogp(t) for t in transitions] # extract the logjoints\n2-element Array{Float64,1}:\n -3.6294991938628374\n -2.5697948166987845\n\njulia> [first(t.θ.x) for t in transitions] # extract samples for `x`\n2-element Array{Array{Float64,1},1}:\n [-2.0844148956440796]\n [-1.704630494695469]\n\n\n\n\n\n","category":"method"},{"location":"api/#API","page":"API","title":"API","text":"","category":"section"},{"location":"api/#Module-wide-re-exports","page":"API","title":"Module-wide re-exports","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Turing.jl directly re-exports the entire public API of the following packages:","category":"page"},{"location":"api/","page":"API","title":"API","text":"Distributions.jl\nMCMCChains.jl\nAbstractMCMC.jl\nBijectors.jl\nLibtask.jl","category":"page"},{"location":"api/","page":"API","title":"API","text":"Please see the individual packages for their documentation.","category":"page"},{"location":"api/#Individual-exports-and-re-exports","page":"API","title":"Individual exports and re-exports","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"All of the following symbols are exported unqualified by Turing, even though the documentation suggests that many of them are qualified. 
That means, for example, you can just write","category":"page"},{"location":"api/","page":"API","title":"API","text":"using Turing\n\n@model function my_model() end\n\nsample(my_model(), Prior(), 100)","category":"page"},{"location":"api/","page":"API","title":"API","text":"instead of","category":"page"},{"location":"api/","page":"API","title":"API","text":"DynamicPPL.@model function my_model() end\n\nsample(my_model(), Turing.Inference.Prior(), 100)","category":"page"},{"location":"api/","page":"API","title":"API","text":"even though Prior() is actually defined in the Turing.Inference module and @model in the DynamicPPL package.","category":"page"},{"location":"api/#Modelling","page":"API","title":"Modelling","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\n@model DynamicPPL.@model Define a probabilistic model\n@varname AbstractPPL.@varname Generate a VarName from a Julia expression\n@submodel DynamicPPL.@submodel Define a submodel","category":"page"},{"location":"api/#Inference","page":"API","title":"Inference","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nsample StatsBase.sample Sample from a model","category":"page"},{"location":"api/#Samplers","page":"API","title":"Samplers","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nPrior Turing.Inference.Prior Sample from the prior distribution\nMH Turing.Inference.MH Metropolis–Hastings\nEmcee Turing.Inference.Emcee Affine-invariant ensemble sampler\nESS Turing.Inference.ESS Elliptical slice sampling\nGibbs Turing.Inference.Gibbs Gibbs sampling\nGibbsConditional Turing.Inference.GibbsConditional A \"pseudo-sampler\" to provide analytical conditionals to Gibbs\nHMC Turing.Inference.HMC Hamiltonian Monte Carlo\nSGLD Turing.Inference.SGLD Stochastic gradient Langevin dynamics\nSGHMC Turing.Inference.SGHMC Stochastic gradient Hamiltonian Monte Carlo\nPolynomialStepsize Turing.Inference.PolynomialStepsize Returns a function which generates polynomially decaying step sizes\nHMCDA Turing.Inference.HMCDA Hamiltonian Monte Carlo with dual averaging\nNUTS Turing.Inference.NUTS No-U-Turn Sampler\nIS Turing.Inference.IS Importance sampling\nSMC Turing.Inference.SMC Sequential Monte Carlo\nPG Turing.Inference.PG Particle Gibbs\nCSMC Turing.Inference.CSMC The same as PG\nexternalsampler Turing.Inference.externalsampler Wrap an external sampler for use in Turing","category":"page"},{"location":"api/#Variational-inference","page":"API","title":"Variational inference","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"See the variational inference tutorial for a walkthrough on how to use these.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nvi AdvancedVI.vi Perform variational inference\nADVI AdvancedVI.ADVI Construct an instance of a VI algorithm","category":"page"},{"location":"api/#Automatic-differentiation-types","page":"API","title":"Automatic differentiation types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"These are used to specify the automatic differentiation backend to use. 
See the AD guide for more information.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nAutoForwardDiff ADTypes.AutoForwardDiff ForwardDiff.jl backend\nAutoReverseDiff ADTypes.AutoReverseDiff ReverseDiff.jl backend\nAutoZygote ADTypes.AutoZygote Zygote.jl backend\nAutoMooncake ADTypes.AutoMooncake Mooncake.jl backend","category":"page"},{"location":"api/#Debugging","page":"API","title":"Debugging","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"setprogress!","category":"page"},{"location":"api/#Turing.setprogress!","page":"API","title":"Turing.setprogress!","text":"setprogress!(progress::Bool)\n\nEnable progress logging in Turing if progress is true, and disable it otherwise.\n\n\n\n\n\n","category":"function"},{"location":"api/#Distributions","page":"API","title":"Distributions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"These distributions are defined in Turing.jl, but not in Distributions.jl.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Flat\nFlatPos\nBinomialLogit\nOrderedLogistic\nLogPoisson","category":"page"},{"location":"api/#Turing.Flat","page":"API","title":"Turing.Flat","text":"Flat()\n\nThe flat distribution is the improper distribution of real numbers that has the improper probability density function\n\nf(x) = 1\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.FlatPos","page":"API","title":"Turing.FlatPos","text":"FlatPos(l::Real)\n\nThe positive flat distribution with real-valued parameter l is the improper distribution of real numbers that has the improper probability density function\n\nf(x) = 0 if x ≤ l, and f(x) = 1 otherwise\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.BinomialLogit","page":"API","title":"Turing.BinomialLogit","text":"BinomialLogit(n, logitp)\n\nThe Binomial distribution with logit parameterization characterizes the number of successes in a sequence of independent trials.\n\nIt has two parameters: n, the number of trials, and logitp, the logit of the probability of success in an individual trial, with the distribution\n\nP(X = k) = binomial(n, k) · logistic(logitp)^k · (1 - logistic(logitp))^(n - k) for k = 0, 1, 2, …, n\n\nSee also: Binomial\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.OrderedLogistic","page":"API","title":"Turing.OrderedLogistic","text":"OrderedLogistic(η, c::AbstractVector)\n\nThe ordered logistic distribution with real-valued parameter η and cutpoints c has the probability mass function\n\nP(X = k) = 1 - logistic(η - c_1) if k = 1,\nP(X = k) = logistic(η - c_{k-1}) - logistic(η - c_k) if 1 < k < K,\nP(X = k) = logistic(η - c_{K-1}) if k = K,\n\nwhere K = length(c) + 1.\n\n\n\n\n\n","category":"type"},{"location":"api/#Turing.LogPoisson","page":"API","title":"Turing.LogPoisson","text":"LogPoisson(logλ)\n\nThe Poisson distribution with logarithmic parameterization of the rate parameter describes the number of independent events occurring within a unit time interval, given the average rate of occurrence exp(logλ).\n\nThe distribution has the probability mass function\n\nP(X = k) = exp(k · logλ) / k! · exp(-exp(logλ)) for k = 0, 1, 2, …\n\nSee also: Poisson\n\n\n\n\n\n","category":"type"},{"location":"api/","page":"API","title":"API","text":"BernoulliLogit is part of Distributions.jl since version 0.25.77. 
If you are using an older version of Distributions where this isn't defined, Turing will export the same distribution.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Distributions.BernoulliLogit","category":"page"},{"location":"api/#Distributions.BernoulliLogit","page":"API","title":"Distributions.BernoulliLogit","text":"BernoulliLogit(logitp=0.0)\n\nA Bernoulli distribution that is parameterized by the logit logitp = logit(p) = log(p/(1-p)) of its success rate p.\n\nP(X = k) = logistic(-logitp) = 1 / (1 + exp(logitp)) for k = 0,\nP(X = k) = logistic(logitp) = 1 / (1 + exp(-logitp)) for k = 1\n\nExternal links:\n\nBernoulli distribution on Wikipedia\n\nSee also Bernoulli\n\n\n\n\n\n","category":"type"},{"location":"api/#Tools-to-work-with-distributions","page":"API","title":"Tools to work with distributions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nfilldist DistributionsAD.filldist Create a product distribution from a distribution and integers\narraydist DistributionsAD.arraydist Create a product distribution from an array of distributions\nNamedDist DynamicPPL.NamedDist A distribution that carries the name of the variable","category":"page"},{"location":"api/#Predictions","page":"API","title":"Predictions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"predict","category":"page"},{"location":"api/#StatsAPI.predict","page":"API","title":"StatsAPI.predict","text":"predict([rng::AbstractRNG,] model::Model, chain::MCMCChains.Chains; include_all=false)\n\nExecute model conditioned on each sample in chain, and return the resulting Chains.\n\nIf include_all is false, the returned Chains will contain only the newly predicted variables, i.e. those not present in chain.\n\nDetails\n\nInternally calls Turing.Inference.transitions_from_chain to obtain the samples and then converts these into a Chains object using AbstractMCMC.bundle_samples.\n\nExample\n\njulia> using Turing; Turing.setprogress!(false);\n[ Info: [Turing]: progress logging is disabled globally\n\njulia> @model function linear_reg(x, y, σ = 0.1)\n           β ~ Normal(0, 1)\n\n           for i ∈ eachindex(y)\n               y[i] ~ Normal(β * x[i], σ)\n           end\n       end;\n\njulia> σ = 0.1; f(x) = 2 * x + 0.1 * randn();\n\njulia> Δ = 0.1; xs_train = 0:Δ:10; ys_train = f.(xs_train);\n\njulia> xs_test = [10 + Δ, 10 + 2 * Δ]; ys_test = f.(xs_test);\n\njulia> m_train = linear_reg(xs_train, ys_train, σ);\n\njulia> chain_lin_reg = sample(m_train, NUTS(100, 0.65), 200);\n┌ Info: Found initial step size\n└   ϵ = 0.003125\n\njulia> m_test = linear_reg(xs_test, Vector{Union{Missing, Float64}}(undef, length(ys_test)), σ);\n\njulia> predictions = predict(m_test, chain_lin_reg)\nObject of type Chains, with data of type 100×2×1 Array{Float64,3}\n\nIterations = 1:100\nThinning interval = 1\nChains = 1\nSamples per chain = 100\nparameters = y[1], y[2]\n\n2-element Array{ChainDataFrame,1}\n\nSummary Statistics\n parameters mean std naive_se mcse ess r_hat\n ────────── ─────── ────── ──────── ─────── ──────── ──────\n y[1] 20.1974 0.1007 0.0101 missing 101.0711 0.9922\n y[2] 20.3867 0.1062 0.0106 missing 101.4889 0.9903\n\nQuantiles\n parameters 2.5% 25.0% 50.0% 75.0% 97.5%\n ────────── ─────── ─────── ─────── ─────── ───────\n y[1] 20.0342 20.1188 20.2135 20.2588 20.4188\n y[2] 20.1870 20.3178 20.3839 20.4466 20.5895\n\n\njulia> ys_pred = vec(mean(Array(group(predictions, :y)); dims = 1));\n\njulia> sum(abs2, 
ys_test - ys_pred) ≤ 0.1\ntrue\n\n\n\n\n\n","category":"function"},{"location":"api/#Querying-model-probabilities-and-quantities","page":"API","title":"Querying model probabilities and quantities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Please see the generated quantities and probability interface guides for more information.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\ngenerated_quantities DynamicPPL.generated_quantities Calculate additional quantities defined in a model\npointwise_loglikelihoods DynamicPPL.pointwise_loglikelihoods Compute log likelihoods for each sample in a chain\nlogprior DynamicPPL.logprior Compute log prior probability\nlogjoint DynamicPPL.logjoint Compute log joint probability\nLogDensityFunction DynamicPPL.LogDensityFunction Wrap a Turing model to satisfy LogDensityFunctions.jl interface\ncondition AbstractPPL.condition Condition a model on data\ndecondition AbstractPPL.decondition Remove conditioning on data\nconditioned DynamicPPL.conditioned Return the conditioned values of a model\nfix DynamicPPL.fix Fix the value of a variable\nunfix DynamicPPL.unfix Unfix the value of a variable\nOrderedDict OrderedCollections.OrderedDict An ordered dictionary","category":"page"},{"location":"api/#Extra-re-exports-from-Bijectors","page":"API","title":"Extra re-exports from Bijectors","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Note that Bijectors itself does not export ordered.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Bijectors.ordered","category":"page"},{"location":"api/#Bijectors.ordered","page":"API","title":"Bijectors.ordered","text":"ordered(d::Distribution)\n\nReturn a Distribution whose support are ordered vectors, i.e., vectors with increasingly ordered elements.\n\nSpecifically, d is restricted to the subspace of its domain containing only ordered elements.\n\nwarning: Warning\nrand is implemented using rejection sampling, which can be slow for high-dimensional distributions. In such cases, consider using MCMC methods to sample from the distribution instead.\n\nwarning: Warning\nThe resulting ordered distribution is un-normalized, which can cause issues in some contexts, e.g. in hierarchical models where the parameters of the ordered distribution are themselves sampled. See the notes below for a more detailed discussion.\n\nNotes on ordered being un-normalized\n\nThe resulting ordered distribution is un-normalized. This is not a problem if used in a context where the normalizing factor is irrelevant, but if the value of the normalizing factor impacts the resulting computation, the results may be inaccurate.\n\nFor example, if the distribution is used in sampling a posterior distribution with MCMC and the parameters of the ordered distribution are themselves sampled, then the normalizing factor would in general be needed for accurate sampling, and ordered should not be used. However, if the parameters are fixed, then since MCMC does not require distributions be normalized, ordered may be used without problems.\n\nA common case is where the distribution being ordered is a joint distribution of n identical univariate distributions. 
In this case the normalization factor works out to be the constant n!, and ordered can again be used without problems even if the parameters of the univariate distribution are sampled.\n\n\n\n\n\n","category":"function"},{"location":"api/#Point-estimates","page":"API","title":"Point estimates","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"See the mode estimation tutorial for more information.","category":"page"},{"location":"api/","page":"API","title":"API","text":"Exported symbol Documentation Description\nmaximum_a_posteriori Turing.Optimisation.maximum_a_posteriori Find a MAP estimate for a model\nmaximum_likelihood Turing.Optimisation.maximum_likelihood Find a MLE estimate for a model\nMAP Turing.Optimisation.MAP Type to use with Optim.jl for MAP estimation\nMLE Turing.Optimisation.MLE Type to use with Optim.jl for MLE estimation","category":"page"},{"location":"#Turing.jl","page":"Home","title":"Turing.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This site contains the API documentation for the identifiers exported by Turing.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you are looking for usage examples and guides, please visit https://turinglang.org/docs.","category":"page"}] }