Hi,
I'm running into the following error, which I'm having trouble deciphering:
Answered by torfjelde, Sep 13, 2023 (1 comment, 3 replies)
julia> using Turing, MvNormalCDF
julia> @model function flip(y, N, c1, c2)
           C ~ LKJ(2, 1.0)
           d = MvNormal(Distributions.PDMat(C))
           M = 5000
           a1, er1 = mvnormcdf(d, [c2, c1], [Inf, Inf]; m = M)
           a2, er1 = mvnormcdf(d, [c2, -Inf], [Inf, c1]; m = M)
           for n in 1:N
               y[n] ~ Binomial(10, a1 / (a1 + a2))
           end
       end
flip (generic function with 2 methods)
julia> c1 = 1.0
1.0
julia> c2 = 3.0
3.0
julia> M = 5000
5000
julia> Σ = rand(LKJ(2, 1.0))
2×2 Matrix{Float64}:
1.0 -0.735677
-0.735677 1.0
julia> d = MvNormal(Σ)
ZeroMeanFullNormal(
dim: 2
μ: Zeros(2)
Σ: [1.0 -0.735677360337871; -0.735677360337871 1.0]
)
julia> a1, er1 = mvnormcdf(d, [c2,c1], [Inf,Inf];m=M)
(5.662221910164488e-10, 6.13872325029682e-13)
julia> a2, er1 = mvnormcdf(d, [c2,-Inf], [Inf,c1];m=M)
(0.001349897466794402, 7.989072348427342e-13)
julia> p = a1/(a1+a2)
4.1945552713424e-7
julia> N = 1000
1000
julia> data = zeros(N)
1000-element Vector{Float64}:
 0.0
 0.0
 0.0
 ⋮
 0.0
 0.0
 0.0
julia> for n in 1:N
           data[n] = rand(Binomial(10, p))
       end
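A side observation (mine, not from the thread): with p ≈ 4.2e-7, the all-zero data vector is exactly what one should expect, since across the N·10 = 10,000 Bernoulli trials almost no successes occur. A rough sanity check, using the value of p from the session above:

```julia
# Rough sanity check (my numbers, copied from the REPL session above):
p = 4.1945552713424e-7
N = 1000

expected_successes = N * 10 * p    # ≈ 0.0042 successes across all trials
prob_all_zero = (1 - p)^(N * 10)   # ≈ 0.9958: all-zero data is typical
```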
julia> model = flip(data,N,c1,c2)
DynamicPPL.Model{typeof(flip), (:y, :N, :c1, :c2), (), (), Tuple{Vector{Float64}, Int64, Float64, Float64}, Tuple{}, DynamicPPL.DefaultContext}(flip, (y = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 … 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], N = 1000, c1 = 1.0, c2 = 3.0), NamedTuple(), DynamicPPL.DefaultContext())
julia> chain = sample(model, NUTS(), 1000)
┌ Info: Found initial step size
└ ϵ = 0.00625
Sampling 100%|██████████████████████████████████████████| Time: 0:00:09
Chains MCMC chain (1000×16×1 Array{Float64, 3}):
Iterations = 501:1:1500
Number of chains = 1
Samples per chain = 1000
Wall duration = 10.11 seconds
Compute duration = 10.11 seconds
parameters = C[1,1], C[2,1], C[1,2], C[2,2]
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail     rhat   ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64  Float64   ⋯

      C[1,1]    1.0000    0.0000       NaN        NaN        NaN      NaN   ⋯
      C[2,1]   -0.7954    0.1104    0.0083   151.4465   123.6805   1.0022   ⋯
      C[1,2]   -0.7954    0.1104    0.0083   151.4465   123.6805   1.0022   ⋯
      C[2,2]    1.0000    0.0000    0.0000   496.5049   787.4993   0.9993   ⋯
                                                              1 column omitted
Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

      C[1,1]    1.0000    1.0000    1.0000    1.0000    1.0000
      C[2,1]   -0.9850   -0.8880   -0.7878   -0.7086   -0.6040
      C[1,2]   -0.9850   -0.8880   -0.7878   -0.7086   -0.6040
      C[2,2]    1.0000    1.0000    1.0000    1.0000    1.0000

I'm on the following versions:

julia> using Pkg; Pkg.status()
Status `/tmp/jl_G4wXj3/Project.toml`
[37188c8d] MvNormalCDF v0.3.0
[fce5fe82] Turing v0.29.1

Note that I would also recommend using LKJCholesky instead of LKJ:

@model function flip(y, N, c1, c2)
    C ~ LKJCholesky(2, 1.0)
    d = MvNormal(Distributions.PDMat(C))
    M = 5000
    a1, er1 = mvnormcdf(d, [c2, c1], [Inf, Inf]; m = M)
    a2, er1 = mvnormcdf(d, [c2, -Inf], [Inf, c1]; m = M)
    for n in 1:N
        y[n] ~ Binomial(10, a1 / (a1 + a2))
    end
end
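As a standalone sketch of why this helps (my addition, not from the thread): rand on an LKJCholesky distribution returns a LinearAlgebra.Cholesky factor rather than a dense matrix, and PDMats can wrap that factor directly, so the correlation matrix never has to be re-factorized inside the model. Assuming a recent Distributions.jl with LKJCholesky:

```julia
using Distributions, LinearAlgebra, PDMats

# LKJCholesky samples the Cholesky factor of a correlation matrix directly.
C = rand(LKJCholesky(2, 1.0))   # C isa LinearAlgebra.Cholesky
d = MvNormal(PDMat(C))          # PDMat accepts a Cholesky factor as-is

# The dense 2×2 correlation matrix can be reconstructed if needed:
Σ = C.L * C.L'
```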
Also, you might want to check the numerical error on the mvnormcdf calls, i.e. the second value they return.

Aaand I'd say use this stuff at your own risk 😬 You're currently performing aut…
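Following that reply, a minimal sketch of such a check (my addition; the Σ and bounds are taken from the session above, and the threshold is an arbitrary illustration):

```julia
using Distributions, MvNormalCDF

Σ = [1.0 -0.735677; -0.735677 1.0]
d = MvNormal(Σ)

# mvnormcdf returns (probability estimate, Monte Carlo error estimate).
a1, er1 = mvnormcdf(d, [3.0, 1.0], [Inf, Inf]; m = 5000)

# If the error estimate is not small relative to the value itself,
# the quadrature sample count m should be increased.
rel_err = er1 / a1
if rel_err > 0.01
    @warn "mvnormcdf error estimate is large relative to the value" a1 er1
end
```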