# Inconsistency between `initialise` for KF and BF #14
I ran into this exact issue when initially porting the interface. I left the
@charlesknipp @FredericWantiez Think we could do with closing this soon, as it is required by the Kalman smoother, which I'm using to unit test the CSMC implementations (and I'm not Git-savvy enough to handle such divergent branches). I think the simple solution is to modify `filter` and `step` as follows:

```julia
function filter(
    rng::AbstractRNG,
    model::AbstractStateSpaceModel,
    alg::AbstractFilter,
    observations::AbstractVector;
    callback=nothing,
    kwargs...,
)
    step_storage = create_step_storage(alg)
    step_storage.filtered = initialise(rng, model, alg; kwargs...)
    isnothing(callback) || callback(model, alg, step_storage, observations; kwargs...)

    log_evidence = zero(eltype(model))
    for t in eachindex(observations)
        step_storage, log_marginal = step(
            rng, model, alg, t, step_storage, observations[t]; callback, kwargs...
        )
        log_evidence += log_marginal
        isnothing(callback) || callback(model, alg, t, step_storage, observations; kwargs...)
    end

    return step_storage.filtered, log_evidence
end

function step(
    rng::AbstractRNG,
    model::AbstractStateSpaceModel,
    alg::AbstractFilter,
    iter::Integer,
    step_storage,
    observation;
    kwargs...,
)
    step_storage.proposed = predict(rng, model, alg, iter, step_storage.filtered; kwargs...)
    step_storage.filtered, ll = update(
        model, alg, iter, step_storage.proposed, observation; kwargs...
    )
    return step_storage, ll
end
```

(Note that the `step` argument is named `step_storage` here so the body is consistent; the original draft named it `state` but then referred to `step_storage`.) If you're both happy with this, I will make this change in the CSMC branch.
I'm not convinced, especially if we want to implement non-trivial transition proposals, where we would need access to the previous filtered states to calculate the log-weights. I think the easiest fix is altering the behaviour of the Kalman filter such that we have both a proposed and a filtered Gaussian state.
Ah, I maybe should have explained what the issue is more clearly. The plan was already to store both the proposed and filtered Gaussians for the Gaussian state. The issue is, this now means I see your point, though, so we should maybe look into other approaches. I can think of two:

I think (2) might be cleaner, and nothing immediately comes to mind as to how it could go wrong.
I just noticed that we have different definitions of `initialise` for the Kalman filter and the bootstrap filter. The bootstrap filter returns a container with the proposed and filtered states, whereas the Kalman filter returns only the current state.
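The mismatch can be sketched with a minimal, hypothetical pair of `initialise` methods (the type and function names below are illustrative, not the package's actual API):

```julia
# Illustrative stand-in for the marginal distribution maintained by the KF.
struct Gaussian{T}
    μ::Vector{T}
    Σ::Matrix{T}
end

# Bootstrap-filter style: a container with slots for both intermediate states.
mutable struct StateContainer{S}
    proposed::S
    filtered::S
end

# BF-style initialise: returns the whole container.
initialise_bf(prior) = StateContainer(prior, prior)

# KF-style initialise: returns only the bare current state.
initialise_kf(prior::Gaussian) = prior
```

Because the two return shapes differ, generic code downstream cannot treat the result of `initialise` uniformly across the two filters.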
Obviously we could just change the Kalman filter to return a container of two Gaussians, but then we run into an issue with Rao-Blackwellisation. Specifically, `initialise` for an RBPF should return a container where each state is a combination of particles (set by the outer algorithm) and some marginalised distribution (set by the black-box inner algorithm, e.g. the KF). If the inner algorithm returns a container, we'd have to add glue code to insert it into the container the RBPF wants, which seems a bit ugly to me.
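To make the composition problem concrete, here is a hypothetical sketch of the Rao-Blackwellised particle layout (names are illustrative, not the package's API): each particle pairs a sampled outer state with the inner marginal distribution.

```julia
# Each RBPF particle holds an outer sampled state and an inner marginal.
struct RBParticle{X,Z}
    x::X   # outer state, sampled by the particle filter
    z::Z   # inner marginal, e.g. the Gaussian maintained by the KF
end

# Stand-in for a KF-style initialise that returns a bare Gaussian state,
# represented here as a named tuple for brevity.
inner_init() = (μ = [0.0], Σ = ones(1, 1))

# A bare-state inner initialise composes directly into the RBPF particles:
particles = [RBParticle(randn(), inner_init()) for _ in 1:4]
```

Had `inner_init` returned a `(proposed, filtered)` container instead, each particle construction would need glue code to unwrap the filtered slot before storing it, which is the awkwardness described above.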
Not sure exactly what the best way around this is, but we can get by without this change for now.