I've realised that I had a misconception about how to compute the incremental marginal likelihoods at the end of each update in the presence of conditional resampling.
Chopin's book gives this formula,
but it's a bit clunky because it requires us to record whether resampling took place. Charles had a nice solution, noting that these formulas coincide if we reset the weights to unity after resampling.
Since this works, I've reverted to this approach in 731d206.
I'm still a bit uncomfortable with this, since really the weights after resampling should be 1/N. I'm not sure whether there is any use case where this distinction would actually matter, but it's worth thinking about. It's also worth noting that we require an additional logsumexp, which it would be nice to avoid.
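To make the weight-reset trick concrete, here is a minimal sketch of one log-likelihood update step. This is not the repository's actual code: the function name, argument names, and the convention that log weights are reset to zeros (i.e. unity weights) after resampling are all assumptions for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def update_log_likelihood(logw, log_incr):
    """One step of log-marginal-likelihood accumulation (illustrative sketch).

    logw     : current unnormalised log weights (zeros right after resampling)
    log_incr : incremental log weights from the new observation
    Returns the updated log weights and the log-likelihood increment.
    """
    new_logw = logw + log_incr
    # Increment of log Z: log(sum of updated weights) - log(sum of old weights).
    # When logw was reset to zeros after resampling, the second logsumexp is
    # just log N, so the same formula covers both branches without having to
    # record whether resampling took place.
    inc = logsumexp(new_logw) - logsumexp(logw)
    return new_logw, inc
```

Note that the increment is invariant to adding a constant to `logw`, so resetting to log(1) = 0 rather than log(1/N) changes nothing here; the cost is the extra `logsumexp` over the old weights discussed above.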
I cleaned up some of the code earlier today. I agree that the additional logsumexp is pretty annoying, but from what I can tell this is the only proper way to get consistent likelihoods.
> I'm still a bit uncomfortable with this since really the weights after resampling should be 1/N
I mean, this is technically true, since we softmax a set of equal log weights. Chopin even suggests setting logw = 1 in algorithm 10.3, which yields identical results since we're operating in log space.
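The point that any constant log weight normalises to 1/N can be checked directly. A small sketch (the helper name `normalise` is mine, not from the codebase):

```python
import numpy as np
from scipy.special import logsumexp

def normalise(logw):
    # Softmax of log weights: a vector of equal entries maps to 1/N
    # regardless of which constant is used (0, 1, or anything else).
    return np.exp(logw - logsumexp(logw))
```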
Regardless, the newest commits in 7b10071 are now passing most of the unit tests and the variations discussed on Slack (not counting the GPU filter).