Can someone tell me why we optimize the NELBO? The paper only says "We optimize the ELBO with respect to the variational parameters." As far as I understand it, D-ETM consists of three neural networks that produce the distributions for theta, eta and alpha, and KL divergences are then estimated for each of them. Are those KL divergence values simply added together and optimized jointly? And why is the NLL added on top? Also, I thought "Solving this optimization problem is equivalent to maximizing the evidence lower bound (ELBO)" meant we would maximize it, not minimize it as a loss, which is what the model seems to do.
Sorry, I am pretty confused (I am rather new to Bayesian statistics and variational inference).
Hi @legurp, NELBO is the "negative ELBO", and NLL stands for "negative log-likelihood".
It's true that papers usually state they are maximising the ELBO, but since log-probabilities are <= 0, it is often more convenient to multiply the ELBO by -1 (so it becomes a positive quantity) and then minimise that as a loss. The two are equivalent: maximising the ELBO is the same as minimising the NELBO.
As for why the NLL is added: the ELBO decomposes as an expected log-likelihood (reconstruction) term minus the KL terms, i.e. ELBO = E_q[log p(x | z)] - KL(q || p). Negating it gives NELBO = NLL + KL, which is exactly the sum the code computes: the NLL and the three KL divergences (for theta, eta and alpha) are added together and minimised jointly.
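The sign flip can be sketched numerically. This is a toy illustration with made-up numbers, not D-ETM's actual code; the function name `nelbo` and the specific values are my own for the example:

```python
import math

def nelbo(expected_log_likelihood, kl_theta, kl_eta, kl_alpha):
    """NELBO = -ELBO = NLL + sum of KL terms (the quantity minimised as a loss)."""
    nll = -expected_log_likelihood
    return nll + kl_theta + kl_eta + kl_alpha

# Toy numbers: a log-likelihood is always <= 0, KL divergences are >= 0,
# so the NELBO is a positive quantity that behaves like a standard loss.
ll = math.log(0.25)                 # some expected log-likelihood (negative)
loss = nelbo(ll, 0.10, 0.05, 0.02)  # positive loss to minimise
elbo = -loss                        # the (negative) quantity the paper maximises

# Lowering the loss raises the ELBO: the two objectives are equivalent.
assert loss > 0 and elbo < 0
```

So when the training loop calls `loss.backward()` on the NELBO, it is maximising the ELBO, just expressed in the minimisation convention that optimisers expect.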