In this repository, we implement a $\beta$-Variational AutoEncoder ($\beta$-VAE) using a CNN architecture.
The experiments illustrate the effect of the coefficient $\beta$ on the regularization term of the loss function.
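For reference, here is a minimal sketch of the $\beta$-VAE loss (not the repository's exact code), assuming a CNN encoder that outputs the mean `mu` and log-variance `logvar` of a diagonal Gaussian posterior:

```python
# Minimal beta-VAE loss sketch; `recon_x`, `x`, `mu`, `logvar` are assumed
# to come from a CNN encoder/decoder pair (names are illustrative).
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    # Reconstruction term: pixel-wise binary cross-entropy, summed over the batch.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # Closed-form KL divergence between N(mu, sigma^2) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta scales the regularizer; beta = 1 recovers the standard VAE.
    return recon + beta * kl
```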
Visualization of manifolds in latent space
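A hypothetical sketch of how such a manifold plot can be produced (the repository's plotting code may differ): decode a regular grid of points in a 2-dimensional latent space and tile the outputs, assuming MNIST-sized $28 \times 28$ images and a trained `decoder`:

```python
# Decode a 2D grid of latent points and tile the images into one canvas.
# `decoder`, the 2-D latent space, and the 28x28 image size are assumptions.
import torch
import matplotlib.pyplot as plt

@torch.no_grad()
def plot_latent_manifold(decoder, n=15, lim=3.0):
    zs = torch.linspace(-lim, lim, n).tolist()
    canvas = torch.zeros(n * 28, n * 28)
    for i, zy in enumerate(zs):
        for j, zx in enumerate(zs):
            z = torch.tensor([[zx, zy]])          # one latent point on the grid
            img = decoder(z).view(28, 28)         # decoded image for this point
            canvas[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = img
    plt.imshow(canvas.numpy(), cmap="gray")
    plt.axis("off")
    plt.show()
```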
When $\beta$ is 2 or 4, the disentanglement of the model is better than when $\beta$ is 1; however, if the coefficient is too large ($\beta = 20$), the model cannot reconstruct images. This is because the expected KL divergence contains the mutual information between the input $x$ and the latent variable $z$ as a term, and if that mutual information is driven to zero, the information in the input cannot be encoded well.
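This can be made precise with a standard decomposition of the expected KL term (stated here for context; it is not derived in the repository):

$$
\mathbb{E}_{p(x)}\!\left[D_{\mathrm{KL}}\big(q(z \mid x)\,\|\,p(z)\big)\right] = I_q(x; z) + D_{\mathrm{KL}}\big(q(z)\,\|\,p(z)\big),
$$

where $q(z) = \mathbb{E}_{p(x)}[q(z \mid x)]$ is the aggregated posterior and $I_q(x; z)$ is the mutual information under the joint $q(x, z) = p(x)\,q(z \mid x)$. Scaling the KL term by a large $\beta$ penalizes both terms, so pushing the regularizer toward zero also pushes $I_q(x; z)$ toward zero, leaving the latent code with little information about the input.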