masashi-hatano/beta-vae
# $\beta$-VAE

Image generation of MNIST via Beta-VAE

In this repository, we implemented a $\beta$-Variational Autoencoder ($\beta$-VAE) with a CNN architecture. The effect of the coefficient $\beta$ on the KL regularization term of the loss function can be observed in the results below.
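As a minimal illustration of the objective (a NumPy sketch, not the repository's actual training code), the $\beta$-VAE loss is the reconstruction term plus the KL divergence of a diagonal-Gaussian posterior from the standard-normal prior, weighted by $\beta$:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Bernoulli reconstruction term (binary cross-entropy), summed per image.
    eps = 1e-7
    recon = -np.sum(x * np.log(x_recon + eps)
                    + (1 - x) * np.log(1 - x_recon + eps), axis=-1)
    # beta = 1 recovers the ordinary VAE; larger beta penalizes the KL term harder.
    return np.mean(recon + beta * kl_diag_gaussian(mu, logvar))
```

With `beta=1` this is the standard VAE objective; increasing `beta` trades reconstruction quality for a more factorized (disentangled) latent code.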

## Visualization of manifolds in latent space

When $\beta$ is 2 or 4, the model achieves better disentanglement than when $\beta$ is 1; however, if the coefficient is too large ($\beta = 20$), the model can no longer reconstruct the images. This is because the KL-divergence term contains the mutual information between the input $x$ and the latent variable $z$: if that mutual information is driven to zero, the input can no longer be encoded faithfully.
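One way to make this precise (using the standard decomposition of the expected KL term, which is not spelled out in the original text) is:

$$\mathbb{E}_{p_{\mathrm{data}}(x)}\big[D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)\big] = I_q(x; z) + D_{\mathrm{KL}}\big(q_\phi(z)\,\|\,p(z)\big)$$

Since $\beta$ multiplies the left-hand side, a very large $\beta$ also heavily penalizes the mutual-information term $I_q(x; z)$; pushing it toward zero means $z$ carries almost no information about $x$, so reconstruction fails.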

- $\beta$ = 1: manifolds_beta=1
- $\beta$ = 2: manifolds_beta=2
- $\beta$ = 4: manifolds_beta=4
- $\beta$ = 10: manifolds_beta=10
- $\beta$ = 20: manifolds_beta=20
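The manifold figures above are produced by decoding a regular grid of points in a 2-D latent space and tiling the decoded digits into one canvas. A minimal sketch (the `dummy_decoder` below is a hypothetical stand-in for the trained model's decoder, used only to show the shapes involved):

```python
import numpy as np

def manifold_grid(decoder, n=20, img_side=28):
    # Sweep each latent coordinate over an evenly spaced range of the prior
    # and decode every (zx, zy) pair into an img_side x img_side image.
    grid = np.linspace(-3.0, 3.0, n)
    canvas = np.zeros((n * img_side, n * img_side))
    for i, zy in enumerate(grid):
        for j, zx in enumerate(grid):
            img = decoder(np.array([zx, zy])).reshape(img_side, img_side)
            canvas[i * img_side:(i + 1) * img_side,
                   j * img_side:(j + 1) * img_side] = img
    return canvas

def dummy_decoder(z):
    # Hypothetical stand-in for a trained decoder network (shape demo only).
    return np.full(28 * 28, np.tanh(np.linalg.norm(z)))

canvas = manifold_grid(dummy_decoder)
```

With a trained decoder, saving `canvas` as an image yields the grids shown for each value of $\beta$.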
