Personal PyTorch implementation of the paper "Ladder Variational Autoencoders" (LVAE) by Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., & Winther, O. (2016). The main purpose of this repository is to make the paper's implementation accessible and clear to people who are just getting started with Variational Autoencoders, without having to dig into highly optimized and hard-to-navigate libraries.
I trained the model with the hyperparameters given in the original paper: 5 hierarchical layers with 512, 256, 128, 64, and 32 hidden units, respectively. The latent dimensions are 64, 32, 16, 8, and 4, listed from the top layer down to the bottom one.
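As a minimal sketch of how those sizes fit together, the deterministic upward pass of an LVAE can be written as a stack of MLPs, each predicting the mean and log-variance of its layer's approximate posterior. This is an illustrative assumption, not the repository's actual code: the module name, the MLP depth (one linear layer per level), and the pairing of the size lists in the order quoted above are all hypothetical choices.

```python
import torch
import torch.nn as nn

# Hidden-unit and latent sizes quoted above; the pairing order here
# (first hidden size with first latent size) is an assumption.
HIDDEN = [512, 256, 128, 64, 32]
LATENT = [64, 32, 16, 8, 4]

class LadderEncoder(nn.Module):
    """Hypothetical bottom-up (deterministic) pass of a 5-layer LVAE."""

    def __init__(self, x_dim=784):  # 784 = flattened 28x28 MNIST digit
        super().__init__()
        dims = [x_dim] + HIDDEN
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.LeakyReLU())
            for i in range(len(HIDDEN))
        )
        # Each level predicts mean and log-variance of q(z_i | .)
        self.mu = nn.ModuleList(nn.Linear(h, z) for h, z in zip(HIDDEN, LATENT))
        self.logvar = nn.ModuleList(nn.Linear(h, z) for h, z in zip(HIDDEN, LATENT))

    def forward(self, x):
        stats = []
        h = x
        for mlp, mu, logvar in zip(self.mlps, self.mu, self.logvar):
            h = mlp(h)
            stats.append((mu(h), logvar(h)))
        # In the full LVAE these bottom-up statistics are later merged
        # with the top-down pass; only the upward half is sketched here.
        return stats

enc = LadderEncoder()
stats = enc(torch.randn(2, 784))
```

Running the sketch on a batch of 2 fake inputs yields 5 pairs of posterior parameters, one per layer, with shapes following the latent sizes above.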
I trained the model for only 200 epochs. We compute the log-likelihood of the trained model.
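The README does not spell out how the log-likelihood is computed; a standard estimator for VAEs (and the one used in the LVAE paper) is the K-sample importance-weighted bound. The sketch below is a generic, hypothetical helper that assumes the caller has already evaluated the three log-densities for K posterior samples per datapoint.

```python
import torch

def iw_log_likelihood(log_p_x_given_z, log_p_z, log_q_z_given_x):
    """Importance-weighted log-likelihood estimate (hypothetical helper).

    Each argument has shape (K, batch): the log-densities evaluated at
    K samples z_k ~ q(z|x).  The estimate is
        log p(x) ~= logsumexp_k [ log p(x|z_k) + log p(z_k) - log q(z_k|x) ] - log K
    """
    log_w = log_p_x_given_z + log_p_z - log_q_z_given_x  # log importance weights
    K = log_w.shape[0]
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(K)))

# Sanity check: with constant weights the estimate equals the constant.
est = iw_log_likelihood(torch.full((10, 3), -2.0),
                        torch.zeros(10, 3),
                        torch.zeros(10, 3))
```

With more samples K the bound tightens toward the true log-likelihood; the LVAE paper uses a large K (thousands of samples) for its reported numbers.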
TODO: this is far from being a complete repo. There are some changes I still want to make in my free time:
- train until convergence on a GPU
- factor repeated code into helper functions
- print more info during training
- support datasets other than MNIST