A Generative Adversarial Network (GAN) pairs a generator and a discriminator, implemented here as deconvolutional and convolutional neural networks respectively, together with pooling layers and activation functions. Starting from nothing but random noise fed to the generator, the model learns to produce data that closely resembles the original training and test data.
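A minimal sketch of the two roles, assuming toy one-layer stand-ins (the function names, shapes, and weights below are illustrative, not the project's actual deconvolutional/convolutional architecture): the generator maps random noise to a fake sample, and the discriminator scores how "real" that sample looks.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Maps random noise z to a "fake" sample; the real model would use
    # transposed (de)convolutions here, this is a one-layer stand-in.
    return np.tanh(z @ w)

def discriminator(x, v):
    # Scores a sample with a probability of being real; the real model
    # would use convolutional layers, this is a logistic stand-in.
    return 1.0 / (1.0 + np.exp(-(x @ v)))

noise_dim, data_dim = 4, 2
w = rng.normal(size=(noise_dim, data_dim))   # generator weights (illustrative)
v = rng.normal(size=(data_dim,))             # discriminator weights (illustrative)

z = rng.normal(size=(8, noise_dim))          # batch of random noise
fake = generator(z, w)                       # (8, 2) generated samples
scores = discriminator(fake, v)              # (8,) realness scores in (0, 1)
print(fake.shape, scores.shape)
```

During training the discriminator also sees real samples, and the two networks are updated against each other.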
In each iteration, the weights are updated according to the gradients of the loss functions of both the generator and the discriminator, so the model improves gradually over many iterations.
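The alternating updates can be sketched on a toy 1-D problem, assuming an affine generator, a logistic discriminator, and the standard non-saturating GAN losses with hand-derived gradients (all parameter names and hyperparameters below are made up for illustration, not taken from the project):

```python
import numpy as np

# Toy 1-D GAN: real data ~ N(3, 1); generator g(z) = a*z + b,
# discriminator d(x) = sigmoid(w*x + c). Illustrative only.
rng = np.random.default_rng(0)

a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

start_gap = abs(b - 3.0)   # |generator mean - real mean| before training

for _ in range(2000):
    z = rng.normal(size=batch)
    x_real = rng.normal(loc=3.0, size=batch)
    x_fake = a * z + b

    # Discriminator step: descend -[log d(real) + log(1 - d(fake))].
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: descend -log d(fake) (non-saturating loss).
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w          # dL_G / d x_fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

end_gap = abs(np.mean(a * rng.normal(size=1000) + b) - 3.0)
print(end_gap < start_gap)  # generated mean has moved toward the real mean
```

Each iteration performs exactly the loop described above: compute both losses, take their gradients, and update both sets of weights, so the generated distribution drifts toward the real one.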
The model writes its tensor graphs to a local directory so they can be visualized with TensorBoard, giving a better view of what happens across the inner iterations. Due to limited local computational power, I had to stop training after 16k iterations; training the same code in the cloud would be a good option here.
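The per-iteration losses TensorBoard plots come from scalar summaries logged each step. As a dependency-free stand-in for that event-file logging (the directory name and loss values below are invented; the real project emits TensorBoard event files, not a CSV), the pattern looks like:

```python
import csv, os, tempfile

# Append per-iteration discriminator losses to a log file, one row per
# step, so they can be plotted later. Illustrative stand-in for
# TensorBoard scalar summaries.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "losses.csv")

with open(log_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["iteration", "d_loss_real", "d_loss_fake"])
    for step in range(5):
        # In the real run these come from the discriminator's loss ops.
        d_loss_real = 1.0 / (step + 1)
        d_loss_fake = 0.5 / (step + 1)
        writer.writerow([step, d_loss_real, d_loss_fake])

with open(log_path) as f:
    rows = list(csv.reader(f))
print(len(rows))  # header row plus the five logged iterations
```

With TensorBoard itself, the same idea is one scalar summary per loss per iteration, written under a run directory that `tensorboard --logdir` then reads.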
Discriminator loss on generated samples over training iterations:
Discriminator loss on real samples over training iterations: