Generative Adversarial Networks (GANs) Math

Sarvesh Khetan
3 min read · Jun 14, 2024


Motivation

We saw autoencoders here, but the issue is that their loss function has a KL divergence term in it, and we already know the issues with KL divergence (it is asymmetric, and it blows up when the two distributions have non-overlapping support). Hence researchers wanted to change this loss function from KL divergence to JSD (Jensen-Shannon divergence), which is symmetric and always finite.
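For reference, here is the math behind that claim. The GAN objective from Goodfellow et al. (2014) is a minimax game, and plugging in the optimal discriminator shows that the generator is effectively minimising JSD:

```latex
\min_G \max_D V(D, G)
  = \mathbb{E}_{x \sim p_\text{data}}[\log D(x)]
  + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% For a fixed G, the optimal discriminator is
D^*(x) = \frac{p_\text{data}(x)}{p_\text{data}(x) + p_g(x)}

% and substituting it back into the objective gives
\max_D V(D, G) = -\log 4 + 2\,\mathrm{JSD}(p_\text{data} \,\|\, p_g)
```

So training the generator against an optimal discriminator is the same as minimising the Jensen-Shannon divergence between the data distribution and the generator's distribution.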

Solution — GANs

Anytime you want to generate something, always keep this trick in your toolkit: try moulding noisy data into the required form.

Hence, taking inspiration from the above trick, here we will try to convert random noise into fake data that resembles the given dataset!
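To make the trick concrete, here is a minimal sketch (PyTorch assumed; the layer sizes and the names LATENT_DIM and DATA_DIM are illustrative choices, not from the article) of a generator network that moulds random noise into data-shaped outputs:

```python
import torch
import torch.nn as nn

LATENT_DIM = 64   # dimensionality of the random noise z (illustrative)
DATA_DIM = 784    # e.g. flattened 28x28 images (illustrative)

# Generator: moulds random noise into data-shaped outputs
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, DATA_DIM),
    nn.Tanh(),                     # outputs scaled to [-1, 1]
)

z = torch.randn(16, LATENT_DIM)    # random noise
fake_data = G(z)                   # "fake" samples, shape (16, DATA_DIM)
```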

Step 1: Training the Classifier / Discriminator Network

Now, as ideated above, we need a classifier. The following diagram shows how we will use the original dataset (X) to train this classifier network; a code sketch follows the list below.

  • Here you train your classifier to classify real data (good data) as 1 and fake data (bad data) as 0
  • At this time you do NOT train the generator network; only the discriminator's weights are updated
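One discriminator update might look like this sketch (again PyTorch assumed; it reuses G, LATENT_DIM and DATA_DIM from the generator sketch above, and real_batch is a hypothetical batch drawn from X):

```python
import torch
import torch.nn as nn

# Discriminator: classifies real data as 1 and fake data as 0
D = nn.Sequential(
    nn.Linear(DATA_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, 1),   # raw logit; BCEWithLogitsLoss applies the sigmoid
)

bce = nn.BCEWithLogitsLoss()
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

def discriminator_step(real_batch):
    d_opt.zero_grad()
    # Real data -> label 1
    real_logits = D(real_batch)
    loss_real = bce(real_logits, torch.ones_like(real_logits))
    # Fake data -> label 0; detach() so the generator is NOT trained here
    z = torch.randn(real_batch.size(0), LATENT_DIM)
    fake_logits = D(G(z).detach())
    loss_fake = bce(fake_logits, torch.zeros_like(fake_logits))
    loss = loss_real + loss_fake
    loss.backward()
    d_opt.step()
    return loss.item()
```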

Step 2: Training the Generator Network
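
In this step the roles flip: the discriminator is frozen and only the generator is trained, pushing it to produce fake data that the discriminator classifies as 1 (real). A sketch of one generator update, reusing G, D, bce and LATENT_DIM from the sketches above:

```python
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)

def generator_step(batch_size):
    g_opt.zero_grad()
    z = torch.randn(batch_size, LATENT_DIM)
    # No detach() here: gradients flow through D back into G,
    # but only G's parameters are updated by g_opt
    fake_logits = D(G(z))
    # The generator wants the discriminator to output 1 on its fakes
    loss = bce(fake_logits, torch.ones_like(fake_logits))
    loss.backward()
    g_opt.step()
    return loss.item()
```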

Issues with GANs

  • Discriminator Overpowering Generator : Sometimes the discriminator begins to classify all the fake data correctly, i.e. it gets trained well enough to identify every generated sample. The generator's loss then saturates and it receives almost no useful gradient to improve.
  • Mode Collapse : The generator keeps producing very similar fake data irrespective of changes in the latent input. This happens because the generator may have discovered some weakness in the discriminator and keeps exploiting it.

Conclusion

This method of training, wherein two networks (the generator and the discriminator) compete with each other, is called ADVERSARIAL TRAINING / ADVERSARIAL LEARNING.
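Concretely, adversarial training is just an alternating loop over the two steps above. A sketch, where dataloader is a hypothetical iterator over batches of real data:

```python
EPOCHS = 50  # illustrative

for epoch in range(EPOCHS):
    for real_batch in dataloader:                     # real_batch: (B, DATA_DIM)
        d_loss = discriminator_step(real_batch)       # Step 1: train D, G frozen
        g_loss = generator_step(real_batch.size(0))   # Step 2: train G, D frozen
    print(f"epoch {epoch}: d_loss={d_loss:.3f}, g_loss={g_loss:.3f}")
```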

Now, once the GAN is trained, you can add its generated samples to your original input dataset to increase the number of datapoints, thus serving the purpose of data augmentation!
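A sketch of that augmentation step, where real_X is a hypothetical tensor holding the original dataset:

```python
with torch.no_grad():                                 # no gradients needed here
    synthetic = G(torch.randn(1000, LATENT_DIM))      # 1000 generated samples
augmented_X = torch.cat([real_X, synthetic], dim=0)   # original + generated data
```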
