
Discriminator with accuracy 1, generator fools it perfectly. What's going on? #45

Open
danielegrattarola opened this issue Aug 27, 2018 · 5 comments

@danielegrattarola

I'm training a GAN where the discriminator has almost perfect accuracy (generated samples are classified as 0, real samples as 1), yet at the same time the generator produces samples that fool the discriminator perfectly (they are always classified as 1).
I'm guessing this is a vanishing gradient problem, but I cannot figure out how to solve it. None of the usual tricks seem to work.

Does anybody have some suggestions?
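For what it's worth, the standard first mitigation when D is near-perfect and G's gradients vanish is the non-saturating generator loss (train G to maximize log D(G(z)) rather than minimize log(1 - D(G(z)))), often combined with one-sided label smoothing on the real labels. Below is a minimal PyTorch sketch of one training step, assuming hypothetical models `G` and `D` (with `D` returning logits), optimizers `opt_G`/`opt_D`, and a noise dimension `z_dim`; it is not the code from this issue, just an illustration.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, G, real_batch, opt_D, z_dim):
    opt_D.zero_grad()
    z = torch.randn(real_batch.size(0), z_dim, device=real_batch.device)
    fake_batch = G(z).detach()  # do not backprop into G during the D update
    real_logits = D(real_batch)
    fake_logits = D(fake_batch)
    # One-sided label smoothing: real targets are 0.9 instead of 1.0.
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits, torch.full_like(real_logits, 0.9))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    loss_D = loss_real + loss_fake
    loss_D.backward()
    opt_D.step()
    return loss_D.item()

def generator_step(D, G, opt_G, batch_size, z_dim):
    opt_G.zero_grad()
    device = next(G.parameters()).device
    z = torch.randn(batch_size, z_dim, device=device)
    fake_logits = D(G(z))
    # Non-saturating loss: label the fakes as real for G's objective,
    # which keeps gradients alive even when D confidently rejects the samples.
    loss_G = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
    loss_G.backward()
    opt_G.step()
    return loss_G.item()
```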

@davesean

What does the generator's output look like?

@adv-ai-tech

It is due to mode collapse. The generator finds a few samples that fool the discriminator and keeps producing only those, so it always fools D.
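A quick way to check whether this is actually happening is to measure how diverse a generated batch is. A rough sketch, assuming a hypothetical generator `G` and noise dimension `z_dim`:

```python
import torch

def batch_diversity(G, z_dim, batch_size=64):
    """Rough mode-collapse check: mean pairwise L2 distance within a generated batch.
    Values near zero suggest the generator is emitting (nearly) identical samples."""
    device = next(G.parameters()).device
    with torch.no_grad():
        z = torch.randn(batch_size, z_dim, device=device)
        samples = G(z).reshape(batch_size, -1)
        dists = torch.cdist(samples, samples)  # (batch_size, batch_size) pairwise distances
        mask = ~torch.eye(batch_size, dtype=torch.bool, device=device)  # off-diagonal only
        return dists[mask].mean().item()
```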

@zyong812
Copy link

I had the same problem. I solved it by modifying the network architecture of D; perhaps the previous D was too easy to fool.

@micklexqg

> It is due to mode collapse. The generator finds a few samples that fool the discriminator and keeps producing only those, so it always fools D.

So, how do you solve it?

@micklexqg

> I had the same problem. I solved it by modifying the network architecture of D; perhaps the previous D was too easy to fool.

I used the discriminator from the paper without any changes, but I still get the same problem. Why did I run into it when they did not?
