I am reading the "Adversarial Decomposition of Text Representation" paper along with the code. In the paper, the discriminator is said to be trained with a Wasserstein loss rather than cross entropy. In the code, however, it is trained with a cross-entropy loss plus an entropy loss that is not mentioned in the paper.
I am wondering why there is this inconsistency between the code and the paper. Is the code obsolete, or did the Wasserstein loss turn out not to work well?
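For concreteness, here is a minimal NumPy sketch (my own, not taken from the repository; all function names are hypothetical) of the two discriminator objectives being compared: the cross-entropy-plus-entropy formulation I see in the code, and the Wasserstein critic objective described in the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy_loss(logits, labels):
    # Discriminator objective as it appears in the code (as I read it):
    # standard cross entropy against the true attribute label.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def entropy_loss(logits):
    # Adversarial term: returns the negative entropy of the predictions,
    # so minimizing it pushes the discriminator toward uniform (maximally
    # uncertain) outputs, i.e. the encoder "fools" the discriminator.
    p = softmax(logits)
    return np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))

def wasserstein_critic_loss(scores_real, scores_fake):
    # Wasserstein critic objective as described in the paper: maximize
    # E[f(real)] - E[f(fake)], written here as a loss to minimize.
    return np.mean(scores_fake) - np.mean(scores_real)
```

With uniform logits, `cross_entropy_loss` gives log(2) for two classes and `entropy_loss` gives -log(2), which is its minimum, illustrating how the entropy term rewards an undecided discriminator.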
Thanks.
Thank you for releasing the code.