The ADDA results are indeed impressive, but I am wondering how it compares to two simpler baselines:
1) train on MNIST, then fine-tune on the small USPS dataset;
2) mix MNIST and the small USPS dataset, and train on the mixed dataset.
I tried 1) and 2) on a document classification (NLP) task, and both worked very well, improving target classification accuracy from 0.74 to 0.87. So how does ADDA compare to 1) and 2)?
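To make the two baselines concrete, here is a minimal sketch of their data flow using a toy nearest-centroid classifier on 1-D features. All names and data here are illustrative stand-ins (the actual experiments would use MNIST/USPS images and a CNN); "fine-tuning" is crudely approximated by averaging the target-fit parameters with the pretrained ones.

```python
def fit_centroids(data, centroids=None):
    """Compute per-class mean feature; if pretrained `centroids` are given,
    average them in (a crude stand-in for fine-tuning from pretrained weights)."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    new = {y: sums[y] / counts[y] for y in sums}
    if centroids:
        for y in centroids:
            new[y] = (new.get(y, centroids[y]) + centroids[y]) / 2
    return new

def predict(centroids, x):
    # Assign x to the class with the nearest centroid.
    return min(centroids, key=lambda y: abs(centroids[y] - x))

source = [(0.0, "a"), (0.2, "a"), (1.0, "b"), (1.2, "b")]  # large source set ("MNIST")
target = [(0.4, "a"), (1.6, "b")]                          # small target set ("USPS")

# Baseline 1): train on the source set, then fine-tune on the small target set.
pretrained = fit_centroids(source)
finetuned = fit_centroids(target, centroids=pretrained)

# Baseline 2): train once on the mixed (source + target) dataset.
mixed = fit_centroids(source + target)
```

Unlike ADDA, neither baseline is unsupervised: both 1) and 2) require labeled target data, which is the key difference a fair comparison would have to account for.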