Thanks for your excellent work, but it seems that when finetuning the model on new-domain data, the mapping net (8-layer MLP) is not actually frozen, which conflicts with your paper. Although requires_grad=False is set in L422-425 of train.py, gradients are re-enabled in L229, and G_optimizer optimizes all parameters of G. When I print the 8-MLP parameters of the original model and the finetuned model, they do indeed differ.
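For anyone hitting this before a fix lands, here is a minimal sketch of one way to keep the mapping net frozen: filter the frozen parameters out when constructing the optimizer, so that a later requires_grad_(True) (as in train.py L229) cannot re-enable updates through it. This is not the repo's actual code; the `mapping`/`synthesis` attribute names and the optimizer hyperparameters below are illustrative stand-ins.

```python
import torch
import torch.nn as nn

# Stand-in generator with a StyleGAN-style layout: `mapping` is the
# 8-layer MLP and `synthesis` stands in for the rest of G. The real
# attribute names in the repo may differ.
class Generator(nn.Module):
    def __init__(self, w_dim: int = 512):
        super().__init__()
        self.mapping = nn.Sequential(*[nn.Linear(w_dim, w_dim) for _ in range(8)])
        self.synthesis = nn.Linear(w_dim, w_dim)

G = Generator()

# 1) Freeze the mapping net *before* constructing the optimizer.
for p in G.mapping.parameters():
    p.requires_grad_(False)

# 2) Hand the optimizer only the still-trainable parameters instead of
#    G.parameters(). Even if requires_grad is flipped back on later
#    (as in train.py L229), this optimizer never steps the mapping-net
#    weights, because they were filtered out at construction time.
G_optimizer = torch.optim.Adam(
    [p for p in G.parameters() if p.requires_grad],
    lr=2e-3, betas=(0.0, 0.99),  # illustrative values only
)

# Sanity check: no mapping-net parameter made it into the optimizer.
mapping_ids = {id(p) for p in G.mapping.parameters()}
assert all(id(p) not in mapping_ids
           for group in G_optimizer.param_groups
           for p in group["params"])
```

Note that even with this filtering, gradients may still be computed for the mapping net if requires_grad is re-enabled later; the optimizer simply never applies them. To also save that compute, the mapping net would need to stay frozen (or be re-frozen) after L229.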
Thanks for pointing this out. Some lines may have gone missing when migrating the code from our server to the public repo. We'll look into it and update the code soon.