About Loss of InfoNCE and Cluster_results #22
Comments
Same question.
For the first question, you can refer to the moco_v1 code, where cross-entropy is applied directly to compute InfoNCE: the positive logit is concatenated at column 0, so the all-zero target always points at the positive pair and the loss is not zero in general. As for the second question, they use the eval_dataset to build the negative prototypes, and in the line of code:
output, target, output_proto, target_proto = model(im_q=images[0], im_k=images[1], cluster_result=cluster_result, index=index)
the passed index is what links each training sample to its cluster assignment in cluster_result.
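To make that first point concrete, here is a minimal sketch of the MoCo-style InfoNCE computation (patterned after moco_v1, not the exact code of this repo). Because the positive logit is concatenated at column 0, a label of 0 for every sample always indexes the positive pair, and the cross-entropy is non-zero whenever any negative competes with it.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q, k, queue, temperature=0.07):
    """MoCo-style InfoNCE: the positive logit sits at column 0, so the target is all zeros."""
    # q, k: (N, C) L2-normalized query/key features; queue: (C, K) bank of negative keys
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)   # (N, 1) positive similarities
    l_neg = torch.einsum("nc,ck->nk", q, queue)            # (N, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    # cross_entropy with target 0 is -log softmax(logits)[:, 0]; it approaches zero only
    # when the positive dominates every negative, so in general the loss is non-zero.
    return F.cross_entropy(logits, labels)
```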
Hi,
I notice that the labels created in the InfoNCE loss are always a zero vector:
PCL/pcl/builder.py
Line 163 in 964da1f
I think this is wrong, since then the loss would always be zero. Did I misunderstand the code?
In creating the cluster_result dictionary, I found that only the eval dataset is taken into consideration:
PCL/main_pcl.py
Line 299 in 964da1f
So what is the motivation behind this operation? I think it should be run on the training set.
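To illustrate the second point, here is a hedged sketch of how a cluster_result dictionary can be assembled. It assumes (following my reading of main_pcl.py) that the eval loader serves the training images with deterministic, non-augmented transforms and yields (images, index) pairs, and that model(im_q=..., is_eval=True) returns normalized features as in pcl/builder.py; treat those names and the dictionary keys as assumptions, not the repo's exact code.

```python
import faiss
import numpy as np
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_cluster_result(model, eval_loader, num_clusters, feat_dim):
    # 1) Extract features over the eval-style loader (training images, deterministic transforms).
    feats = torch.zeros(len(eval_loader.dataset), feat_dim)
    for images, index in eval_loader:
        feats[index] = model(im_q=images, is_eval=True).cpu()   # assumed signature; device handling omitted

    # 2) Spherical k-means over the features; each sample gets its nearest centroid.
    x = feats.numpy().astype(np.float32)
    kmeans = faiss.Kmeans(feat_dim, num_clusters, niter=20, spherical=True)
    kmeans.train(x)
    _, assign = kmeans.index.search(x, 1)

    im2cluster = torch.from_numpy(assign.squeeze(1)).long()
    centroids = F.normalize(torch.from_numpy(kmeans.centroids), dim=1)
    density = torch.ones(num_clusters)   # placeholder; PCL estimates a per-cluster concentration

    # 3) Keys mirror what the prototype loss expects (assumption based on pcl/builder.py).
    return {"im2cluster": [im2cluster], "centroids": [centroids], "density": [density]}
```

If that reading is right, "eval" here refers to the transform pipeline rather than a held-out split, so the prototypes are still computed from the training images; clustering just avoids the random augmentations used for the contrastive branch.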