reproduction problem #27
It seems to have failed completely. I have not encountered this problem before. Did you load all pre-trained weights as instructed?
Yes, I loaded the pre-trained weights. In addition, I tested the best model on Pitts30k that you provide (https://drive.google.com/drive/folders/1FLjxFhKRO-YJQ6FI-DcCMMHDL2K_Hsof), and the score is:
I cannot locate your issue. Providing your full testing log might help.
This is the log of the first evaluation of my initial model:

```
Args: Namespace(arch='vgg16', cache_size=1000, data_dir='/data/yangling/datasets', dataset='pitts', deterministic=False, epochs=5, eval_step=1, features=4096, generations=4, gpu=0, height=480, init_dir='/data/yangling/openIBL/examples/../logs', iters=0, launcher='pytorch', layers='conv5', logs_dir='logs/netVLAD/pitts30k-vgg16/conv5-sare_ind-lr0.001-tuple1-SFRS', loss_type='sare_ind', lr=0.001, margin=0.1, momentum=0.9, neg_num=10, neg_pool=1000, ngpus_per_node=1, nowhiten=False, num_clusters=64, pos_num=10, pos_pool=20, print_freq=10, rank=0, resume='', scale='30k', seed=43, soft_weight=0.5, step_size=5, sync_gather=False, syncbn=True, tcp_port='19294', temperature=[0.07, 0.07, 0.06, 0.05], test_batch_size=8, tuple_size=1, weight_decay=0.001, width=640, workers=2, world_size=1)
Pittsburgh dataset loaded
```
If you have retested the model correctly, could you please zip your local folder and send it to my email ([email protected])? I would appreciate it.
It shows resume='', indicating that you did not load any model weights.
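For context, in a typical PyTorch-style training script the `--resume` flag defaults to the empty string, so a training-from-scratch run logs `resume=''` while an evaluation-only run should show a checkpoint path. The sketch below is a minimal, hypothetical parser illustrating that distinction; it is not OpenIBL's actual argument handling.

```python
import argparse

# Minimal sketch of a --resume flag as used in many PyTorch training
# scripts (hypothetical; not OpenIBL's actual parser). Training from
# scratch leaves resume='', while evaluating a checkpoint passes a path.
def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument('--resume', type=str, default='',
                        help="path to a checkpoint, e.g. model_best.pth.tar")
    return parser

# A fresh training run: no --resume given, so the log shows resume=''.
train_args = build_parser().parse_args([])

# An evaluation run: --resume points at the downloaded checkpoint.
test_args = build_parser().parse_args(
    ['--resume', 'logs/ori/model_best.pth.tar'])
```

So a log line containing `resume=''` only means that particular run started from scratch; it says nothing about whether a separate testing run loaded the weights.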
That was my training log, which is why it shows resume=''.
And this is my testing log; the model was downloaded from the URL you provided:

```
Use GPU: 0 for testing, rank no.0 of world_size 1
Args: Namespace(arch='vgg16', data_dir='/data/yangling/test/OpenIBL-0.1.0-beta/examples/data', dataset='pitts', features=4096, gpu=0, height=480, lambda_value=0, launcher='pytorch', logs_dir='/data/yangling/test/OpenIBL-0.1.0-beta/examples/logs', ngpus_per_node=1, nowhiten=False, num_clusters=64, print_freq=10, rank=0, reduction=True, rerank=False, resume='logs/ori/model_best.pth.tar', rr_topk=25, scale='30k', sync_gather=False, tcp_port='5017', test_batch_size=6, vlad=True, width=640, workers=2, world_size=1)
Pittsburgh dataset loaded
```
I suspect some default dataset-preprocessing settings in your current code are wrong, because the evaluation results of both the model you provide and the model I trained are similarly much lower than what you report.
I just ran the code without any changes, and the initial recall scores are as follows:
Recall Scores:
top-1 1.2%
top-5 4.6%
top-10 8.7%
Then I continued training until generation 1, epoch 2, with these recall scores:
Recall Scores:
top-1 1.4%
top-5 6.3%
top-10 11.9%
There seems to be no obvious improvement. Is that right?
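For reference, recall@N in place-recognition benchmarks like Pitts30k is usually computed as the fraction of queries whose top-N retrieved database images contain at least one true positive. The sketch below shows that standard computation (it is not OpenIBL's exact evaluator; the function name and argument layout are my own):

```python
# Sketch of the standard recall@N metric for retrieval-based place
# recognition (hypothetical helper, not OpenIBL's actual evaluator).
# A query counts as a hit at rank N if ANY of its top-N retrieved
# database images is a ground-truth positive.
def recall_at_n(ranked_db_ids, ground_truth, ns=(1, 5, 10)):
    """ranked_db_ids: per-query list of database ids sorted by distance.
    ground_truth: per-query set of correct database ids."""
    hits = {n: 0 for n in ns}
    for ranked, positives in zip(ranked_db_ids, ground_truth):
        for n in ns:
            if any(db_id in positives for db_id in ranked[:n]):
                hits[n] += 1
    num_queries = len(ranked_db_ids)
    return {n: 100.0 * hits[n] / num_queries for n in ns}
```

Because recall@N is monotone in N, top-1 ≤ top-5 ≤ top-10 always holds, as in the logs above; scores around 1% top-1 on Pitts30k are close to what random retrieval would give, which is why they point to a setup problem rather than a weak model.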