
About Normalize in get_transformer_train and get_transformer_test #26

Open
sihan-fu-98 opened this issue Jul 14, 2021 · 1 comment

sihan-fu-98 commented Jul 14, 2021

Thanks for your work!
I saw that you are using
T.Normalize(mean=[0.48501960784313836, 0.4579568627450961, 0.4076039215686255],
            std=[0.00392156862745098, 0.00392156862745098, 0.00392156862745098])
in get_transformer_train and get_transformer_test, which differs from the commonly used
T.Normalize(mean=[0.485, 0.456, 0.406],
            std=[0.229, 0.224, 0.225]),
which is also what https://github.com/Nanne/pytorch-NetVlad uses.
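For reference, the two schemes assume different input scales. torchvision's T.Normalize computes (x - mean) / std per channel, and 0.00392156862745098 is exactly 1/255, so the first scheme maps a [0, 1] tensor back onto the [0, 255] range and subtracts the Caffe-style VGG means (123.68, 116.779, 103.939). A minimal sketch of the arithmetic, using NumPy in place of torchvision so it is explicit:

```python
import numpy as np

def normalize(x, mean, std):
    """Per-channel (x - mean) / std, mirroring torchvision's T.Normalize."""
    mean = np.asarray(mean).reshape(-1, 1, 1)
    std = np.asarray(std).reshape(-1, 1, 1)
    return (x - mean) / std

# A fake 3x2x2 image tensor in [0, 1], as T.ToTensor() would produce.
x = np.random.rand(3, 2, 2)

# Scheme used in this repo: std is exactly 1/255.
repo_mean = [0.48501960784313836, 0.4579568627450961, 0.4076039215686255]
repo_std = [0.00392156862745098] * 3
y_repo = normalize(x, repo_mean, repo_std)

# Equivalent Caffe-style preprocessing: scale to [0, 255], subtract the means.
caffe_mean = np.array([123.68, 116.779, 103.939]).reshape(-1, 1, 1)
y_caffe = x * 255.0 - caffe_mean
assert np.allclose(y_repo, y_caffe)

# Scheme used by torchvision's ImageNet-pretrained models (and Nanne/pytorch-NetVlad):
imagenet_mean = [0.485, 0.456, 0.406]
imagenet_std = [0.229, 0.224, 0.225]
y_imagenet = normalize(x, imagenet_mean, imagenet_std)
```

This is why the two normalizations are not interchangeable: each matches the preprocessing the corresponding pretrained weights were trained with.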
(1) Is it because you are using the pretrained model vgg16_pitts_64_desc_cen.hdf5, which matches the former mean and standard deviation, while the pretrained VGG model in torchvision uses the latter? Similarly, the learning rate you are using is 0.001, while Nanne/pytorch-NetVlad uses 0.0001; is that for the same reason?
(2) I'm trying to reproduce the SARE-joint result under the framework of Nanne/pytorch-NetVlad. I added the loss function written on my own, and I'm using the pretrained model vgg16_pitts_64_desc_cen.hdf5. The learning rate is still 0.0001, as in Nanne/pytorch-NetVlad, and the mean and standard deviation also follow Nanne/pytorch-NetVlad. But I cannot achieve the 89% Recall@1 result on Pitts250k-test. I only have 3 GPUs to use, so I have to set the batch size to
(3) I didn't add T.ColorJitter(0.7, 0.7, 0.7, 0.5) in get_transformer_train, as Nanne/pytorch-NetVlad did.
Is it due to the batch size or T.ColorJitter, or should I just use the pretrained VGG model from torchvision with T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])? Or could you give me some other suggestions?
Thanks so much for your help!

sihan-fu-98 changed the title from "qq" to "About Normalize in get_transformer_train and get_transformer_test" on Jul 14, 2021
yxgeee (Owner) commented Jul 27, 2021

(1) Yes, the mean and std are chosen to match the pre-trained VGG model. Note that the official code of NetVLAD and of SARE both use 0.001 as the learning rate; please check.
(2) I did not try to reproduce the results with Nanne/pytorch-NetVlad, so I cannot pinpoint the problem exactly. Why not try using this repo?
