
The pretrained weight of the background reconstruction #25

Open
SunWeiLin-Lynne opened this issue Aug 17, 2023 · 5 comments


@SunWeiLin-Lynne

Hello, thank you very much for your excellent work!
When I load the pretrained background weights (model_best_bg.pth), the following error occurs. Could you update the pretrained weights for background reconstruction?

*** RuntimeError: Error(s) in loading state_dict for BGPIFu_Net:
        Missing key(s) in state_dict: "global_encoder.conv1.weight", "global_encoder.bn1.weight", "global_encoder.bn1.bias", "global_encoder.bn1.running_mean", "global_encoder.bn1.running_var", "global_encoder.layer1.0.conv1.weight", "global_encoder.layer1.0.conv2.weight", "global_encoder.layer1.1.conv1.weight", "global_encoder.layer1.1.conv2.weight", "global_encoder.layer2.0.conv1.weight", "global_encoder.layer2.0.conv2.weight", "global_encoder.layer2.0.downsample.0.weight", "global_encoder.layer2.0.downsample.1.weight", "global_encoder.layer2.0.downsample.1.bias", "global_encoder.layer2.0.downsample.1.running_mean", "global_encoder.layer2.0.downsample.1.running_var", "global_encoder.layer2.1.conv1.weight", "global_encoder.layer2.1.conv2.weight", "global_encoder.layer3.0.conv1.weight", "global_encoder.layer3.0.conv2.weight", "global_encoder.layer3.0.downsample.0.weight", "global_encoder.layer3.0.downsample.1.weight", "global_encoder.layer3.0.downsample.1.bias", "global_encoder.layer3.0.downsample.1.running_mean", "global_encoder.layer3.0.downsample.1.running_var", "global_encoder.layer3.1.conv1.weight", "global_encoder.layer3.1.conv2.weight", "global_encoder.layer4.0.conv1.weight", "global_encoder.layer4.0.conv2.weight", "global_encoder.layer4.0.downsample.0.weight", "global_encoder.layer4.0.downsample.1.weight", "global_encoder.layer4.0.downsample.1.bias", "global_encoder.layer4.0.downsample.1.running_mean", "global_encoder.layer4.0.downsample.1.running_var", "global_encoder.layer4.1.conv1.weight", "global_encoder.layer4.1.conv2.weight", "global_encoder.fc.weight", "global_encoder.fc.bias".
        Unexpected key(s) in state_dict: "global_surface_classifier.conv0.weight", "global_surface_classifier.conv0.bias", "global_surface_classifier.conv1.weight", "global_surface_classifier.conv1.bias", "global_surface_classifier.conv2.weight", "global_surface_classifier.conv2.bias", "global_surface_classifier.conv3.weight", "global_surface_classifier.conv3.bias", "mask_decoder.0.weight", "mask_decoder.0.bias", "mask_decoder.2.weight", "mask_decoder.2.bias", "mask_decoder.4.weight", "mask_decoder.4.bias", "mask_decoder.6.weight", "mask_decoder.6.bias", "post_op_module.channel_mlp.0.weight", "post_op_module.channel_mlp.0.bias", "post_op_module.channel_mlp.2.weight", "post_op_module.channel_mlp.2.bias", "post_op_module.channel_mlp.4.weight", "post_op_module.channel_mlp.4.bias", "post_op_module.post_conv.0.weight", "post_op_module.post_conv.0.bias", "post_op_module.post_conv.2.weight", "post_op_module.post_conv.2.bias", "post_op_module.post_conv.4.weight", "post_op_module.post_conv.4.bias", "post_op_module.post_conv.6.weight", "post_op_module.post_conv.6.bias", "post_op_module.pre_conv.0.weight", "post_op_module.pre_conv.0.bias", "post_op_module.pre_conv.2.weight", "post_op_module.pre_conv.2.bias", "global_encoder.0.weight", "global_encoder.0.bias", "global_encoder.1.conv1.weight", "global_encoder.1.bn1.weight", "global_encoder.1.bn1.bias", "global_encoder.1.bn1.running_mean", "global_encoder.1.bn1.running_var", "global_encoder.1.bn1.num_batches_tracked", "global_encoder.1.layer1.0.conv1.weight", "global_encoder.1.layer1.0.conv2.weight", "global_encoder.1.layer1.1.conv1.weight", "global_encoder.1.layer1.1.conv2.weight", "global_encoder.1.layer2.0.conv1.weight", "global_encoder.1.layer2.0.conv2.weight", "global_encoder.1.layer2.0.downsample.0.weight", "global_encoder.1.layer2.0.downsample.1.weight", "global_encoder.1.layer2.0.downsample.1.bias", "global_encoder.1.layer2.0.downsample.1.running_mean", "global_encoder.1.layer2.0.downsample.1.running_var", "global_encoder.1.layer2.0.downsample.1.num_batches_tracked", "global_encoder.1.layer2.1.conv1.weight", "global_encoder.1.layer2.1.conv2.weight", "global_encoder.1.layer3.0.conv1.weight", "global_encoder.1.layer3.0.conv2.weight", "global_encoder.1.layer3.0.downsample.0.weight", "global_encoder.1.layer3.0.downsample.1.weight", "global_encoder.1.layer3.0.downsample.1.bias", "global_encoder.1.layer3.0.downsample.1.running_mean", "global_encoder.1.layer3.0.downsample.1.running_var", "global_encoder.1.layer3.0.downsample.1.num_batches_tracked", "global_encoder.1.layer3.1.conv1.weight", "global_encoder.1.layer3.1.conv2.weight", "global_encoder.1.layer4.0.conv1.weight", "global_encoder.1.layer4.0.conv2.weight", "global_encoder.1.layer4.0.downsample.0.weight", "global_encoder.1.layer4.0.downsample.1.weight", "global_encoder.1.layer4.0.downsample.1.bias", "global_encoder.1.layer4.0.downsample.1.running_mean", "global_encoder.1.layer4.0.downsample.1.running_var", "global_encoder.1.layer4.0.downsample.1.num_batches_tracked", "global_encoder.1.layer4.1.conv1.weight", "global_encoder.1.layer4.1.conv2.weight", "global_encoder.2.weight", "global_encoder.2.bias", "global_encoder.4.weight", "global_encoder.4.bias", "global_encoder.6.weight", "global_encoder.6.bias".
        size mismatch for surface_classifier.conv0.weight: copying a param with shape torch.Size([1024, 549, 1]) from checkpoint, the shape in current model is torch.Size([1024, 1283, 1]).
        size mismatch for surface_classifier.conv1.weight: copying a param with shape torch.Size([512, 1573, 1]) from checkpoint, the shape in current model is torch.Size([512, 2307, 1]).
        size mismatch for surface_classifier.conv2.weight: copying a param with shape torch.Size([256, 1061, 1]) from checkpoint, the shape in current model is torch.Size([256, 1795, 1]).
        size mismatch for surface_classifier.conv3.weight: copying a param with shape torch.Size([128, 805, 1]) from checkpoint, the shape in current model is torch.Size([128, 1539, 1]).
        size mismatch for surface_classifier.conv4.weight: copying a param with shape torch.Size([1, 677, 1]) from checkpoint, the shape in current model is torch.Size([1, 1411, 1]).
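
The missing/unexpected key lists in this traceback can also be inspected directly, without triggering the exception. A minimal diagnostic sketch, assuming `net` is an already-constructed BGPIFu_Net and that the .pth file deserializes straight to a state dict (the variable name `net` and the file path here are illustrative, not from the repo):

```python
import torch

# Load the checkpoint on CPU and compare its keys against the keys the
# current model expects, before calling load_state_dict.
ckpt = torch.load("model_best_bg.pth", map_location="cpu")

ckpt_keys = set(ckpt.keys())
model_keys = set(net.state_dict().keys())

# Large sets on both sides usually mean the wrong checkpoint was loaded
# into this architecture.
print("missing from checkpoint:", sorted(model_keys - ckpt_keys)[:5])
print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys)[:5])
```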
@HaolinLiu97
Collaborator

Hi, sorry for the late reply. I have checked the weights again, and model_best_bg.pth should be correct. Is it possible that you loaded the pretrained weights for the object model into the background network?
Here are the weight keys for model_best_bg.pth:
odict_keys(['module.image_filter.conv1.weight', 'module.image_filter.conv1.bias', 'module.image_filter.bn1.weight', 'module.image_filter.bn1.bias', 'module.image_filter.conv2.conv1.weight', 'module.image_filter.conv2.conv2.weight', 'module.image_filter.conv2.conv3.weight', 'module.image_filter.conv2.bn1.weight', 'module.image_filter.conv2.bn1.bias', 'module.image_filter.conv2.bn2.weight', 'module.image_filter.conv2.bn2.bias', 'module.image_filter.conv2.bn3.weight', 'module.image_filter.conv2.bn3.bias', 'module.image_filter.conv2.bn4.weight', 'module.image_filter.conv2.bn4.bias', 'module.image_filter.conv2.downsample.0.weight', 'module.image_filter.conv2.downsample.0.bias', 'module.image_filter.conv2.downsample.2.weight', 'module.image_filter.conv3.conv1.weight', 'module.image_filter.conv3.conv2.weight', 'module.image_filter.conv3.conv3.weight', 'module.image_filter.conv3.bn1.weight', 'module.image_filter.conv3.bn1.bias', 'module.image_filter.conv3.bn2.weight', 'module.image_filter.conv3.bn2.bias', 'module.image_filter.conv3.bn3.weight', 'module.image_filter.conv3.bn3.bias', 'module.image_filter.conv3.bn4.weight', 'module.image_filter.conv3.bn4.bias', 'module.image_filter.conv4.conv1.weight', 'module.image_filter.conv4.conv2.weight', 'module.image_filter.conv4.conv3.weight', 'module.image_filter.conv4.bn1.weight', 'module.image_filter.conv4.bn1.bias', 'module.image_filter.conv4.bn2.weight', 'module.image_filter.conv4.bn2.bias', 'module.image_filter.conv4.bn3.weight', 'module.image_filter.conv4.bn3.bias', 'module.image_filter.conv4.bn4.weight', 'module.image_filter.conv4.bn4.bias', 'module.image_filter.conv4.downsample.0.weight', 'module.image_filter.conv4.downsample.0.bias', 'module.image_filter.conv4.downsample.2.weight', 'module.image_filter.m0.b1_2.conv1.weight', 'module.image_filter.m0.b1_2.conv2.weight', 'module.image_filter.m0.b1_2.conv3.weight', 'module.image_filter.m0.b1_2.bn1.weight', 'module.image_filter.m0.b1_2.bn1.bias', 'module.image_filter.m0.b1_2.bn2.weight', 'module.image_filter.m0.b1_2.bn2.bias', 'module.image_filter.m0.b1_2.bn3.weight', 'module.image_filter.m0.b1_2.bn3.bias', 'module.image_filter.m0.b1_2.bn4.weight', 'module.image_filter.m0.b1_2.bn4.bias', 'module.image_filter.m0.b2_2.conv1.weight', 'module.image_filter.m0.b2_2.conv2.weight', 'module.image_filter.m0.b2_2.conv3.weight', 'module.image_filter.m0.b2_2.bn1.weight', 'module.image_filter.m0.b2_2.bn1.bias', 'module.image_filter.m0.b2_2.bn2.weight', 'module.image_filter.m0.b2_2.bn2.bias', 'module.image_filter.m0.b2_2.bn3.weight', 'module.image_filter.m0.b2_2.bn3.bias', 'module.image_filter.m0.b2_2.bn4.weight', 'module.image_filter.m0.b2_2.bn4.bias', 'module.image_filter.m0.b1_1.conv1.weight', 'module.image_filter.m0.b1_1.conv2.weight', 'module.image_filter.m0.b1_1.conv3.weight', 'module.image_filter.m0.b1_1.bn1.weight', 'module.image_filter.m0.b1_1.bn1.bias', 'module.image_filter.m0.b1_1.bn2.weight', 'module.image_filter.m0.b1_1.bn2.bias', 'module.image_filter.m0.b1_1.bn3.weight', 'module.image_filter.m0.b1_1.bn3.bias', 'module.image_filter.m0.b1_1.bn4.weight', 'module.image_filter.m0.b1_1.bn4.bias', 'module.image_filter.m0.b2_1.conv1.weight', 'module.image_filter.m0.b2_1.conv2.weight', 'module.image_filter.m0.b2_1.conv3.weight', 'module.image_filter.m0.b2_1.bn1.weight', 'module.image_filter.m0.b2_1.bn1.bias', 'module.image_filter.m0.b2_1.bn2.weight', 'module.image_filter.m0.b2_1.bn2.bias', 'module.image_filter.m0.b2_1.bn3.weight', 'module.image_filter.m0.b2_1.bn3.bias', 'module.image_filter.m0.b2_1.bn4.weight', 
'module.image_filter.m0.b2_1.bn4.bias', 'module.image_filter.m0.b2_plus_1.conv1.weight', 'module.image_filter.m0.b2_plus_1.conv2.weight', 'module.image_filter.m0.b2_plus_1.conv3.weight', 'module.image_filter.m0.b2_plus_1.bn1.weight', 'module.image_filter.m0.b2_plus_1.bn1.bias', 'module.image_filter.m0.b2_plus_1.bn2.weight', 'module.image_filter.m0.b2_plus_1.bn2.bias', 'module.image_filter.m0.b2_plus_1.bn3.weight', 'module.image_filter.m0.b2_plus_1.bn3.bias', 'module.image_filter.m0.b2_plus_1.bn4.weight', 'module.image_filter.m0.b2_plus_1.bn4.bias', 'module.image_filter.m0.b3_1.conv1.weight', 'module.image_filter.m0.b3_1.conv2.weight', 'module.image_filter.m0.b3_1.conv3.weight', 'module.image_filter.m0.b3_1.bn1.weight', 'module.image_filter.m0.b3_1.bn1.bias', 'module.image_filter.m0.b3_1.bn2.weight', 'module.image_filter.m0.b3_1.bn2.bias', 'module.image_filter.m0.b3_1.bn3.weight', 'module.image_filter.m0.b3_1.bn3.bias', 'module.image_filter.m0.b3_1.bn4.weight', 'module.image_filter.m0.b3_1.bn4.bias', 'module.image_filter.m0.b3_2.conv1.weight', 'module.image_filter.m0.b3_2.conv2.weight', 'module.image_filter.m0.b3_2.conv3.weight', 'module.image_filter.m0.b3_2.bn1.weight', 'module.image_filter.m0.b3_2.bn1.bias', 'module.image_filter.m0.b3_2.bn2.weight', 'module.image_filter.m0.b3_2.bn2.bias', 'module.image_filter.m0.b3_2.bn3.weight', 'module.image_filter.m0.b3_2.bn3.bias', 'module.image_filter.m0.b3_2.bn4.weight', 'module.image_filter.m0.b3_2.bn4.bias', 'module.image_filter.top_m_0.conv1.weight', 'module.image_filter.top_m_0.conv2.weight', 'module.image_filter.top_m_0.conv3.weight', 'module.image_filter.top_m_0.bn1.weight', 'module.image_filter.top_m_0.bn1.bias', 'module.image_filter.top_m_0.bn2.weight', 'module.image_filter.top_m_0.bn2.bias', 'module.image_filter.top_m_0.bn3.weight', 'module.image_filter.top_m_0.bn3.bias', 'module.image_filter.top_m_0.bn4.weight', 'module.image_filter.top_m_0.bn4.bias', 'module.image_filter.conv_last0.weight', 'module.image_filter.conv_last0.bias', 'module.image_filter.bn_end0.weight', 'module.image_filter.bn_end0.bias', 'module.image_filter.l0.weight', 'module.image_filter.l0.bias', 'module.image_filter.bl0.weight', 'module.image_filter.bl0.bias', 'module.image_filter.al0.weight', 'module.image_filter.al0.bias', 'module.image_filter.m1.b1_2.conv1.weight', 'module.image_filter.m1.b1_2.conv2.weight', 'module.image_filter.m1.b1_2.conv3.weight', 'module.image_filter.m1.b1_2.bn1.weight', 'module.image_filter.m1.b1_2.bn1.bias', 'module.image_filter.m1.b1_2.bn2.weight', 'module.image_filter.m1.b1_2.bn2.bias', 'module.image_filter.m1.b1_2.bn3.weight', 'module.image_filter.m1.b1_2.bn3.bias', 'module.image_filter.m1.b1_2.bn4.weight', 'module.image_filter.m1.b1_2.bn4.bias', 'module.image_filter.m1.b2_2.conv1.weight', 'module.image_filter.m1.b2_2.conv2.weight', 'module.image_filter.m1.b2_2.conv3.weight', 'module.image_filter.m1.b2_2.bn1.weight', 'module.image_filter.m1.b2_2.bn1.bias', 'module.image_filter.m1.b2_2.bn2.weight', 'module.image_filter.m1.b2_2.bn2.bias', 'module.image_filter.m1.b2_2.bn3.weight', 'module.image_filter.m1.b2_2.bn3.bias', 'module.image_filter.m1.b2_2.bn4.weight', 'module.image_filter.m1.b2_2.bn4.bias', 'module.image_filter.m1.b1_1.conv1.weight', 'module.image_filter.m1.b1_1.conv2.weight', 'module.image_filter.m1.b1_1.conv3.weight', 'module.image_filter.m1.b1_1.bn1.weight', 'module.image_filter.m1.b1_1.bn1.bias', 'module.image_filter.m1.b1_1.bn2.weight', 'module.image_filter.m1.b1_1.bn2.bias', 'module.image_filter.m1.b1_1.bn3.weight', 
'module.image_filter.m1.b1_1.bn3.bias', 'module.image_filter.m1.b1_1.bn4.weight', 'module.image_filter.m1.b1_1.bn4.bias', 'module.image_filter.m1.b2_1.conv1.weight', 'module.image_filter.m1.b2_1.conv2.weight', 'module.image_filter.m1.b2_1.conv3.weight', 'module.image_filter.m1.b2_1.bn1.weight', 'module.image_filter.m1.b2_1.bn1.bias', 'module.image_filter.m1.b2_1.bn2.weight', 'module.image_filter.m1.b2_1.bn2.bias', 'module.image_filter.m1.b2_1.bn3.weight', 'module.image_filter.m1.b2_1.bn3.bias', 'module.image_filter.m1.b2_1.bn4.weight', 'module.image_filter.m1.b2_1.bn4.bias', 'module.image_filter.m1.b2_plus_1.conv1.weight', 'module.image_filter.m1.b2_plus_1.conv2.weight', 'module.image_filter.m1.b2_plus_1.conv3.weight', 'module.image_filter.m1.b2_plus_1.bn1.weight', 'module.image_filter.m1.b2_plus_1.bn1.bias', 'module.image_filter.m1.b2_plus_1.bn2.weight', 'module.image_filter.m1.b2_plus_1.bn2.bias', 'module.image_filter.m1.b2_plus_1.bn3.weight', 'module.image_filter.m1.b2_plus_1.bn3.bias', 'module.image_filter.m1.b2_plus_1.bn4.weight', 'module.image_filter.m1.b2_plus_1.bn4.bias', 'module.image_filter.m1.b3_1.conv1.weight', 'module.image_filter.m1.b3_1.conv2.weight', 'module.image_filter.m1.b3_1.conv3.weight', 'module.image_filter.m1.b3_1.bn1.weight', 'module.image_filter.m1.b3_1.bn1.bias', 'module.image_filter.m1.b3_1.bn2.weight', 'module.image_filter.m1.b3_1.bn2.bias', 'module.image_filter.m1.b3_1.bn3.weight', 'module.image_filter.m1.b3_1.bn3.bias', 'module.image_filter.m1.b3_1.bn4.weight', 'module.image_filter.m1.b3_1.bn4.bias', 'module.image_filter.m1.b3_2.conv1.weight', 'module.image_filter.m1.b3_2.conv2.weight', 'module.image_filter.m1.b3_2.conv3.weight', 'module.image_filter.m1.b3_2.bn1.weight', 'module.image_filter.m1.b3_2.bn1.bias', 'module.image_filter.m1.b3_2.bn2.weight', 'module.image_filter.m1.b3_2.bn2.bias', 'module.image_filter.m1.b3_2.bn3.weight', 'module.image_filter.m1.b3_2.bn3.bias', 'module.image_filter.m1.b3_2.bn4.weight', 'module.image_filter.m1.b3_2.bn4.bias', 'module.image_filter.top_m_1.conv1.weight', 'module.image_filter.top_m_1.conv2.weight', 'module.image_filter.top_m_1.conv3.weight', 'module.image_filter.top_m_1.bn1.weight', 'module.image_filter.top_m_1.bn1.bias', 'module.image_filter.top_m_1.bn2.weight', 'module.image_filter.top_m_1.bn2.bias', 'module.image_filter.top_m_1.bn3.weight', 'module.image_filter.top_m_1.bn3.bias', 'module.image_filter.top_m_1.bn4.weight', 'module.image_filter.top_m_1.bn4.bias', 'module.image_filter.conv_last1.weight', 'module.image_filter.conv_last1.bias', 'module.image_filter.bn_end1.weight', 'module.image_filter.bn_end1.bias', 'module.image_filter.l1.weight', 'module.image_filter.l1.bias', 'module.image_filter.bl1.weight', 'module.image_filter.bl1.bias', 'module.image_filter.al1.weight', 'module.image_filter.al1.bias', 'module.image_filter.m2.b1_2.conv1.weight', 'module.image_filter.m2.b1_2.conv2.weight', 'module.image_filter.m2.b1_2.conv3.weight', 'module.image_filter.m2.b1_2.bn1.weight', 'module.image_filter.m2.b1_2.bn1.bias', 'module.image_filter.m2.b1_2.bn2.weight', 'module.image_filter.m2.b1_2.bn2.bias', 'module.image_filter.m2.b1_2.bn3.weight', 'module.image_filter.m2.b1_2.bn3.bias', 'module.image_filter.m2.b1_2.bn4.weight', 'module.image_filter.m2.b1_2.bn4.bias', 'module.image_filter.m2.b2_2.conv1.weight', 'module.image_filter.m2.b2_2.conv2.weight', 'module.image_filter.m2.b2_2.conv3.weight', 'module.image_filter.m2.b2_2.bn1.weight', 'module.image_filter.m2.b2_2.bn1.bias', 'module.image_filter.m2.b2_2.bn2.weight', 
'module.image_filter.m2.b2_2.bn2.bias', 'module.image_filter.m2.b2_2.bn3.weight', 'module.image_filter.m2.b2_2.bn3.bias', 'module.image_filter.m2.b2_2.bn4.weight', 'module.image_filter.m2.b2_2.bn4.bias', 'module.image_filter.m2.b1_1.conv1.weight', 'module.image_filter.m2.b1_1.conv2.weight', 'module.image_filter.m2.b1_1.conv3.weight', 'module.image_filter.m2.b1_1.bn1.weight', 'module.image_filter.m2.b1_1.bn1.bias', 'module.image_filter.m2.b1_1.bn2.weight', 'module.image_filter.m2.b1_1.bn2.bias', 'module.image_filter.m2.b1_1.bn3.weight', 'module.image_filter.m2.b1_1.bn3.bias', 'module.image_filter.m2.b1_1.bn4.weight', 'module.image_filter.m2.b1_1.bn4.bias', 'module.image_filter.m2.b2_1.conv1.weight', 'module.image_filter.m2.b2_1.conv2.weight', 'module.image_filter.m2.b2_1.conv3.weight', 'module.image_filter.m2.b2_1.bn1.weight', 'module.image_filter.m2.b2_1.bn1.bias', 'module.image_filter.m2.b2_1.bn2.weight', 'module.image_filter.m2.b2_1.bn2.bias', 'module.image_filter.m2.b2_1.bn3.weight', 'module.image_filter.m2.b2_1.bn3.bias', 'module.image_filter.m2.b2_1.bn4.weight', 'module.image_filter.m2.b2_1.bn4.bias', 'module.image_filter.m2.b2_plus_1.conv1.weight', 'module.image_filter.m2.b2_plus_1.conv2.weight', 'module.image_filter.m2.b2_plus_1.conv3.weight', 'module.image_filter.m2.b2_plus_1.bn1.weight', 'module.image_filter.m2.b2_plus_1.bn1.bias', 'module.image_filter.m2.b2_plus_1.bn2.weight', 'module.image_filter.m2.b2_plus_1.bn2.bias', 'module.image_filter.m2.b2_plus_1.bn3.weight', 'module.image_filter.m2.b2_plus_1.bn3.bias', 'module.image_filter.m2.b2_plus_1.bn4.weight', 'module.image_filter.m2.b2_plus_1.bn4.bias', 'module.image_filter.m2.b3_1.conv1.weight', 'module.image_filter.m2.b3_1.conv2.weight', 'module.image_filter.m2.b3_1.conv3.weight', 'module.image_filter.m2.b3_1.bn1.weight', 'module.image_filter.m2.b3_1.bn1.bias', 'module.image_filter.m2.b3_1.bn2.weight', 'module.image_filter.m2.b3_1.bn2.bias', 'module.image_filter.m2.b3_1.bn3.weight', 'module.image_filter.m2.b3_1.bn3.bias', 'module.image_filter.m2.b3_1.bn4.weight', 'module.image_filter.m2.b3_1.bn4.bias', 'module.image_filter.m2.b3_2.conv1.weight', 'module.image_filter.m2.b3_2.conv2.weight', 'module.image_filter.m2.b3_2.conv3.weight', 'module.image_filter.m2.b3_2.bn1.weight', 'module.image_filter.m2.b3_2.bn1.bias', 'module.image_filter.m2.b3_2.bn2.weight', 'module.image_filter.m2.b3_2.bn2.bias', 'module.image_filter.m2.b3_2.bn3.weight', 'module.image_filter.m2.b3_2.bn3.bias', 'module.image_filter.m2.b3_2.bn4.weight', 'module.image_filter.m2.b3_2.bn4.bias', 'module.image_filter.top_m_2.conv1.weight', 'module.image_filter.top_m_2.conv2.weight', 'module.image_filter.top_m_2.conv3.weight', 'module.image_filter.top_m_2.bn1.weight', 'module.image_filter.top_m_2.bn1.bias', 'module.image_filter.top_m_2.bn2.weight', 'module.image_filter.top_m_2.bn2.bias', 'module.image_filter.top_m_2.bn3.weight', 'module.image_filter.top_m_2.bn3.bias', 'module.image_filter.top_m_2.bn4.weight', 'module.image_filter.top_m_2.bn4.bias', 'module.image_filter.conv_last2.weight', 'module.image_filter.conv_last2.bias', 'module.image_filter.bn_end2.weight', 'module.image_filter.bn_end2.bias', 'module.image_filter.l2.weight', 'module.image_filter.l2.bias', 'module.image_filter.bl2.weight', 'module.image_filter.bl2.bias', 'module.image_filter.al2.weight', 'module.image_filter.al2.bias', 'module.image_filter.m3.b1_2.conv1.weight', 'module.image_filter.m3.b1_2.conv2.weight', 'module.image_filter.m3.b1_2.conv3.weight', 'module.image_filter.m3.b1_2.bn1.weight', 
'module.image_filter.m3.b1_2.bn1.bias', 'module.image_filter.m3.b1_2.bn2.weight', 'module.image_filter.m3.b1_2.bn2.bias', 'module.image_filter.m3.b1_2.bn3.weight', 'module.image_filter.m3.b1_2.bn3.bias', 'module.image_filter.m3.b1_2.bn4.weight', 'module.image_filter.m3.b1_2.bn4.bias', 'module.image_filter.m3.b2_2.conv1.weight', 'module.image_filter.m3.b2_2.conv2.weight', 'module.image_filter.m3.b2_2.conv3.weight', 'module.image_filter.m3.b2_2.bn1.weight', 'module.image_filter.m3.b2_2.bn1.bias', 'module.image_filter.m3.b2_2.bn2.weight', 'module.image_filter.m3.b2_2.bn2.bias', 'module.image_filter.m3.b2_2.bn3.weight', 'module.image_filter.m3.b2_2.bn3.bias', 'module.image_filter.m3.b2_2.bn4.weight', 'module.image_filter.m3.b2_2.bn4.bias', 'module.image_filter.m3.b1_1.conv1.weight', 'module.image_filter.m3.b1_1.conv2.weight', 'module.image_filter.m3.b1_1.conv3.weight', 'module.image_filter.m3.b1_1.bn1.weight', 'module.image_filter.m3.b1_1.bn1.bias', 'module.image_filter.m3.b1_1.bn2.weight', 'module.image_filter.m3.b1_1.bn2.bias', 'module.image_filter.m3.b1_1.bn3.weight', 'module.image_filter.m3.b1_1.bn3.bias', 'module.image_filter.m3.b1_1.bn4.weight', 'module.image_filter.m3.b1_1.bn4.bias', 'module.image_filter.m3.b2_1.conv1.weight', 'module.image_filter.m3.b2_1.conv2.weight', 'module.image_filter.m3.b2_1.conv3.weight', 'module.image_filter.m3.b2_1.bn1.weight', 'module.image_filter.m3.b2_1.bn1.bias', 'module.image_filter.m3.b2_1.bn2.weight', 'module.image_filter.m3.b2_1.bn2.bias', 'module.image_filter.m3.b2_1.bn3.weight', 'module.image_filter.m3.b2_1.bn3.bias', 'module.image_filter.m3.b2_1.bn4.weight', 'module.image_filter.m3.b2_1.bn4.bias', 'module.image_filter.m3.b2_plus_1.conv1.weight', 'module.image_filter.m3.b2_plus_1.conv2.weight', 'module.image_filter.m3.b2_plus_1.conv3.weight', 'module.image_filter.m3.b2_plus_1.bn1.weight', 'module.image_filter.m3.b2_plus_1.bn1.bias', 'module.image_filter.m3.b2_plus_1.bn2.weight', 'module.image_filter.m3.b2_plus_1.bn2.bias', 'module.image_filter.m3.b2_plus_1.bn3.weight', 'module.image_filter.m3.b2_plus_1.bn3.bias', 'module.image_filter.m3.b2_plus_1.bn4.weight', 'module.image_filter.m3.b2_plus_1.bn4.bias', 'module.image_filter.m3.b3_1.conv1.weight', 'module.image_filter.m3.b3_1.conv2.weight', 'module.image_filter.m3.b3_1.conv3.weight', 'module.image_filter.m3.b3_1.bn1.weight', 'module.image_filter.m3.b3_1.bn1.bias', 'module.image_filter.m3.b3_1.bn2.weight', 'module.image_filter.m3.b3_1.bn2.bias', 'module.image_filter.m3.b3_1.bn3.weight', 'module.image_filter.m3.b3_1.bn3.bias', 'module.image_filter.m3.b3_1.bn4.weight', 'module.image_filter.m3.b3_1.bn4.bias', 'module.image_filter.m3.b3_2.conv1.weight', 'module.image_filter.m3.b3_2.conv2.weight', 'module.image_filter.m3.b3_2.conv3.weight', 'module.image_filter.m3.b3_2.bn1.weight', 'module.image_filter.m3.b3_2.bn1.bias', 'module.image_filter.m3.b3_2.bn2.weight', 'module.image_filter.m3.b3_2.bn2.bias', 'module.image_filter.m3.b3_2.bn3.weight', 'module.image_filter.m3.b3_2.bn3.bias', 'module.image_filter.m3.b3_2.bn4.weight', 'module.image_filter.m3.b3_2.bn4.bias', 'module.image_filter.top_m_3.conv1.weight', 'module.image_filter.top_m_3.conv2.weight', 'module.image_filter.top_m_3.conv3.weight', 'module.image_filter.top_m_3.bn1.weight', 'module.image_filter.top_m_3.bn1.bias', 'module.image_filter.top_m_3.bn2.weight', 'module.image_filter.top_m_3.bn2.bias', 'module.image_filter.top_m_3.bn3.weight', 'module.image_filter.top_m_3.bn3.bias', 'module.image_filter.top_m_3.bn4.weight', 
'module.image_filter.top_m_3.bn4.bias', 'module.image_filter.conv_last3.weight', 'module.image_filter.conv_last3.bias', 'module.image_filter.bn_end3.weight', 'module.image_filter.bn_end3.bias', 'module.image_filter.l3.weight', 'module.image_filter.l3.bias', 'module.surface_classifier.conv0.weight', 'module.surface_classifier.conv0.bias', 'module.surface_classifier.conv1.weight', 'module.surface_classifier.conv1.bias', 'module.surface_classifier.conv2.weight', 'module.surface_classifier.conv2.bias', 'module.surface_classifier.conv3.weight', 'module.surface_classifier.conv3.bias', 'module.surface_classifier.conv4.weight', 'module.surface_classifier.conv4.bias', 'module.global_encoder.conv1.weight', 'module.global_encoder.bn1.weight', 'module.global_encoder.bn1.bias', 'module.global_encoder.bn1.running_mean', 'module.global_encoder.bn1.running_var', 'module.global_encoder.bn1.num_batches_tracked', 'module.global_encoder.layer1.0.conv1.weight', 'module.global_encoder.layer1.0.conv2.weight', 'module.global_encoder.layer1.1.conv1.weight', 'module.global_encoder.layer1.1.conv2.weight', 'module.global_encoder.layer2.0.conv1.weight', 'module.global_encoder.layer2.0.conv2.weight', 'module.global_encoder.layer2.0.downsample.0.weight', 'module.global_encoder.layer2.0.downsample.1.weight', 'module.global_encoder.layer2.0.downsample.1.bias', 'module.global_encoder.layer2.0.downsample.1.running_mean', 'module.global_encoder.layer2.0.downsample.1.running_var', 'module.global_encoder.layer2.0.downsample.1.num_batches_tracked', 'module.global_encoder.layer2.1.conv1.weight', 'module.global_encoder.layer2.1.conv2.weight', 'module.global_encoder.layer3.0.conv1.weight', 'module.global_encoder.layer3.0.conv2.weight', 'module.global_encoder.layer3.0.downsample.0.weight', 'module.global_encoder.layer3.0.downsample.1.weight', 'module.global_encoder.layer3.0.downsample.1.bias', 'module.global_encoder.layer3.0.downsample.1.running_mean', 'module.global_encoder.layer3.0.downsample.1.running_var', 'module.global_encoder.layer3.0.downsample.1.num_batches_tracked', 'module.global_encoder.layer3.1.conv1.weight', 'module.global_encoder.layer3.1.conv2.weight', 'module.global_encoder.layer4.0.conv1.weight', 'module.global_encoder.layer4.0.conv2.weight', 'module.global_encoder.layer4.0.downsample.0.weight', 'module.global_encoder.layer4.0.downsample.1.weight', 'module.global_encoder.layer4.0.downsample.1.bias', 'module.global_encoder.layer4.0.downsample.1.running_mean', 'module.global_encoder.layer4.0.downsample.1.running_var', 'module.global_encoder.layer4.0.downsample.1.num_batches_tracked', 'module.global_encoder.layer4.1.conv1.weight', 'module.global_encoder.layer4.1.conv2.weight', 'module.global_encoder.fc.weight', 'module.global_encoder.fc.bias'])
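
One detail worth noting in this dump: every key carries a "module." prefix, which PyTorch adds when a model is saved from inside torch.nn.DataParallel, while the "missing" keys in the traceback above have no such prefix. A minimal sketch of stripping the prefix before loading into an unwrapped model (again assuming `net` is a constructed BGPIFu_Net; the repo's own loading code may already handle this):

```python
import torch
from collections import OrderedDict

ckpt = torch.load("model_best_bg.pth", map_location="cpu")

# Drop the "module." prefix that torch.nn.DataParallel prepends on save,
# so the keys line up with an unwrapped model.
state_dict = OrderedDict(
    (k[len("module."):] if k.startswith("module.") else k, v)
    for k, v in ckpt.items()
)
net.load_state_dict(state_dict)

# Alternative: wrap the model first so the prefixed keys match as-is.
# net = torch.nn.DataParallel(net)
# net.load_state_dict(ckpt)
```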

@chethanchinder

chethanchinder commented Feb 13, 2024

@HaolinLiu97 Where can I find model_best_bg.pth? I could only find a model_best.pth file at the link provided under "best bg model". I renamed that file and tried to load it, and I get the same error as discussed above.
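
A quick way to check which model a downloaded .pth actually belongs to, before renaming anything, is to peek at its keys. Judging from the key dump above, the background checkpoint contains module.global_encoder.fc.weight, while the "unexpected" keys in the original traceback (mask_decoder.*, post_op_module.*) would indicate the object model. A sketch with an illustrative file name; the heuristic is based only on the dumps in this thread, not on any official check:

```python
import torch

ckpt = torch.load("model_best.pth", map_location="cpu")
keys = list(ckpt.keys())

# Show a few keys from each end of the state dict.
print(keys[:3], "...", keys[-3:])

# Key-based heuristics derived from this thread's dumps.
print("looks like the bg model:",
      any(k.endswith("global_encoder.fc.weight") for k in keys))
print("looks like the object model:",
      any("mask_decoder" in k or "post_op_module" in k for k in keys))
```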

@HaolinLiu97
Collaborator

@chethanchinder

@HaolinLiu97 Thank you for replying. I cannot see model_best_bg.pth at the link you sent. Could you upload model_best_bg.pth?

@HaolinLiu97
Collaborator

I will update it within a few hours. This is the correct link, including all the data: https://cuhko365-my.sharepoint.com/:f:/g/personal/115010192_link_cuhk_edu_cn/Eg99g4P1VMVJoZ5fz3lmDkABvj7Gc7yCjq-qBuYNqWjl2w?e=72lix4
