Replies: 8 comments 5 replies
-
This is the first result given by a Google search; maybe it can solve your problem.
-
Please note that the annotation files for stgcn_3d and stgcn_2d are different.
-
@cir7 Could you please answer this question so that I can proceed further?
-
@cir7 I am using the pre-trained model for 3D action recognition. The original code uses 25 joints, but my custom dataset uses only 17, so I modified the code (in graph.py) for 17 joints. Now when I try to load the pre-trained model and train on my dataset, I get a size mismatch, which is expected. How can I use your pre-trained model for training on my custom dataset?

```
Loads checkpoint by local backend from path: stgcn_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221129-850308e1.pth
size mismatch for backbone.data_bn.weight: copying a param with shape torch.Size([75]) from checkpoint, the shape in current model is torch.Size([51]).
```

The training command I use:

```shell
python tools/train.py configs/skeleton/stgcn/stgcn_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_omnilab.py \
    --work-dir ./work_dirs/stgcn_80e_ntu60_xsub_keypoint_3d_25_04_23 \
    --load_from stgcn_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221129-850308e1.pth \
    --seed 0 --deterministic --cfg-options gpu_ids=[0,1]
```
-
In my opinion, you can't directly use a pre-trained checkpoint based on 25 joints for your 17-joint data.
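That said, one common workaround is to transfer only the parameters whose shapes still match and leave the joint-count-dependent layers (like `backbone.data_bn`) randomly initialized. A minimal sketch, not MMAction2's own API (`load_partial` is a hypothetical helper):

```python
import torch

def load_partial(model, ckpt_path):
    """Load only the shape-compatible parameters from a checkpoint.

    Layers tied to the joint count (e.g. data_bn) are skipped and keep
    their random init, so fine-tuning on the new dataset is still needed.
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)
    own = model.state_dict()
    # keep only params that exist in the current model with the same shape
    kept = {k: v for k, v in state.items()
            if k in own and v.shape == own[k].shape}
    skipped = [k for k in state if k not in kept]
    model.load_state_dict(kept, strict=False)
    return skipped
```

The returned `skipped` list lets you verify that only the expected joint-dependent parameters (data_bn and the first graph layer) were dropped.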
-
Hello Team,
Hope you are doing well.
I am using STGCN for 2D and 3D action recognition.
For 2D action recognition I trained on my custom dataset: I converted it to the pkl file format and it ran successfully.
Now I am doing 3D action recognition, so I converted my custom dataset to pkl format following ntu60_val. Does the pickle file for 3D actions have to be in the same format as for 2D? If the pickle file is different for 3D action recognition, where can I find a reference 3D pickle file? I tried to find one but was unable to.
The config I used is /mmaction2/configs/skeleton/stgcn/stgcn_80e_ntu60_xsub_keypoint_3d.py.
This is how my pickle file (converted following ntu60_val) looks, which I pass to the config:
```
{'frame_dir': 'xxx',
 'label': 2,
 'img_shape': (1200, 1200),
 'original_shape': (1200, 1200),
 'total_frames': 60,
 'keypoint': array([[[[-0.5752752 ,  1.5449048 , -0.35333407],
                      [-0.45848957,  2.122168  , -0.7783565 ],
                      [-0.56915903,  2.5114708 , -0.58006483],
                      ...,
                      [-1.0122216 , -0.09795046,  0.48847854],
                      [-0.14722271,  1.5622706 , -0.28381488],
                      [-0.14523879,  0.97209   , -0.12902533]],
 'keypoint_score': array([[[1., 1., 1., ..., 1., 1., 1.],
                           [1., 1., 1., ..., 1., 1., 1.],
                           [1., 1., 1., ..., 1., 1., 1.],
                           ...,
                           [1., 1., 1., ..., 1., 1., 1.],
                           [1., 1., 1., ..., 1., 1., 1.],
                           [1., 1., 1., ..., 1., 1., 1.]]], dtype=float16)},
```
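For comparison, here is a minimal sketch of assembling and dumping one annotation dict in this layout. The clip name, output filename, and joint count are placeholders, and whether a 3D annotation also needs `keypoint_score` depends on the dataset pipeline in your config, so please verify against it:

```python
import pickle
import numpy as np

num_frames, num_joints = 60, 17

anno = {
    'frame_dir': 'my_clip_001',           # placeholder clip identifier
    'label': 2,
    'img_shape': (1200, 1200),
    'original_shape': (1200, 1200),
    'total_frames': num_frames,
    # 3D coordinates, shape [num_person, num_frames, num_joints, 3]
    'keypoint': np.zeros((1, num_frames, num_joints, 3), dtype=np.float32),
}

# the annotation file is a list of such dicts, one per clip
with open('custom_3d_train.pkl', 'wb') as f:
    pickle.dump([anno], f)
```

Loading the file back and checking `data[0]['keypoint'].shape` is a quick way to confirm the joint count and channel count before training.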
Traceback:

```
Traceback (most recent call last):
  File "/home/yukti/mmaction2/tools/train.py", line 222, in <module>
    main()
  File "/home/yukti/mmaction2/tools/train.py", line 210, in main
    train_model(
  File "/home/yukti/mmaction2/mmaction/apis/train.py", line 232, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs, **runner_kwargs)
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/mmcv/runner/epoch_based_runner.py", line 53, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/mmcv/runner/epoch_based_runner.py", line 31, in run_iter
    outputs = self.model.train_step(data_batch, self.optimizer,
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/mmcv/parallel/data_parallel.py", line 77, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/home/yukti/mmaction2/mmaction/models/skeleton_gcn/base.py", line 155, in train_step
    losses = self(skeletons, label, return_loss=True)
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yukti/mmaction2/mmaction/models/skeleton_gcn/base.py", line 106, in forward
    return self.forward_train(keypoint, label, **kwargs)
  File "/home/yukti/mmaction2/mmaction/models/skeleton_gcn/skeletongcn.py", line 16, in forward_train
    x = self.extract_feat(skeletons)
  File "/home/yukti/mmaction2/mmaction/models/skeleton_gcn/base.py", line 121, in extract_feat
    x = self.backbone(skeletons)
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yukti/mmaction2/mmaction/models/backbones/stgcn.py", line 274, in forward
    x = self.data_bn(x)
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/torch/nn/modules/batchnorm.py", line 168, in forward
    return F.batch_norm(
  File "/home/yukti/miniconda3/envs/demo/lib/python3.10/site-packages/torch/nn/functional.py", line 2438, in batch_norm
    return torch.batch_norm(
RuntimeError: running_mean should contain 68 elements not 75
```
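As a sanity check on the numbers in this error (my own decomposition, not stated anywhere in the logs): `data_bn` in STGCN normalizes `in_channels * num_joints` features per person, and the two counts factor in a suggestive way:

```python
# Hedged channel bookkeeping for the data_bn mismatch.
coords_3d = 3
pretrained_joints, custom_joints = 25, 17

# 75: what the unmodified model's data_bn was built for (25 joints, xyz)
assert pretrained_joints * coords_3d == 75

# The input batch carried 68 features; 68 = 17 * 4, which would fit
# 17 joints with FOUR channels each (e.g. x, y, z plus a score channel),
# rather than the 17 * 3 = 51 the modified config presumably intends.
assert custom_joints * 4 == 68
assert custom_joints * coords_3d == 51
```

If this reading is right, it may be worth checking whether the pipeline is concatenating `keypoint_score` onto the 3D coordinates, in addition to making sure `data_bn` is rebuilt for 17 joints.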
Can you please help me resolve this bug? I am doing this work for my Master's thesis and am stuck at this problem; there is a deadline, so I would like to proceed as soon as possible.
Awaiting your response.
Thanks & Regards,
Yukti dya