[Docs] Train spatio temporal stream on custom coco dataset #2883

gguzzy opened this issue Oct 23, 2024 · 0 comments

The doc issue

I would like to know how to use an annotated dataset in COCO format, containing specific keypoint annotations, to train a custom 2s-AGCN on the bone-motion and/or joint-motion streams. What I do not understand is why we should use train_test_split to create the train, test and val datasets, and how I would need to change this config:
```python
dataset_type = 'CocoDataset'
ann_file_train = 'train.json'
ann_file_val = 'test.json'
ann_file_test = 'val.json'
classes = ('Drunk', 'Not drunk')

train_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['bm']),
    dict(type='UniformSampleFrames', clip_len=100),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['bm']),
    dict(type='UniformSampleFrames', clip_len=100, num_clips=1, test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['bm']),
    dict(type='UniformSampleFrames', clip_len=100, num_clips=10, test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]

train_dataloader = dict(
    classes=classes,
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type=dataset_type,
            ann_file=ann_file_train,
            pipeline=train_pipeline,
            classes=classes,
            # split='xsub_train'
```
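For reference, this is roughly how I imagine train_test_split being used to produce the three splits from a single annotation list (my own sketch, not taken from the mmaction2 docs; split names like `custom_train` are placeholders):

```python
# Sketch: split one list of skeleton annotations into train/val/test name
# lists with scikit-learn.  Assumes each annotation dict carries a unique
# 'frame_dir' identifier and an integer 'label'.
from sklearn.model_selection import train_test_split


def make_splits(annotations, val_size=0.1, test_size=0.1, seed=0):
    idx = list(range(len(annotations)))
    labels = [a['label'] for a in annotations]
    # Carve off the test set first, then split the remainder into train/val.
    # Stratifying on the label keeps both classes present in every split.
    trainval_idx, test_idx = train_test_split(
        idx, test_size=test_size, stratify=labels, random_state=seed)
    train_idx, val_idx = train_test_split(
        trainval_idx,
        test_size=val_size / (1 - test_size),
        stratify=[labels[i] for i in trainval_idx],
        random_state=seed)
    return {
        'custom_train': [annotations[i]['frame_dir'] for i in train_idx],
        'custom_val': [annotations[i]['frame_dir'] for i in val_idx],
        'custom_test': [annotations[i]['frame_dir'] for i in test_idx],
    }
```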

This config does not work, because mmaction2 cannot find the dataset class:

1. `CocoDataset` raises: not in the mmaction::dataset registry. Please check whether the value of `CocoDataset` is correct or it was registered as expected.
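(If a custom class really were required, my understanding is that the error above just means the class was never registered and imported; a minimal sketch of that pattern, with hypothetical module and class names, would be:)

```python
# my_coco_pose_dataset.py -- hypothetical module; only needed if a custom
# dataset class is truly unavoidable.  Registering the class is what makes
# the type string in the config resolvable by the mmaction registry.
from mmaction.registry import DATASETS
from mmaction.datasets import BaseActionDataset


@DATASETS.register_module()
class MyCocoPoseDataset(BaseActionDataset):
    """Hypothetical dataset that would parse COCO keypoint JSON directly."""

    def load_data_list(self):
        # Parse the COCO JSON here and return a list of dicts with the keys
        # the skeleton pipeline expects (keypoint, keypoint_score, label, ...).
        raise NotImplementedError
```

and the config would need a `custom_imports = dict(imports=['my_coco_pose_dataset'], allow_failed_imports=False)` line so the module is actually imported.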

Is there an available script to auto-convert my annotations, or is there something else I should do to train on my custom dataset? I do not want to lose my annotations.
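For the record, this is the kind of conversion I have been sketching myself: a rough, unofficial script that maps a COCO keypoint JSON into the pickle layout that mmaction2's PoseDataset reads. It assumes each image file_name encodes `<video_id>/<frame>.jpg`, one annotated person per frame, 17 COCO keypoints, and a per-video label mapping supplied separately:

```python
# Rough sketch (not an official mmaction2 script): convert COCO keypoint
# JSON to the {'split': ..., 'annotations': ...} pickle used by PoseDataset.
import json
import pickle
from collections import defaultdict

import numpy as np


def coco_to_pose_pkl(coco_json, video_labels, out_pkl, split_name='custom_train'):
    """video_labels: dict mapping video_id -> int class label."""
    with open(coco_json) as f:
        coco = json.load(f)

    images = {img['id']: img for img in coco['images']}
    per_video = defaultdict(list)  # video_id -> [(frame_name, image, kpts)]
    for ann in coco['annotations']:
        img = images[ann['image_id']]
        video_id, frame_name = img['file_name'].split('/', 1)
        kpts = np.array(ann['keypoints'], dtype=np.float32).reshape(-1, 3)
        per_video[video_id].append((frame_name, img, kpts))

    annotations = []
    for video_id, frames in per_video.items():
        frames.sort(key=lambda item: item[0])  # order frames by file name
        num_frames = len(frames)
        keypoint = np.zeros((1, num_frames, 17, 2), dtype=np.float32)
        score = np.zeros((1, num_frames, 17), dtype=np.float32)
        for t, (_, img, kpts) in enumerate(frames):
            keypoint[0, t] = kpts[:, :2]
            score[0, t] = kpts[:, 2]  # reuse COCO visibility flag as score
        h, w = frames[0][1]['height'], frames[0][1]['width']
        annotations.append(dict(
            frame_dir=video_id,
            total_frames=num_frames,
            img_shape=(h, w),
            original_shape=(h, w),
            label=video_labels[video_id],
            keypoint=keypoint,
            keypoint_score=score))

    data = dict(
        split={split_name: [a['frame_dir'] for a in annotations]},
        annotations=annotations)
    with open(out_pkl, 'wb') as f:
        pickle.dump(data, f)
```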

Thanks

Suggest a potential alternative/fix

Please write a real notebook showing how to train on a real custom dataset. Much of the documentation on the website is obsolete, ambiguous, or simply does not work; many key pieces of the training workflow are missing; and it makes no sense to have to write a custom dataset class for annotations in a format as well known as COCO.
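If PoseDataset (the skeleton dataset that already ships with mmaction2) is the intended entry point, I would expect the dataloader section to end up looking roughly like this once the annotations are in the skeleton pickle format (the file name and split name below are placeholders of mine):

```python
dataset_type = 'PoseDataset'
ann_file = 'drunk_2d.pkl'  # hypothetical pickle produced by a converter like the one above

train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type=dataset_type,
            ann_file=ann_file,
            pipeline=train_pipeline,
            split='custom_train')))
```

The two classes ('Drunk', 'Not drunk') would then be expressed as num_classes=2 in the model head rather than as a classes argument on the dataloader, as far as I can tell.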

Thanks.
