The doc issue
I wanted to know how we can actually use an annotated dataset in COCO format, containing specific keypoint annotations, to train a custom 2s-AGCN on the bone-motion and/or joint-motion streams. What I do not get is why we should use train_test_split to create the train, test and val datasets, and how I would change this config:
```python
dataset_type = 'CocoDataset'
ann_file_train = 'train.json'
ann_file_val = 'test.json'
ann_file_test = 'val.json'
classes = ('Drunk', 'Not drunk')

train_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['bm']),
    dict(type='UniformSampleFrames', clip_len=100),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['bm']),
    dict(type='UniformSampleFrames', clip_len=100, num_clips=1, test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['bm']),
    dict(type='UniformSampleFrames', clip_len=100, num_clips=10, test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]

train_dataloader = dict(
    classes=classes,
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type=dataset_type,
            ann_file=ann_file_train,
            pipeline=train_pipeline,
            classes=classes,
            # split='xsub_train'
        )))
```
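To be explicit about the split I mentioned above, this is roughly how I produce the three JSON files today. It is only a sketch of what I assume the docs mean by using train_test_split: the merged `all_annotations.json` file and its `label` field are just how my data happens to look, not anything from mmaction2.

```python
# Minimal sketch (assumed workflow): two chained train_test_split calls
# produce the train/val/test files referenced in the config above.
import json
from sklearn.model_selection import train_test_split

with open('all_annotations.json') as f:   # hypothetical merged annotation file
    samples = json.load(f)                # assumed: a list of per-clip records

labels = [s['label'] for s in samples]    # assumed per-clip class label

# 80% train, 10% val, 10% test, stratified on the class label
train, rest = train_test_split(
    samples, test_size=0.2, stratify=labels, random_state=0)
val, test = train_test_split(
    rest, test_size=0.5, stratify=[s['label'] for s in rest], random_state=0)

for name, split in (('train.json', train), ('val.json', val), ('test.json', test)):
    with open(name, 'w') as f:
        json.dump(split, f)
```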
This code is not working, since mmaction2 cannot find `CocoDataset` and raises:

`CocoDataset is not in the mmaction::dataset registry. Please check whether the value of CocoDataset is correct or it was registered as expected.`

Is there any available script to auto-convert my annotations, or should I do anything else to accomplish a custom training? I do not want to lose my annotations.
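What I expect is needed is a conversion step along the lines of the sketch below: packing the COCO keypoints into the pickle layout that the skeleton datasets seem to consume (keys as I understand them from the skeleton dataset docs). This is not an official script; the `group_annotations_by_clip` helper and the per-clip fields are assumptions about my own data.

```python
# Rough conversion sketch (not an official mmaction2 script):
# pack per-clip COCO keypoints into a skeleton-annotation pickle.
import json
import pickle
import numpy as np

with open('train.json') as f:
    coco = json.load(f)                       # standard COCO keypoint json

annotations = []
for clip in group_annotations_by_clip(coco):  # hypothetical helper for my data
    kpts = np.array(clip['keypoints'])        # assumed shape: (M, T, 17, 3)
    annotations.append(dict(
        frame_dir=clip['name'],               # unique clip identifier
        total_frames=kpts.shape[1],
        img_shape=(clip['height'], clip['width']),
        original_shape=(clip['height'], clip['width']),
        label=clip['label'],                  # 0 = 'Drunk', 1 = 'Not drunk'
        keypoint=kpts[..., :2].astype(np.float32),        # (M, T, V, 2)
        keypoint_score=kpts[..., 2].astype(np.float32)))  # (M, T, V)

data = dict(
    split=dict(train=[a['frame_dir'] for a in annotations]),
    annotations=annotations)

with open('custom_2d.pkl', 'wb') as f:
    pickle.dump(data, f)
```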
Thanks
Suggest a potential alternative/fix
Please write a real notebook showing how to train on a real custom dataset! All the documentation on the website is obsolete, ambiguous and not working, many key components of training are missing, and it makes no sense to have to create a custom dataset class for annotations as well known as COCO.
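As a concrete starting point, something like the snippet below is what I would expect the docs to show: the same pipelines as in my config above, but pointing a registered skeleton dataset (`PoseDataset`, as far as I can tell from the NTU skeleton configs) at the converted pickle instead of `CocoDataset`. The file and split names are placeholders from my sketch above, not values from any official config.

```python
# Hypothetical fix: use the registered PoseDataset with a converted .pkl
# instead of 'CocoDataset'. File/split names are placeholders.
dataset_type = 'PoseDataset'
ann_file = 'custom_2d.pkl'

train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type=dataset_type,
            ann_file=ann_file,
            split='train',          # matches the split key written into the pkl
            pipeline=train_pipeline)))
```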
Thanks.