V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map
(The depth map is from the MSRA Hand Gesture Dataset)
- inference
- ground-truth
To run examples other than the sample image, download the MSRA Hand Gesture Dataset
and extract it into the msra_dataset directory as shown below.
v2v-posenet
└── msra_dataset
    ├── P0
    ...
    ├── P3
    │   ├── 1
    │   │   ├── 000000_depth.bin
    │   │   ├── 000001_depth.bin
    │   │   ...
    │   │   └── joint.txt
    │   ├── 2
    │   ...
    ...
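If you want to sanity-check the extracted files, the following minimal sketch reads one *_depth.bin into a full-size depth map. It assumes the standard MSRA binary layout (a six-int32 header of image width, image height, and bounding box, followed by float32 depth values for the box region only); load_msra_depth is a hypothetical helper, not part of v2v-posenet.py.

import numpy as np

def load_msra_depth(bin_path):
    # Assumed MSRA layout: int32 header (img_width, img_height,
    # bbox_left, bbox_top, bbox_right, bbox_bottom), then float32
    # depth values covering only the bounding-box region.
    with open(bin_path, "rb") as f:
        width, height, left, top, right, bottom = np.fromfile(f, dtype=np.int32, count=6)
        patch = np.fromfile(f, dtype=np.float32)
    # Paste the cropped patch back into a zero-filled full-size map.
    depth = np.zeros((height, width), dtype=np.float32)
    depth[top:bottom, left:right] = patch.reshape(bottom - top, right - left)
    return depth

depth = load_msra_depth("msra_dataset/P0/1/000000_depth.bin")
print(depth.shape)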
The script also uses precomputed centers, which are obtained by training the hand center estimation network from DeepPrior++.
- MSRA Hand Pose Dataset [center] [estimation]
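For reference, a center file can be parsed with a short sketch like the one below. It assumes the format of the original V2V-PoseNet release, where each line holds either three floats (the x y z reference point) or the word 'invalid'; load_centers and the NaN convention for invalid frames are assumptions for illustration.

import numpy as np

def load_centers(center_path):
    # Assumed format: one line per frame, either "x y z" floats
    # or the word "invalid" when center estimation failed.
    centers = []
    with open(center_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                centers.append([float(v) for v in parts])
            else:
                centers.append([np.nan] * 3)  # mark invalid frames with NaN
    return np.asarray(centers, dtype=np.float32)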
The ONNX and prototxt files are downloaded automatically on the first run. An Internet connection is required during the download.
For the sample image,
$ python3 v2v-posenet.py
To specify an input depth map, pass its path with the --input option.
$ python3 v2v-posenet.py --input DEPTH_MAP
To draw the ground-truth keypoints, specify the --gt option.
$ python3 v2v-posenet.py --gt
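The ground-truth keypoints come from each sequence's joint.txt. As a hedged sketch, assuming the usual MSRA layout (a first line with the frame count, then one line of 63 floats per frame, i.e. 21 joints x 3 coordinates), they can be loaded like this; load_msra_joints is a hypothetical helper.

import numpy as np

def load_msra_joints(joint_path):
    # Assumed layout: first line = number of frames,
    # each following line = 63 floats (21 joints x xyz).
    with open(joint_path) as f:
        n_frames = int(f.readline())
        joints = np.loadtxt(f, dtype=np.float32)
    return joints.reshape(n_frames, 21, 3)

joints = load_msra_joints("msra_dataset/P0/1/joint.txt")
print(joints.shape)  # (n_frames, 21, 3)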
PyTorch
ONNX opset = 11
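For reference, an ONNX file with opset 11 can be produced from a PyTorch model roughly as below. V2VStandIn is a placeholder module (substitute the real V2V-PoseNet definition before exporting for real use); the single-channel 88x88x88 voxel input and 21 output joints follow the V2V-PoseNet paper's MSRA setting.

import torch
import torch.nn as nn

class V2VStandIn(nn.Module):
    # Hypothetical stand-in for the real V2V-PoseNet; it only keeps
    # the input/output shapes plausible for the export call.
    def __init__(self, joints=21):
        super().__init__()
        self.net = nn.Conv3d(1, joints, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)

model = V2VStandIn().eval()
dummy = torch.randn(1, 1, 88, 88, 88)  # (batch, channel, D, H, W) voxel grid
torch.onnx.export(
    model,
    dummy,
    "v2v-posenet.onnx",
    opset_version=11,  # matches the opset noted above
    input_names=["voxel_grid"],
    output_names=["heatmaps"],
)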