Cliff (#93)
* Create README.MD

* code and data link

* Update README.MD

zhihaolee authored Aug 1, 2022
1 parent e31e1f7 commit 7956f49
Showing 38 changed files with 2,496 additions and 14 deletions.
21 changes: 21 additions & 0 deletions CLIFF/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (C) 2022. Huawei Technologies Co., Ltd. All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
77 changes: 63 additions & 14 deletions CLIFF/README.MD
@@ -1,22 +1,28 @@
# CLIFF [ECCV 2022 Oral]

<img src="assets/teaser.gif" width="100%">

*(This demo video is from the 3DPW test set, processed frame by frame without temporal smoothing.)*

## Introduction
This repo contains the demo code
and the pseudo-GT SMPL parameters used in training for the paper:

**CLIFF: Carrying Location Information in Full Frames into Human Pose and Shape Estimation**.

## Installation
```bash
conda create -n cliff python=3.10
conda activate cliff
pip install -r requirements.txt
```

1. Download [the SMPL models](https://smpl.is.tue.mpg.de) for rendering the reconstructed meshes
2. Download [pytorch-yolo-v3](https://github.com/ayooshkathuria/pytorch-yolo-v3), unzip it under the `lib` directory, and rename the folder to **"pytorch_yolo_v3_master"**
3. Download the pretrained checkpoints and some testing samples to run the demo [[Google Drive](https://drive.google.com/drive/folders/1EmSZwaDULhT9m1VvH7YOpCXwBWgYrgwP?usp=sharing)]
4. Download the CLIFF pseudo-GT for the in-the-wild datasets [[Google Drive](https://drive.google.com/drive/folders/1EmSZwaDULhT9m1VvH7YOpCXwBWgYrgwP?usp=sharing)]

Finally, arrange the data according to the directory structure below:
```
@@ -26,38 +32,81 @@ ${ROOT}
|-- mpii_cliffGT.npz
|-- data
    |-- ckpt
        |-- hr48-PA43.0_MJE69.0_MVE81.2_3dpw.pt
        |-- res50-PA45.7_MJE72.0_MVE85.3_3dpw.pt
        |-- yolov3.weights
    |-- smpl
        |-- SMPL_FEMALE.pkl
        |-- SMPL_MALE.pkl
        |-- SMPL_NEUTRAL.pkl
|-- lib
    |-- pytorch_yolo_v3_master
        |-- yolov3.weights
```


## Demo
```shell
sh scripts/run_demo.sh
```

<p float="left">
<img src="assets/im08036/im08036.png" width="24%">
<img src="assets/im08036/im08036_bbox.jpg" width="24%">
<img src="assets/im08036/im08036_front_view_cliff_hr48.jpg" width="24%">
<img src="assets/im08036/im08036_side_view_cliff_hr48.jpg" width="24%">
</p>

<p float="left">
<img src="assets/im00492/im00492.png" width="24%">
<img src="assets/im00492/im00492_bbox.jpg" width="24%">
<img src="assets/im00492/im00492_front_view_cliff_hr48.jpg" width="24%">
<img src="assets/im00492/im00492_side_view_cliff_hr48.jpg" width="24%">
</p>

The results will be saved in the same directory as the input, including the detected bbox,
the front-view and side-view renderings of the reconstructed meshes,
and an npz file containing all the estimation information.

One can change the demo options in the script.
Please see the option descriptions at the bottom of `demo.py`.
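
For a quick sanity check, the saved npz can be opened with NumPy. This is a minimal sketch only: the output path and the stored key names are defined in `demo.py`, so the file name below is a placeholder, not the confirmed interface.

```python
import numpy as np

# Hypothetical path: demo.py names the npz after the input (see the script)
results = np.load("assets/im08036/im08036_cliff_hr48.npz", allow_pickle=True)

# Print whatever demo.py actually stored, without assuming specific keys
for key in results.files:
    print(key, results[key].shape)
```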

## Pseudo-GT

Run the script to visualize the pseudo-GT; press the `<-` and `->` keys to move between samples, and `esc` to exit.

```shell
sh scripts/run_cliffGT_visualization.sh
```

<p float="left">
<img src="assets/visualized_cliffGT/viz_cliffGT_perImg-COCO_sample.jpg" width="49%">
<img src="assets/visualized_cliffGT/viz_cliffGT_perSubject-MPII_sample.jpg" width="49%">
</p>

Data format:
```
'imgname': # image name, e.g., images/015601864.jpg, train2014/COCO_train2014_000000044474.jpg
'center': # bbox center, [x, y]
'scale': # bbox scale, bbox_size_in_pixel / 200.
'part': # 2D keypoint annotation, shape (24, 3), [x, y, conf], see common/skeleton_drawer.py for the order
'annot_id': # annotation ID, only available for the COCO dataset
'pose': # SMPL pose parameters in axis-angle, shape (72,)
'shape': # SMPL shape parameters, shape (10,)
'has_smpl':   # whether the SMPL parameters are available (true for all samples)
'global_t':   # Pelvis translation in the camera coordinate system w.r.t. the original full-frame image
'focal_l': # estimated focal length for the original image, np.sqrt(img_w ** 2 + img_h ** 2)
'S': # 3D joints with Pelvis aligned at (0, 0, 0), shape (24, 4), [x, y, z, conf], same order as 'part'
```
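
For reference, here is a minimal sketch of reading these fields with NumPy (the file path is an assumption based on the directory structure above; the keys are exactly the fields documented):

```python
import numpy as np

# Path assumed from the directory structure above
gt = np.load("mpii_cliffGT.npz")

print(gt["imgname"][0])   # e.g., images/015601864.jpg
print(gt["pose"].shape)   # (N, 72): SMPL pose in axis-angle
print(gt["shape"].shape)  # (N, 10): SMPL shape parameters
print(gt["part"].shape)   # (N, 24, 3): 2D keypoints [x, y, conf]
print(gt["S"].shape)      # (N, 24, 4): 3D joints [x, y, z, conf]

# 'scale' stores bbox_size_in_pixel / 200, so the bbox size in pixels is:
bbox_size_px = gt["scale"][0] * 200.0

# 'focal_l' is the per-image estimate np.sqrt(img_w ** 2 + img_h ** 2)
print(gt["focal_l"][0])
```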

## Issues
When submitting an issue, please add **"[CLIFF]"** before the title, e.g., `[CLIFF] your issue title`.
This clarifies which subproject the issue refers to.

## Citing
```
@Inproceedings{li2022cliff,
  Title     = {CLIFF: Carrying Location Information in Full Frames into Human Pose and Shape Estimation},
  Author    = {Li, Zhihao and Liu, Jianzhuang and Zhang, Zhensong and Xu, Songcen and Yan, Youliang},
  Booktitle = {ECCV},
  Year      = {2022}
}
```
65 changes: 65 additions & 0 deletions CLIFF/THIRD PARTY OPEN SOURCE SOFTWARE NOTICE.txt
@@ -0,0 +1,65 @@
Please note we provide an open source software notice for the third party open source software along with this software and/or this software component contributed by Huawei (in the following just “this SOFTWARE”). The open source software licenses are granted by the respective right holders.

Warranty Disclaimer
THE OPEN SOURCE SOFTWARE IN THIS SOFTWARE IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY, WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE APPLICABLE LICENSES FOR MORE DETAILS.

Copyright Notice and License Texts
Software: SPIN (https://github.com/nkolot/SPIN)
Copyright notice:
Copyright (c) 2019,
University of Pennsylvania,
Max Planck Institute for Intelligent Systems
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.

3. The names of the contributors may not be used to endorse or
promote products derived from this software without specific
prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Copyright Notice and License Texts
Software: HRNet-Image-Classification (https://github.com/HRNet/HRNet-Image-Classification)
Copyright notice:
MIT License

Copyright (c) 2019 Microsoft Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Binary file added CLIFF/assets/im00492/im00492.png
Binary file added CLIFF/assets/im00492/im00492_bbox.jpg
Binary file added CLIFF/assets/im00492/im00492_front_view_cliff_hr48.jpg
Binary file added CLIFF/assets/im00492/im00492_side_view_cliff_hr48.jpg
Binary file added CLIFF/assets/im08036/im08036.png
Binary file added CLIFF/assets/im08036/im08036_bbox.jpg
Binary file added CLIFF/assets/im08036/im08036_front_view_cliff_hr48.jpg
Binary file added CLIFF/assets/im08036/im08036_side_view_cliff_hr48.jpg
Binary file added CLIFF/assets/teaser.gif
Binary file added CLIFF/assets/visualized_cliffGT/viz_cliffGT_perImg-COCO_sample.jpg
Binary file added CLIFF/assets/visualized_cliffGT/viz_cliffGT_perSubject-MPII_sample.jpg
23 changes: 23 additions & 0 deletions CLIFF/common/constants.py
@@ -0,0 +1,23 @@
# Copyright (C) 2022. Huawei Technologies Co., Ltd. All rights reserved.

# This program is free software; you can redistribute it and/or modify it
# under the terms of the MIT license.

# This program is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the MIT License for more details.

import os
from os.path import join

curr_dir = os.path.dirname(os.path.abspath(__file__))
SMPL_MEAN_PARAMS = join(curr_dir, '../data/smpl_mean_params.npz')
SMPL_MODEL_DIR = join(curr_dir, '../data')

CROP_IMG_HEIGHT = 256
CROP_IMG_WIDTH = 192
CROP_ASPECT_RATIO = CROP_IMG_HEIGHT / float(CROP_IMG_WIDTH)

# Mean and standard deviation for normalizing the input image (ImageNet statistics)
IMG_NORM_MEAN = [0.485, 0.456, 0.406]
IMG_NORM_STD = [0.229, 0.224, 0.225]
150 changes: 150 additions & 0 deletions CLIFF/common/imutils.py
@@ -0,0 +1,150 @@
# Copyright (c) 2019, University of Pennsylvania, Max Planck Institute for Intelligent Systems
# This script is borrowed and extended from SPIN

import cv2
import torch
import numpy as np
from torch.nn import functional as F

from common import constants


def get_transform(center, scale, res, rot=0):
    """Generate transformation matrix."""
    # res: (height, width), (rows, cols)
    crop_aspect_ratio = res[0] / float(res[1])
    h = 200 * scale
    w = h / crop_aspect_ratio
    t = np.zeros((3, 3))
    t[0, 0] = float(res[1]) / w
    t[1, 1] = float(res[0]) / h
    t[0, 2] = res[1] * (-float(center[0]) / w + .5)
    t[1, 2] = res[0] * (-float(center[1]) / h + .5)
    t[2, 2] = 1
    if rot != 0:
        rot = -rot  # To match direction of rotation from cropping
        rot_mat = np.zeros((3, 3))
        rot_rad = rot * np.pi / 180
        sn, cs = np.sin(rot_rad), np.cos(rot_rad)
        rot_mat[0, :2] = [cs, -sn]
        rot_mat[1, :2] = [sn, cs]
        rot_mat[2, 2] = 1
        # Need to rotate around center
        t_mat = np.eye(3)
        t_mat[0, 2] = -res[1] / 2
        t_mat[1, 2] = -res[0] / 2
        t_inv = t_mat.copy()
        t_inv[:2, 2] *= -1
        t = np.dot(t_inv, np.dot(rot_mat, np.dot(t_mat, t)))
    return t
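
# Worked example (hypothetical values): center=(320, 240), scale=1.0 and
# res=(256, 192) give h = 200 and w = h / (256/192) = 150, and the matrix
# maps the bbox center to the crop center: t @ [320, 240, 1] = [96, 128, 1],
# i.e., (res[1] / 2, res[0] / 2).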


def transform(pt, center, scale, res, invert=0, rot=0):
    """Transform pixel location to different reference."""
    t = get_transform(center, scale, res, rot=rot)
    if invert:
        t = np.linalg.inv(t)
    new_pt = np.array([pt[0] - 1, pt[1] - 1, 1.]).T
    new_pt = np.dot(t, new_pt)
    return np.array([round(new_pt[0]), round(new_pt[1])], dtype=int) + 1


def crop(img, center, scale, res):
    """
    Crop image according to the supplied bounding box.
    res: [rows, cols]
    """
    # Upper left point
    ul = np.array(transform([1, 1], center, scale, res, invert=1)) - 1
    # Bottom right point
    br = np.array(transform([res[1] + 1, res[0] + 1], center, scale, res, invert=1)) - 1

    # Padding so that when rotated proper amount of context is included
    pad = int(np.linalg.norm(br - ul) / 2 - float(br[1] - ul[1]) / 2)

    new_shape = [br[1] - ul[1], br[0] - ul[0]]
    if len(img.shape) > 2:
        new_shape += [img.shape[2]]
    new_img = np.zeros(new_shape, dtype=np.float32)

    # Range to fill new array
    new_x = max(0, -ul[0]), min(br[0], len(img[0])) - ul[0]
    new_y = max(0, -ul[1]), min(br[1], len(img)) - ul[1]
    # Range to sample from original image
    old_x = max(0, ul[0]), min(len(img[0]), br[0])
    old_y = max(0, ul[1]), min(len(img), br[1])
    try:
        new_img[new_y[0]:new_y[1], new_x[0]:new_x[1]] = img[old_y[0]:old_y[1], old_x[0]:old_x[1]]
    except Exception as e:
        print(e)

    new_img = cv2.resize(new_img, (res[1], res[0]))  # (cols, rows)

    return new_img, ul, br


def bbox_from_detector(bbox, rescale=1.1):
    """
    Compute the center and scale of a bounding box.
    The expected format is [min_x, min_y, max_x, max_y].
    """
    # center
    center_x = (bbox[0] + bbox[2]) / 2.0
    center_y = (bbox[1] + bbox[3]) / 2.0
    center = torch.tensor([center_x, center_y])

    # scale
    bbox_w = bbox[2] - bbox[0]
    bbox_h = bbox[3] - bbox[1]
    bbox_size = max(bbox_w * constants.CROP_ASPECT_RATIO, bbox_h)
    scale = bbox_size / 200.0
    # adjust bounding box tightness
    scale *= rescale
    return center, scale
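
# Worked example (hypothetical box): bbox = [100, 50, 300, 450] gives
# center = (200, 250), bbox_w = 200 and bbox_h = 400; since
# bbox_w * CROP_ASPECT_RATIO = 200 * 4/3 ~= 266.7 < 400, bbox_size = 400,
# and scale = 400 / 200 * 1.1 = 2.2.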


def process_image(orig_img_rgb, bbox,
                  crop_height=constants.CROP_IMG_HEIGHT,
                  crop_width=constants.CROP_IMG_WIDTH):
    """
    Read image, do preprocessing and possibly crop it according to the bounding box.
    If there are bounding box annotations, use them to crop the image.
    If no bounding box is specified but openpose detections are available, use them to get the bounding box.
    """
    try:
        center, scale = bbox_from_detector(bbox)
    except Exception as e:
        print("Error occurs in person detection", e)
        # Assume that the person is centered in the image
        height = orig_img_rgb.shape[0]
        width = orig_img_rgb.shape[1]
        center = np.array([width // 2, height // 2])
        scale = max(height, width * crop_height / float(crop_width)) / 200.

    img, ul, br = crop(orig_img_rgb, center, scale, (crop_height, crop_width))
    crop_img = img.copy()

    img = img / 255.
    mean = np.array(constants.IMG_NORM_MEAN, dtype=np.float32)
    std = np.array(constants.IMG_NORM_STD, dtype=np.float32)
    norm_img = (img - mean) / std
    norm_img = np.transpose(norm_img, (2, 0, 1))

    return norm_img, center, scale, ul, br, crop_img


def rot6d_to_rotmat(x):
    """Convert 6D rotation representation to 3x3 rotation matrix.
    Based on Zhou et al., "On the Continuity of Rotation Representations in Neural Networks", CVPR 2019
    Input:
        (B,6) Batch of 6-D rotation representations
    Output:
        (B,3,3) Batch of corresponding rotation matrices
    """
    x = x.view(-1, 3, 2)
    a1 = x[:, :, 0]
    a2 = x[:, :, 1]
    b1 = F.normalize(a1)
    # Gram-Schmidt: remove the b1 component from a2, then normalize
    b2 = F.normalize(a2 - torch.einsum('bi,bi->b', b1, a2).unsqueeze(-1) * b1)
    # Cross along the vector dimension; without dim=1, torch.cross would
    # silently pick the batch dimension whenever the batch size is 3
    b3 = torch.cross(b1, b2, dim=1)
    return torch.stack((b1, b2, b3), dim=-1)
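
# A quick self-check, not part of the original file: for any random 6D input,
# the output should be a valid rotation matrix (R^T R = I, det R = 1).
if __name__ == "__main__":
    x = torch.randn(4, 6)
    R = rot6d_to_rotmat(x)
    I = torch.eye(3).expand(4, 3, 3)
    assert torch.allclose(torch.matmul(R.transpose(1, 2), R), I, atol=1e-5)
    assert torch.allclose(torch.det(R), torch.ones(4), atol=1e-5)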