In this repo, we deploy SOLOv2 to TensorRT with C++. See the video.

Requirements:
- Ubuntu 16.04/18.04/20.04
- CUDA 10.2
- cuDNN 8
- TensorRT 8.0.1
- OpenCV 3.4
- Libtorch 1.8.2
- CMake 3.20
1. Install SOLOv2 from the SOLO repo
Download it and make sure it runs successfully.
2. Export the ONNX model from the original model
- You can follow SOLOv2.tensorRT.
- Or use a pre-exported model: baidudisk, fetch code: qdsm
- Or export the model yourself. In that case, you have to modify some parts of the original SOLOv2 before exporting:
- 2.1 Modify `SOLO-master/mmdet/models/anchor_heads/solov2_head.py` (line 154).
The original code in `solov2_head.py` is:
```python
# Origin from SOLO
x_range = torch.linspace(-1, 1, ins_feat.shape[-1], device=ins_feat.device)
y_range = torch.linspace(-1, 1, ins_feat.shape[-2], device=ins_feat.device)
y, x = torch.meshgrid(y_range, x_range)
y = y.expand([ins_feat.shape[0], 1, -1, -1])
x = x.expand([ins_feat.shape[0], 1, -1, -1])
```
Change it to:
```python
# Modified for ONNX export: frozen input size = 800x800, batch size = 1
size = {0: 100, 1: 100, 2: 50, 3: 25, 4: 25}
feat_h, feat_w = ins_kernel_feat.shape[-2], ins_kernel_feat.shape[-1]
# During onnx.export() the shape entries are traced tensors, so cast them to plain ints
feat_h = int(feat_h.cpu().numpy() if isinstance(feat_h, torch.Tensor) else feat_h)
feat_w = int(feat_w.cpu().numpy() if isinstance(feat_w, torch.Tensor) else feat_w)
x_range = torch.linspace(-1, 1, feat_w, device=ins_kernel_feat.device)
y_range = torch.linspace(-1, 1, feat_h, device=ins_kernel_feat.device)
y, x = torch.meshgrid(y_range, x_range)
y = y.expand([1, 1, -1, -1])
x = x.expand([1, 1, -1, -1])
coord_feat = torch.cat([x, y], 1)
ins_kernel_feat = torch.cat([ins_kernel_feat, coord_feat], 1)
```
- 2.2 Modify `SOLO-master/mmdet/models/detectors/single_stage_ins.py`.
In the function `forward_dummy()`, add the dummy forward pass of the mask branch, like this:
```python
def forward_dummy(self, img):
    x = self.extract_feat(img)
    outs = self.bbox_head(x)
    if self.with_mask_feat_head:
        mask_feat_pred = self.mask_feat_head(
            x[self.mask_feat_head.start_level:self.mask_feat_head.end_level + 1])
        outs = (outs[0], outs[1], mask_feat_pred)
    return outs
```
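`forward_dummy()` is the forward path that gets traced during ONNX export, which is why the mask branch must be added here. For orientation, a minimal sketch of the export call (the real logic lives in `onnx_exporter.py`; building `model` from the config/checkpoint and the 1x3x384x1152 input are assumptions for illustration):
```python
import torch

# Sketch only: assumes `model` was already built from the SOLOv2 config and
# checkpoint (the actual export logic lives in demo/onnx_exporter.py).
model.forward = model.forward_dummy  # trace the dummy path, mask branch included
dummy_input = torch.randn(1, 3, 384, 1152, device="cuda")  # batch 1, KITTI-like shape
torch.onnx.export(model, dummy_input, "SOLOv2_light_R34.onnx", opset_version=11)
```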
- 2.3 Modify `SOLO-master/mmdet/models/mask_heads/mask_feat_head.py`.
At line 108 of `mask_feat_head.py`, the original code is:
```python
x_range = torch.linspace(-1, 1, input_feat.shape[-1], device=input_feat.device)
y_range = torch.linspace(-1, 1, input_feat.shape[-2], device=input_feat.device)
```
Change it to:
```python
# During onnx.export() the shape entries are traced tensors, so cast them to plain ints
feat_h, feat_w = input_feat.shape[-2], input_feat.shape[-1]
feat_h = int(feat_h.cpu().numpy() if isinstance(feat_h, torch.Tensor) else feat_h)
feat_w = int(feat_w.cpu().numpy() if isinstance(feat_w, torch.Tensor) else feat_w)
x_range = torch.linspace(-1, 1, feat_w, device=input_feat.device)
y_range = torch.linspace(-1, 1, feat_h, device=input_feat.device)
```
- 2.4 Export the ONNX model.
Move `onnx_exporter.py` and `common.py` to `SOLO/demo/`, then run:
```shell
# KITTI size
python onnx_exporter.py ../configs/solov2/solov2_light_448_r34_fpn_8gpu_3x.py ../weights/SOLOv2_light_R34.onnx --checkpoint ../checkpoints/SOLOv2_LIGHT_448_R34_3x.pth --shape 384 1152
```
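After the export finishes, a quick sanity check with the `onnx` Python package (an extra dependency, not required by this repo) can confirm the graph is valid and has the expected outputs:
```python
import onnx

# Load the exported graph, run the structural checker, and list its outputs.
model = onnx.load("../weights/SOLOv2_light_R34.onnx")
onnx.checker.check_model(model)
print([output.name for output in model.graph.output])
```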
3. Build the TensorRT model
First, edit the config file `config.yaml`:
```yaml
%YAML:1.0

IMAGE_WIDTH: 1226
IMAGE_HEIGHT: 370

# SOLO
ONNX_PATH: "/home/chen/ws/dynamic_ws/src/dynamic_vins/weights/solo/SOLOv2_light_R34_1152x384_cuda102.onnx"
SERIALIZE_PATH: "/home/chen/ws/dynamic_ws/src/dynamic_vins/weights/solo/tensorrt_model_1152x384.bin"

SOLO_NMS_PRE: 500
SOLO_MAX_PER_IMG: 100
SOLO_NMS_KERNEL: "gaussian"
SOLO_NMS_SIGMA: 2.0
SOLO_SCORE_THR: 0.1
SOLO_MASK_THR: 0.5
SOLO_UPDATE_THR: 0.2

LOG_PATH: "./segmentor_log.txt"
LOG_LEVEL: "debug"
LOG_FLUSH: "debug"

DATASET_DIR: "/media/chen/EC4A17F64A17BBF0/datasets/kitti/odometry/colors/07/image_2/"
WARN_UP_IMAGE_PATH: "/home/chen/CLionProjects/InstanceSegment/config/kitti.png"
```
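Note the `%YAML:1.0` header: the file is in OpenCV's FileStorage YAML dialect. If you want to sanity-check values before the C++ binaries read them, a minimal sketch with OpenCV's Python bindings:
```python
import cv2

# Read the OpenCV-dialect YAML and print two of the values back.
fs = cv2.FileStorage("./config/config.yaml", cv2.FILE_STORAGE_READ)
print(int(fs.getNode("IMAGE_WIDTH").real()))
print(fs.getNode("ONNX_PATH").string())
fs.release()
```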
Then compile the CMake project:
```shell
mkdir build && cd build
cmake ..
make -j10
```
Finally, build the TensorRT model:
```shell
cd ..
./build/build_model ./config/config.yaml
```
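`build_model` parses the ONNX file at `ONNX_PATH` and writes the serialized engine to `SERIALIZE_PATH`. The repo does this in C++, but the same steps look roughly like this with TensorRT's Python bindings (a sketch under the TensorRT 8.0 API; file names are placeholders):
```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# SOLOv2 is exported with a fixed batch of 1, so an explicit-batch network is used.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("SOLOv2_light_R34_1152x384_cuda102.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB scratch space (TensorRT 8.0 API)
with open("tensorrt_model_1152x384.bin", "wb") as f:
    f.write(builder.build_serialized_network(network, config))
```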
4. Run the demo
If you have the KITTI dataset, set `DATASET_DIR` in `config.yaml` to the right path, then run:
```shell
./build/segment ./config/config.yaml
```
If you don't, and just want to run on a single image, set `WARN_UP_IMAGE_PATH` in `config.yaml` to a valid image path, then run:
```shell
./build/demo ./config/config.yaml
```
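Both binaries presumably start by deserializing the engine from `SERIALIZE_PATH` instead of rebuilding it; that step looks like this in TensorRT's Python bindings (illustration only, the repo's runtime is C++):
```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("tensorrt_model_1152x384.bin", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()  # holds per-inference state
```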