Simulation Depth Images
Basic Blender scene rendered as depth image:
If a virtual camera is created in Blender with the same camera parameters as the real camera being used (Kinect, ASUS, RealSense, ...), a depth image can be obtained that is similar to the one produced by the real RGBD sensor. I guess you could even add noise in Blender to make the image closer to what is obtained in the real world. With the resulting depth image and the camera parameters, a point cloud can be computed (similar to what PCL does with the image received from the RGBD sensor).
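As a rough illustration of that back-projection step, here is a minimal NumPy sketch of the pinhole camera model; the intrinsics `fx`, `fy`, `cx`, `cy` below are placeholder Kinect-like values, not the calibration of any camera actually used here:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to a 3D point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth

# Placeholder Kinect-like intrinsics (an assumption, not measured values)
fx = fy = 525.0
cx, cy = 319.5, 239.5
fake_depth = np.random.uniform(0.5, 4.0, (480, 640))  # stand-in for a render
cloud = depth_to_point_cloud(fake_depth, fx, fy, cx, cy)
```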
An important thing to take into account is that, even though units didn't matter too much until now, we are now simulating a real-world camera, so we have to start taking into account the dimensions of the objects and the environment we are using.
Apart from that, another issue is how to store the depth image in an image file. For the pictures in this comment I have used normalization to represent all the depth values in an 8-bit interval (0 to 255). If normalization is not used, the depth image is clipped to that range, unless an image format with a higher dynamic range is used, such as EXR. This file format is intended for HDR photographs, and there is a library (OpenEXR) and some plugins for scikit-image to read such images, but I haven't been able to make the plugin work yet.
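For reference, the normalization described above can be done along these lines. This is only a sketch: the depth array is assumed to be a float map (e.g. Blender's Z pass) already dumped to disk, and the file names are hypothetical:

```python
import numpy as np
from skimage import io

# Hypothetical float depth map, e.g. exported from Blender's Z pass.
depth = np.load('depth.npy')

# Map the full depth range onto 0..255 so it fits in an 8-bit image;
# without this step, values above 255 would simply be clipped.
valid = np.isfinite(depth)
d_min, d_max = depth[valid].min(), depth[valid].max()
scale = 255.0 / (d_max - d_min)  # assumes the depth map is not flat

depth_8bit = np.zeros(depth.shape, dtype=np.uint8)
depth_8bit[valid] = np.round(scale * (depth[valid] - d_min)).astype(np.uint8)

io.imsave('depth_normalized.png', depth_8bit)
```

Note that this normalization loses the absolute scale: to recover metric depth from the PNG you would also need to store `d_min` and `d_max`, which is one reason a high-dynamic-range format like EXR is attractive.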
And now, some additional experiments: