I think it is possible, but methods to reduce noise in the depth maps are outside the scope of this project. Assuming the depth maps are perfect, I could work on providing such an interface later.
Thanks for your reply! Assuming I have some perfect RGBD images, I would like to obtain a mesh for an unbounded scene. Traditional volumetric-fusion-based methods allow fusing them into a global TSDF, but they scale badly at meshing time for large scenes, while your proposed method seems to be an awesome solution. If I want to extend your method myself, which files should I look at in particular? Thanks.
I recommend constructing your own SDF function based on the depth maps and wrapping it as a function to feed to the "kernels" argument in ocmesher/core.py. (kernels is a list of functions; in your case it can just be a singleton.)
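In case it helps, here is a minimal sketch of such a singleton kernel, assuming each entry in kernels is a callable that takes an (N, 3) array of world-space query points and returns an (N,) array of signed distances; please check the exact signature and sign convention expected by ocmesher/core.py. The `make_depth_sdf_kernel` helper, the +z-forward pinhole camera model, and the `trunc` parameter are my own illustration of TSDF-style fusion from posed depth maps, not part of the repository.

```python
import numpy as np

def make_depth_sdf_kernel(depth_maps, intrinsics, cam_to_worlds, trunc=0.1):
    """Fuse posed depth maps into a single SDF-like callable (hypothetical helper).

    depth_maps:    list of (H, W) depth images
    intrinsics:    list of 3x3 pinhole intrinsic matrices
    cam_to_worlds: list of 4x4 camera-to-world poses (+z forward)
    trunc:         truncation distance, as in classic TSDF fusion
    """
    world_to_cams = [np.linalg.inv(c2w) for c2w in cam_to_worlds]

    def sdf(points):
        points = np.asarray(points, dtype=np.float64)              # (N, 3) world-space queries
        n = points.shape[0]
        acc = np.zeros(n)
        weight = np.zeros(n)
        homog = np.concatenate([points, np.ones((n, 1))], axis=1)  # (N, 4)

        for depth, K, w2c in zip(depth_maps, intrinsics, world_to_cams):
            cam = (w2c @ homog.T).T[:, :3]                         # camera-space coordinates
            z = cam[:, 2]
            in_front = z > 1e-6
            uv = (K @ cam.T).T                                     # pinhole projection
            u = np.round(uv[:, 0] / np.maximum(z, 1e-6)).astype(int)
            v = np.round(uv[:, 1] / np.maximum(z, 1e-6)).astype(int)
            h, w = depth.shape
            valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

            d_obs = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0.0)
            # Signed distance along the viewing ray, truncated as in TSDF fusion:
            # positive in free space, negative behind the observed surface.
            # Flip the sign here if core.py expects the opposite convention.
            sdf_i = np.clip(d_obs - z, -trunc, trunc)
            use = valid & (d_obs > 0) & (sdf_i > -trunc)           # skip unobserved/occluded points
            acc += np.where(use, sdf_i, 0.0)
            weight += use

        # Points never observed by any view default to "outside".
        return np.where(weight > 0, acc / np.maximum(weight, 1.0), trunc)

    return sdf

# Hypothetical usage, following the comment above:
# kernels = [make_depth_sdf_kernel(depths, Ks, poses)]
```

Averaging truncated per-view distances keeps the kernel cheap to evaluate at the scattered query points the mesher requests, which is the main thing a kernel needs to support here.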
Great work. I wonder how this method can be extended to multi-view depth maps. Can I use it for unbounded mesh extraction in a fashion similar to TSDF fusion?