Does this method support integrating multiple RGBD streams like FWD?
If "multiple" means that you do not have to retrain for a specific stream, then yes: the method should generalize well across different RGB sensors and scenes, provided the supplied depth maps are of good quality. If "multiple" means several streams at once for one scene, then the method itself would support it, but the sensor interfaces are not part of this code base, so you would have to implement them yourself. If that did not answer your question: do you have a link or something that shows the FWD use case you're referring to?
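Since the repo ships no sensor interface, wiring several RGBD streams into one keyframe pool is left to the user. A minimal sketch of what that glue could look like is below; everything here (`RGBDFrame`, `sensor_reader`, the queue layout) is hypothetical and not part of this code base:

```python
import queue
import threading
from dataclasses import dataclass

import numpy as np


@dataclass
class RGBDFrame:
    """One posed RGBD frame from a single sensor (hypothetical container)."""
    rgb: np.ndarray      # H x W x 3, uint8 color
    depth: np.ndarray    # H x W, metric depth
    pose: np.ndarray     # 4 x 4 camera-to-world matrix
    stream_id: int       # which sensor produced this frame


def sensor_reader(stream_id, frames, out_queue):
    """Push frames from one sensor into a shared thread-safe queue."""
    for rgb, depth, pose in frames:
        out_queue.put(RGBDFrame(rgb, depth, pose, stream_id))


# Merge two synthetic streams into one pool, as a renderer might consume it.
shared = queue.Queue()
fake = lambda: (np.zeros((4, 4, 3), np.uint8), np.ones((4, 4)), np.eye(4))
threads = [
    threading.Thread(target=sensor_reader, args=(i, [fake(), fake()], shared))
    for i in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

keyframes = []
while not shared.empty():
    keyframes.append(shared.get())
print(len(keyframes))  # 4 frames pooled from both streams
```

The important part is only that frames from all sensors end up posed in one common world frame before fusion; how you read them off the hardware is up to you.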
Does this support real-time synthesis of dynamic scenes, such as a person walking around?
No, LiveNVS generally assumes a static scene. Small "outliers" might be filtered out by the image fusion process, but we do not actively model any dynamic scene parts.
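To illustrate why small transient outliers can fade out during fusion, here is a toy confidence-weighted per-pixel average over aligned views. This is only a sketch of the general idea, not LiveNVS's learned fusion; all names are made up:

```python
import numpy as np


def fuse_views(stacked, sigma=25.0):
    """Confidence-weighted per-pixel fusion of N aligned color images.

    Pixels far from the per-pixel median (e.g. a person briefly crossing
    one view) are exponentially down-weighted, so small dynamic outliers
    are suppressed in the fused result. Toy sketch only.
    """
    stacked = stacked.astype(np.float64)   # N x H x W x C
    median = np.median(stacked, axis=0)    # robust reference image
    err = np.linalg.norm(stacked - median, axis=-1, keepdims=True)
    w = np.exp(-((err / sigma) ** 2))      # near-zero weight for outliers
    return (w * stacked).sum(axis=0) / w.sum(axis=0)


# Three aligned views of a gray wall; view 1 has a dark transient blob.
views = np.full((3, 8, 8, 3), 128.0)
views[1, 2:5, 2:5] = 0.0
fused = fuse_views(views)
# The blob region stays close to the static background value (~128).
```

A genuinely dynamic subject would dominate most views at once, so no such filtering can recover it; that is why a static scene is assumed.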