Why can't I get glossy results when training using Blender images? #5
Comments
hi @Kai-46, I am curious about the initial roughness. Why did you choose a large initial roughness rather than a small one?
Hi, can you post sample images here? Is your object glossy?
Hi,
Hi, it's very hard for me to judge what could be problematic in your case without any visuals. Can you try the following things to check whether your camera poses are correct, and post the results?
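As a starting point for such a pose check, one common sanity test is to recover each camera's center in world coordinates from its world-to-camera matrix and confirm the centers lie at plausible positions around the object (e.g. roughly on a sphere looking inward). A minimal sketch, assuming 4x4 `w2c` matrices of the form `[R | t]` (the helper name `camera_center_from_w2c` is hypothetical, not part of the repo):

```python
import numpy as np

def camera_center_from_w2c(w2c):
    """Recover the camera center in world coordinates from a 4x4
    world-to-camera matrix [R | t]: the center c satisfies
    R c + t = 0, so c = -R^T t."""
    R = w2c[:3, :3]
    t = w2c[:3, 3]
    return -R.T @ t

# Example: identity rotation with translation (0, 0, -2) corresponds
# to a camera sitting at (0, 0, 2) in world coordinates.
w2c = np.array([
    [1.0, 0.0, 0.0,  0.0],
    [0.0, 1.0, 0.0,  0.0],
    [0.0, 0.0, 1.0, -2.0],
    [0.0, 0.0, 0.0,  1.0],
])
print(camera_center_from_w2c(w2c))  # [0. 0. 2.]
```

Plotting these centers for all training views quickly reveals flipped or mislabeled pose conventions.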
When rendering with Mitsuba, do you do any post-processing operations? Or do you just take the rendered world-to-camera (w2c) matrix and write it out in the JSON format?
I have the same problem; any ideas?
Hi, @Kai-46
Hi @Woolseyyy, when trying a complex scene I got similar rendering results to yours. Have you solved this problem? Thanks a lot.
hi @Kai-46,
Thank you for your excellent work!! However, I encountered a problem when trying to train on my own images. I rendered the images with the Blender renderer and kept the training set in the same format as your kitty dataset. The training process seems okay, except that it never produces glossy results at any point during training. Do you have any idea why this happens? Is there some difference between Mitsuba and Blender, or do the rendered images need post-processing?
Really hope to get your reply, thanks in advance!