
Why can't I get glossy results when training with Blender images? #5

Jemmagu opened this issue Feb 9, 2022 · 8 comments
Jemmagu commented Feb 9, 2022

Hi @Kai-46,
Thank you for your excellent work! However, I ran into a problem when trying to train on my own images. I rendered the images with the Blender renderer and kept the training set in the same format as your kitty dataset. The training process seems fine, except that it never produces glossy results at any point during training. Do you have any idea why this happens? For example, is there some difference between Mitsuba and Blender, or do the rendered images need post-processing?

I really hope to get your reply; thanks in advance!

Jemmagu commented Feb 15, 2022

Hi @Kai-46, I am curious about the initial roughness. Why did you choose a large initial roughness rather than a small one?
Thanks!
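For context, a common reason for starting with a large roughness is that it makes the initial BRDF nearly diffuse, so optimization can first recover geometry and albedo and only later sharpen specular lobes. A minimal illustrative sketch of that idea (an assumption about the general technique, not PhySG's actual initialization code):

```python
import torch
import torch.nn as nn

class RoughnessParam(nn.Module):
    """Illustrative roughness parameterization (hypothetical, not PhySG's code).

    The raw parameter is passed through a sigmoid so roughness stays in (0, 1).
    Initializing the raw value at a large positive number makes the initial
    roughness close to 1.0, i.e. an almost diffuse BRDF; the optimizer can then
    lower it if the images support sharp specular highlights.
    """
    def __init__(self, init_raw: float = 3.0):
        super().__init__()
        self.raw = nn.Parameter(torch.tensor(init_raw))

    def forward(self) -> torch.Tensor:
        return torch.sigmoid(self.raw)  # ~0.95 for init_raw = 3.0

if __name__ == "__main__":
    rough = RoughnessParam()
    print(float(rough()))  # starts near 1.0 (diffuse) and can shrink during training
```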

Kai-46 commented Feb 16, 2022

Hi, can you post sample images here? Is your object glossy?

Jemmagu commented Feb 16, 2022

Hi,
Yes, the object is glossy.
You mentioned scene normalization in the NeRF++ data processing, so do I need to do camera normalization in PhySG? When rendering with Blender, I placed the object inside the unit sphere but did not normalize the cameras. The rendered results (shape, material) look fine except for the glossy part, which is totally diffuse. So I am wondering what the problem is: maybe rendering with Blender needs extra attention, or the initial roughness matters, or something like that...

Kai-46 commented Feb 16, 2022

Hi,

It's very hard for me to judge what could be problematic in your case without any visuals.

Can you try the following things to check if your camera poses are correct, and post the results?

  1. inspect the camera epipolar geometry like the NeRF++ codebase did
  2. visualize the camera and geometry like the NeRF++ codebase did. Note that this codebase requires slightly different normalization than NeRF++. Basically, NeRF++ puts all the cameras inside the unit sphere, while this one only puts the geometry inside the unit sphere (not necessarily the cameras)
  3. use the run_colmap_posed.py script in NeRF++ to reconstruct the geometry using conventional COLMAP MVS
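As a concrete starting point for checks 1 and 2 above, here is a minimal pose sanity-check sketch. It assumes a PhySG-style JSON with per-image row-major 4x4 "W2C" matrices (the file layout and key names are assumptions, not verified against the released data), recovers each camera center, and reports whether the camera roughly looks at the object near the origin. The object should fit inside the unit sphere; the cameras need not.

```python
import json
import numpy as np

def check_poses(cam_json_path: str) -> None:
    """Quick sanity check of camera poses (illustrative, not PhySG's own tooling).

    Assumes a JSON of the form {image_name: {"W2C": [16 floats, row-major]}},
    which is only a guess at the format; adapt the parsing to your own files.
    """
    with open(cam_json_path) as f:
        cam_dict = json.load(f)

    for name, cam in cam_dict.items():
        w2c = np.array(cam["W2C"], dtype=np.float64).reshape(4, 4)
        R, t = w2c[:3, :3], w2c[:3, 3]
        center = -R.T @ t                               # camera center in world coordinates
        view_dir = R.T @ np.array([0.0, 0.0, 1.0])      # camera +z axis in world frame (OpenCV convention)
        to_origin = -center / (np.linalg.norm(center) + 1e-12)
        cos_angle = float(view_dir @ to_origin)
        print(f"{name}: |center| = {np.linalg.norm(center):.3f}, "
              f"cos(angle to origin) = {cos_angle:.3f}")
        # Expect cos(angle) close to 1 if the camera points at the object near the
        # origin; the object should sit inside the unit sphere, cameras may be outside.
```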

Jemmagu commented Feb 17, 2022

When rendering with Mitsuba, do you do any post-processing operations? Or do you just take the rendered W2C matrix and write it out in the JSON format?
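For reference, a minimal sketch of converting a Blender camera matrix into an OpenCV-style W2C matrix plus intrinsics, assuming PhySG expects per-image "K" and "W2C" entries in a JSON similar to the kitty example. The key names, the 4x4 K layout, and the cam_dict_norm.json file name below are assumptions to verify against the released data:

```python
import numpy as np

def blender_cam_to_physg_entry(c2w_blender, focal_mm, sensor_width_mm, width, height):
    """Convert a Blender camera to an OpenCV-style K / W2C pair (illustrative sketch).

    Blender cameras look down -Z with +Y up; OpenCV cameras look down +Z with -Y up.
    Flipping the Y and Z axes of the camera frame converts between the two.
    The dictionary layout written here is an assumption, not the verified PhySG format.
    """
    c2w = np.asarray(c2w_blender, dtype=np.float64).reshape(4, 4)
    flip = np.diag([1.0, -1.0, -1.0, 1.0])      # Blender camera frame -> OpenCV camera frame
    c2w_cv = c2w @ flip
    w2c = np.linalg.inv(c2w_cv)

    fx = focal_mm / sensor_width_mm * width      # focal length in pixels
    fy = fx                                      # square pixels assumed
    K = np.array([[fx, 0.0, width / 2.0, 0.0],
                  [0.0, fy, height / 2.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return {"K": K.flatten().tolist(),
            "W2C": w2c.flatten().tolist(),
            "img_size": [width, height]}

# Hypothetical usage: cam_dict[image_name] = blender_cam_to_physg_entry(...)
# then dump cam_dict to a JSON file (e.g. "cam_dict_norm.json", name assumed).
```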

@Woolseyyy

I am running into the same problem. Any ideas?

@Woolseyyy

(quoting @Kai-46's reply above)

Hi @Kai-46,
There is a parameter named "object_bounding_sphere" in the .conf file. Is it necessary to make sure the object is inside the unit sphere, or can I just modify this parameter?
I tried some complex scenes, and it seems difficult to change their size. As a result, I changed this parameter instead, but the results look strange.

Trained with IDR: [image]

Trained with PhySG: [image]
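For reference, rather than enlarging object_bounding_sphere, one option is to uniformly rescale the scene so the object fits inside the unit sphere. A minimal sketch, assuming row-major 4x4 W2C matrices as above (illustrative, not PhySG's own tooling):

```python
import numpy as np

def rescale_scene(w2c_list, scale):
    """Uniformly shrink the scene so the object fits inside the unit sphere.

    Dividing every camera translation by `scale` is equivalent to shrinking all
    world geometry by the same factor (a pure similarity transform), so the
    images remain consistent with the new, smaller scene. `scale` should be an
    upper bound on the object's radius around the world origin.
    """
    rescaled = []
    for w2c in w2c_list:
        w2c = np.asarray(w2c, dtype=np.float64).reshape(4, 4).copy()
        w2c[:3, 3] /= scale   # translation shrinks together with the scene
        rescaled.append(w2c)
    return rescaled
```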

@ThreeSRR

(quoting @Kai-46's reply and @Woolseyyy's comment above)

Hi @Woolseyyy,

When trying a complex scene, I got rendering results similar to yours. Have you solved this problem? Thanks a lot.
