
what kind of matching is this? #9

Open
royw99 opened this issue May 29, 2024 · 5 comments

royw99 commented May 29, 2024

Is this dense matching or keypoint matching?

royw99 (Author) commented May 29, 2024

> OmniGlue chooses to focus on sparse methods

The paper seems to suggest that only the descriptors generated by SuperPoint are improved, while the coordinates (keypoints) are unchanged, so it is still sparse matching.

However, my COLMAP 3D reconstruction pipeline ends with the poor performance normally associated with dense matching (because the keypoints of every image change when that image is paired with different images).

arjunkarpur (Collaborator) commented

Our method is considered a sparse keypoint matching method. We use extracted SuperPoint features as input to OmniGlue, without any modifications to the pretrained SP model. Any issues with keypoint repeatability are likely due to the behavior of SuperPoint detection. That being said, SuperPoint should always yield the same keypoints/descriptors for a fixed input image.
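A quick way to sanity-check this is to run the extractor twice on the identical image array and compare outputs. A minimal sketch, assuming the repo's SuperPointExtract wrapper and a (keypoints, descriptors, scores) return tuple; the module layout and paths are placeholders, not confirmed API:

```python
import numpy as np
from PIL import Image

from omniglue import superpoint_extract  # assumed module layout

# Placeholder paths; point these at your local checkpoint and test image.
sp_extract = superpoint_extract.SuperPointExtract("./models/sp_v6")
image = np.asarray(Image.open("./images/demo.jpg").convert("RGB"))

# Assumed return layout: (keypoints [N, 2], descriptors [N, D], scores [N]).
kp_a, desc_a, _ = sp_extract(image)
kp_b, desc_b, _ = sp_extract(image)

# Deterministic detection should yield identical results across runs.
assert np.allclose(kp_a, kp_b) and np.allclose(desc_a, desc_b)
```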

Could you try your SfM pipeline with SuperPoint + SuperGlue/LightGlue/MNN? This would help us determine if the issue is with SuperPoint detection or with OmniGlue matches.


> ...(because the keypoints of every image change when that image is paired with different images)

One thing to note is that the OmniGlue.FindMatches() function only returns matching keypoints, not the full set of extracted keypoints. You would need to modify the code slightly here to return the raw output of self.sp_extract() to get all extracted keypoints.
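A rough sketch of that modification (the sp_extract call and tuple layout are assumptions about the library internals, not the actual code):

```python
# Sketch of a helper added to the OmniGlue class in omniglue_extract.py.
def FindMatchesWithAllKeypoints(self, image0, image1):
  """Like FindMatches, but also returns every extracted keypoint."""
  # Assumed raw SuperPoint output: (keypoints [N, 2], descriptors, scores).
  sp_features0 = self.sp_extract(image0)
  sp_features1 = self.sp_extract(image1)

  # Reuse the normal matching path (re-extracts internally; fine for a sketch).
  match_kp0, match_kp1, match_confidences = self.FindMatches(image0, image1)

  # Hand back the full detections too, so an SfM pipeline like COLMAP can
  # register a consistent keypoint set per image.
  return (match_kp0, match_kp1, match_confidences,
          sp_features0[0], sp_features1[0])
```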

royw99 (Author) commented May 30, 2024

Thanks for the reply. I have separated the keypoint detection and description step from the OmniGlue matching process. The 3D reconstruction works now, but still not as well as other models, such as SuperPoint + LightGlue. Maximizing the number of detected keypoints helps squeeze out a little more accuracy, but I can't push it any further. Any advice?

arjunkarpur (Collaborator) commented

In combination with more keypoints, I would recommend lowering the match_threshold filter value to a very low value (e.g. 0.0 < threshold < 1e-3) to allow the maximum number of candidate matches.
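For reference, a sketch of the downstream filtering this affects (assuming FindMatches returns numpy arrays of matched keypoints and per-match confidences):

```python
# og = omniglue.OmniGlue(...) constructed elsewhere.
match_kp0, match_kp1, match_confidences = og.FindMatches(image0, image1)

# A very permissive threshold keeps nearly all candidate matches.
match_threshold = 1e-4  # i.e. 0.0 < threshold < 1e-3
keep = match_confidences > match_threshold
match_kp0, match_kp1 = match_kp0[keep], match_kp1[keep]
match_confidences = match_confidences[keep]
```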

You can also try setting pad_random_features=True for the calls to self.sp_extract() here. When fewer than N (1024 by default) keypoints are detected, the method will sample keypoints randomly in the image in order to provide additional context on the image to the matching model. This is especially helpful when the keypoint budget is higher (2k, 4k, 8k).
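Something like the following inside FindMatches (a sketch; the keyword comes from the note above, while the exact call site is an assumption):

```python
# In omniglue_extract.py, where SuperPoint features are extracted:
sp_features0 = self.sp_extract(image0, pad_random_features=True)
sp_features1 = self.sp_extract(image1, pad_random_features=True)
```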

royw99 (Author) commented Jun 2, 2024

> In combination with more keypoints, I would recommend lowering the match_threshold filter value to a very low value (e.g. 0.0 < threshold < 1e-3) to allow the maximum number of candidate matches.
>
> You can also try setting pad_random_features=True for the calls to self.sp_extract() here. When fewer than N (1024 by default) keypoints are detected, the method will sample keypoints randomly in the image in order to provide additional context on the image to the matching model. This is especially helpful when the keypoint budget is higher (2k, 4k, 8k).

Hi, thanks for the tips. I have tried your tricks but it still didn't work that well (maybe it has to do with the distribution of the dataset, or maybe my implementation elsewhere is mistaken).
