
XFeat + LightGlue #34

Open
guipotje opened this issue Jul 10, 2024 · 11 comments
@guipotje
Collaborator

Hello everyone,

I'm training some LightGlue variations (seeking a neat trade-off between model size and accuracy), and I will update the repo with the model and weights in the next few weeks!

You can follow this issue if you are interested.

Best,

Guilherme

@cxb1998

cxb1998 commented Jul 16, 2024

That's great! Looking forward to your work.

@guipotje
Collaborator Author

Hey guys, I just released a version of the LightGlue matcher, please check it out in the README.

@muchluv525

Hey guys, I just released a version of the LightGlue matcher, please check it out in the README.

Hello author, didn't you say that XFeat's performance is better than SuperPoint's? Why do the numbers this time show SuperPoint is stronger?

@guipotje
Collaborator Author

Hi @muchluv525,

Please note that we trained a smaller and faster version of LightGlue.

Beyond that, there are a few reasons why SuperPoint + LightGlue (full size) might still be better than XFeat + LG:
1 - Descriptor embedding: XFeat extracts much more compact descriptors (64-D vs. 256-D);
2 - SuperPoint has a much larger backbone.
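For intuition on point 1, here is a minimal mutual-nearest-neighbor matcher on L2-normalized descriptors (a sketch, not XFeat's actual matching code; the 0.82 cosine threshold is illustrative). The similarity matrix for 64-D descriptors costs a quarter of the compute and memory of the 256-D case:

```python
import numpy as np

def mutual_nn_match(desc0, desc1, min_cossim=0.82):
    """Mutual nearest-neighbor matching on L2-normalized descriptors."""
    sim = desc0 @ desc1.T                 # cosine similarity matrix, O(n*m*dim)
    nn01 = sim.argmax(axis=1)             # best match in image 1 for each desc in image 0
    nn10 = sim.argmax(axis=0)             # best match in image 0 for each desc in image 1
    ids0 = np.arange(len(desc0))
    mutual = nn10[nn01] == ids0           # keep only mutual agreements
    strong = sim[ids0, nn01] > min_cossim # and sufficiently similar pairs
    keep = mutual & strong
    return np.stack([ids0[keep], nn01[keep]], axis=1)

rng = np.random.default_rng(0)
for dim in (64, 256):                     # XFeat vs. SuperPoint descriptor sizes
    d0 = rng.standard_normal((500, dim)).astype(np.float32)
    d0 /= np.linalg.norm(d0, axis=1, keepdims=True)
    d1 = np.roll(d0, 3, axis=0)           # "image 1" is a permuted copy of "image 0"
    matches = mutual_nn_match(d0, d1)
    print(dim, len(matches))
```

The similarity matrix is the only part that scales with descriptor dimension, which is where the 64-D embedding saves time; the trade-off is that a lower-dimensional embedding carries less discriminative information for the matcher to exploit.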

@zhouzq-thu

And the number of layers is just 6. BTW, would you upload the training code of LightGlue for XFeat?

@muchluv525

Hi,

Please note that we trained a smaller and faster version of LightGlue.

Beyond that, there are a few reasons why SuperPoint + LightGlue (full size) might still be better than XFeat + LG: 1 - Descriptor embedding: XFeat extracts much more compact descriptors (64-D vs. 256-D); 2 - SuperPoint has a much larger backbone.

I see, thank you for your answer. I'm a beginner and look forward to a more detailed explanation of XFeat.

@noahzn

noahzn commented Aug 2, 2024

@guipotje Have you tried training Xfeat and LightGlue end-to-end?

@guipotje
Collaborator Author

guipotje commented Aug 6, 2024

Hello @noahzn, I haven't tried to train it end-to-end. It might deliver some improvements, as mentioned in SuperGlue's paper (section 5.4), when backpropagating through the descriptors. However, it also might lead to less generalization to different scenes.

@zhangchang0127

Hello, I encountered an error while exporting the model to ONNX with torch.export. Here is the code we used for the conversion.
[screenshot: model-conversion code, 2024-08-12 14:43:37]

This is the error.
[screenshot: error message, 2024-08-12 14:44:29]

@EndlessPeak

Hello,

Thank you very much for your great work.

I am curious about the modifications made to the LightGlue network structure to achieve the balance between inference accuracy and speed mentioned in the README. Will this part of the code be made publicly available?

I checked the match_lighterglue.py file, but it does not provide more information on this aspect.

@noahzn

noahzn commented Sep 3, 2024

Hello @noahzn, I haven't tried to train it end-to-end. It might deliver some improvements, as mentioned in SuperGlue's paper (section 5.4), when backpropagating through the descriptors. However, it also might lead to less generalization to different scenes.

Hi @guipotje, I might try end-to-end training; do you have any ideas about the implementation? Since XFeat and LightGlue use different homography code for training, I'm wondering if it's possible to keep their own random homography code but optimize the two networks together. This would require XFeat to return a loss, which we then add to LightGlue's loss and backprop together. Do you think this is the minimal-effort way to train the two networks end-to-end?
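The minimal scheme described above can be sketched as follows (a toy sketch, not the actual XFeat or LightGlue training code: the two `Linear` layers stand in for the extractor and matcher, and both losses are placeholders). Each network keeps its own loss, the two losses are summed, and a single backward pass sends gradients into both networks, including gradients from the matcher's loss back through the descriptors into the extractor:

```python
import torch

extractor = torch.nn.Linear(32, 64)  # stands in for XFeat's descriptor head
matcher = torch.nn.Linear(64, 1)     # stands in for LightGlue

# One optimizer over both parameter sets.
opt = torch.optim.Adam(
    list(extractor.parameters()) + list(matcher.parameters()), lr=1e-3
)

feats = extractor(torch.randn(8, 32))        # descriptors stay in the graph
extractor_loss = feats.pow(2).mean()         # placeholder for XFeat's own loss
matcher_loss = matcher(feats).pow(2).mean()  # placeholder for LightGlue's loss

opt.zero_grad()
(extractor_loss + matcher_loss).backward()   # matcher_loss also reaches the extractor
opt.step()
```

The key point is only that `feats` is not detached between the two networks; each loss can still be computed from its own data pipeline.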
