
confusing about validation #15

Open
Eddylib opened this issue Sep 15, 2019 · 3 comments

Comments

@Eddylib

Eddylib commented Sep 15, 2019

Hi,
Thanks for your great work again, but I'm still confused about how to compute the validation metric.
More specifically, I cannot understand the following sentence from your paper:

For our evaluation, we use the total number of positive matches as the length of the sequence, 
since every image in the live sequence has a correspondence in the memory-sequence.

What is the definition of "positive matches"? Does it mean the correct matches?
What is the definition of "length of the sequence", and how is it used in the final P-R curve?

Currently I understand that the PR curve is produced by sweeping a threshold on the matching score, computing the precision and recall at each threshold value, and recording them to form a curve.
To compute a PR curve, two sequences should be the input: one (LABEL) contains the ground-truth labels, and the other (SCORE) contains the score of every element as estimated by the algorithm. In CALC, matches whose index is within six indices of the ground truth are labeled 1 and all others are labeled 0, and the matching scores become the second input (SCORE). Could you please explain whether there is any difference between your paper and CALC?
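To make sure I understand, here is a rough sketch of the labeling step as I read it (the function name, the ±6 tolerance, and the array layout are my own assumptions, not from your code):

```python
import numpy as np

def make_labels(pred_idx, gt_idx, tol=6):
    """Label each retrieved match 1 if its index lies within
    `tol` indices of the ground-truth correspondence, else 0."""
    pred_idx = np.asarray(pred_idx)
    gt_idx = np.asarray(gt_idx)
    return (np.abs(pred_idx - gt_idx) <= tol).astype(int)

# e.g. retrieved memory indices vs. ground-truth correspondences
labels = make_labels([10, 55, 200], [12, 50, 150])  # -> [1, 1, 0]
```

Is this the LABEL input you use, or does "positive matches" mean something else?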

Your reply will be highly appreciated!

@mpkuse
Owner

mpkuse commented Sep 20, 2019

This evaluation is only for the Uniconcourse, Gardenspoint, and Campusloop sequences. Please refer to Figure 12 of the arXiv document. In these datasets the same path is traversed twice (referred to as live, the 1st traversal, and memory, the 2nd traversal).

@Eddylib
Author

Eddylib commented Sep 23, 2019

Thanks for the reply!
But I wonder if it would be convenient to provide the evaluation code? It seems that your code in https://github.com/mpkuse/cartwheel_train cannot generate the PR curves shown in the paper.

@mpkuse
Owner

mpkuse commented Sep 23, 2019

Hi Eddy,
Actually that code is a mess. It is easiest to implement it yourself. Perhaps this could help you a bit: https://github.com/mpkuse/vins_mono_debug_pkg/blob/master/src_place_recog/place_recog_analysis_tool.py

Basically I set various thresholds and record the precision-recall and plot it myself later.
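In code it is roughly something like this (a sketch, not the actual analysis script; the names and the score convention, higher score = better match, are assumptions):

```python
import numpy as np

def pr_curve(scores, labels, n_thresholds=50):
    """Sweep thresholds over the matching scores and record
    precision/recall at each one. `scores` and `labels` are 1-D
    arrays of equal length; labels are 1 for a true match and 0
    otherwise. Returns (precisions, recalls) to plot later."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    precisions, recalls = [], []
    for t in np.linspace(scores.min(), scores.max(), n_thresholds):
        pred = scores >= t                   # declare a match above threshold
        tp = np.sum(pred & (labels == 1))    # true positives
        fp = np.sum(pred & (labels == 0))    # false positives
        fn = np.sum(~pred & (labels == 1))   # missed matches
        if tp + fp == 0:
            continue                         # no predictions at this threshold
        precisions.append(tp / (tp + fp))
        recalls.append(tp / (tp + fn))
    return precisions, recalls
```

Each (recall, precision) pair from the sweep is one point on the curve.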
