
Reproducibility #15

Open
slice4e opened this issue Dec 5, 2023 · 3 comments

@slice4e
Contributor

slice4e commented Dec 5, 2023

Thank you for the great work.
I would like to reproduce the results and compare them to another engine. Following the documentation, it appears that you have already included a test in ann-benchmarks:

The code to run OG-LVQ can be found here, and it was included in the ANN-benchmarks and Big-ANN-benchmarks evaluation codes following their guidelines.

Unfortunately, I am not able to find it. Could you please provide a link?

Thank you.

@marianotepper
Contributor

Thank you for your interest!

For our results, we made internal forks of the ANN-benchmarks and Big-ANN-benchmarks frameworks, as we collect more metrics than those in the original versions. As these forks are not part of SVS proper, we never got to release them. We're working on releasing sample code for the integration with these frameworks.
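For anyone who wants to experiment before the official integration code is released: algorithms in the ANN-benchmarks framework are exposed as a class implementing a `fit`/`query` interface (the framework's `BaseANN` base class). Below is a minimal, self-contained sketch of that shape. The class name is illustrative, and a stdlib brute-force search stands in for the SVS/OG-LVQ engine; this is not the authors' actual integration.

```python
# Hypothetical sketch of an ANN-benchmarks-style algorithm wrapper.
# The fit()/query() method shape mirrors the framework's BaseANN contract;
# a brute-force linear scan stands in for the real SVS/OG-LVQ index.
import math


class BruteForceANN:
    """Minimal stand-in following the fit/query interface that
    ANN-benchmarks algorithm wrappers implement."""

    def __init__(self, metric="euclidean"):
        self.metric = metric
        self._data = []

    def fit(self, X):
        # A real integration would build the vector index here.
        self._data = [list(map(float, row)) for row in X]

    def query(self, v, n):
        # Return the indices of the n nearest stored vectors to v.
        v = list(map(float, v))
        order = sorted(range(len(self._data)),
                       key=lambda i: math.dist(v, self._data[i]))
        return order[:n]


algo = BruteForceANN()
algo.fit([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
print(algo.query([0.9, 1.1], 2))  # indices of the two nearest points
```

In an actual integration, `fit` would construct the index (with quantization parameters such as those used by OG-LVQ) and `query` would call into the engine's search routine; the framework then sweeps parameters and records recall/throughput.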

@akashsha1

+1. My team would also be interested in the integration code for the ANN-benchmarks framework when you release it.

@mihaic mihaic self-assigned this Jun 10, 2024
@mihaic
Member

mihaic commented Jun 10, 2024

I am working on releasing the integration code. I will update this issue as I make progress.
