
Create Evals #130

Open
mathewpareles opened this issue Oct 28, 2024 · 0 comments
mathewpareles commented Oct 28, 2024

We should create evals to judge how well our LLMs perform on inline completions (Ctrl+K), whole-file edits (Ctrl+L), and autocomplete (Tab).

Let me know if you have suggestions for good evals to use.
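A minimal sketch of what a harness for the inline-completion case could look like. Everything here is illustrative, not Void's actual API: `EvalCase`, `CompletionFn`, and the exact-match plus edit-distance scoring are assumptions about how cases might be shaped and scored.

```typescript
// Sketch of an inline-completion eval harness. The case format and the
// completion function are hypothetical stand-ins, not Void's real interfaces.

interface EvalCase {
  name: string;
  prefix: string;   // code before the cursor
  suffix: string;   // code after the cursor
  expected: string; // reference completion
}

// Stand-in for however Void requests a completion from an LLM.
type CompletionFn = (prefix: string, suffix: string) => Promise<string>;

// Levenshtein distance, used below for a soft similarity score.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,       // deletion
        dp[i][j - 1] + 1,       // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

async function runEvals(cases: EvalCase[], complete: CompletionFn): Promise<void> {
  let exact = 0;
  let totalSim = 0;
  for (const c of cases) {
    const got = (await complete(c.prefix, c.suffix)).trim();
    const want = c.expected.trim();
    if (got === want) exact++;
    // Normalized similarity in [0, 1]: 1 means an exact match.
    const sim = 1 - editDistance(got, want) / Math.max(got.length, want.length, 1);
    totalSim += sim;
    console.log(`${c.name}: exact=${got === want} similarity=${sim.toFixed(2)}`);
  }
  console.log(
    `exact match: ${exact}/${cases.length}, ` +
    `mean similarity: ${(totalSim / cases.length).toFixed(2)}`
  );
}
```

The same shape could cover the other two tasks: whole-file edits scored by diffing the model output against a reference file, and autocomplete scored by whether the accepted suggestion matches the ground-truth continuation.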

@andrewpareles added the roadmap-under consideration label Oct 28, 2024
@mathewpareles self-assigned this Nov 4, 2024
@mathewpareles added the roadmap-planned label and removed the backlog and roadmap-under consideration labels Jan 10, 2025
@mathewpareles moved this to 🔮 Planned in 🚙 Void Roadmap Jan 10, 2025
@andrewpareles removed the roadmap-planned label Jan 22, 2025
@andrewpareles moved this from 🔮 Planned to Backlog in 🚙 Void Roadmap Jan 22, 2025