Finetuning scripts #9
Yeah, it'd be nice to facilitate that process, but I think it'd be best if we provided a dataset and something like an example of how to run a LoRA fine-tune, or even just a link to one. There are many other projects that are better suited to handle the fine-tuning process.
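For reference, a minimal sketch of what such a LoRA fine-tune could look like, assuming the Hugging Face transformers/peft/datasets stack; the model id, the `r2_dataset.jsonl` file, and the prompt template are placeholders for illustration, not anything r2ai ships:

```python
# Hedged sketch: LoRA fine-tune of a causal LM on a local JSONL dataset.
# Assumptions: transformers, peft and datasets are installed; the JSONL
# rows have "instruction", "input" and "output" fields.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Wrap the model with LoRA adapters on the attention projections.
peft_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, peft_cfg)

ds = load_dataset("json", data_files="r2_dataset.jsonl", split="train")

def tokenize(row):
    # Flatten one sample into a single instruct-style training string.
    text = (f"### Instruction:\n{row['instruction']}\n"
            f"### Input:\n{row['input']}\n"
            f"### Response:\n{row['output']}")
    return tok(text, truncation=True, max_length=512)

ds = ds.map(tokenize, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="r2-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=ds,
    # mlm=False makes the collator copy input_ids into labels (causal LM).
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("r2-lora-adapter")  # saves only the adapter weights
```

A nice property of this approach is that the saved adapter is only a few megabytes, so it could be hosted alongside the project without redistributing full model weights.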
On the other hand, it'd be good if we trained and maintained an r2 model and provided the weights. I have some AWS credits I don't mind burning for this, but would need help preparing the dataset.
Agree on that. What we should focus on is documenting and providing ways to generate all this training data in a way that can be consumed to finetune our own models. Ideally the functionary one or the mistral, or even utopia; those are the ones I'm using the most and would love to improve.
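One possible way to bootstrap that training data is to drive radare2 through r2pipe and dump instruction/response pairs as JSONL. A hedged sketch follows; the schema (matching the fine-tune sketch above) and the "explain this function" task are assumptions, not a format r2ai defines:

```python
# Hedged sketch: generate instruct-style samples from a binary via r2pipe.
# The "output" field is left empty here; it would be filled by a human
# or a teacher model before training.
import json
import r2pipe

r2 = r2pipe.open("/bin/ls")  # any sample binary
r2.cmd("aaa")                # run full analysis

samples = []
for fn in r2.cmdj("aflj") or []:   # JSON list of analyzed functions
    name = fn["name"]
    disasm = r2.cmd(f"pdf @ {name}")  # disassembly of the function
    samples.append({
        "instruction": f"Explain what the function {name} does.",
        "input": disasm,
        "output": "",
    })

with open("r2_dataset.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")
```

Running the same loop over the r2 codebase, the book, and decompiler output would give the mix of sources discussed below.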
Extending base models with custom information like the r2 source code, the book, disassembly data, decompilation output and more is interesting, so r2ai should provide the basic infrastructure to finetune models without the hassle of writing code. This guide is quite comprehensible and easy to follow:
https://medium.com/@mohammed97ashraf/your-ultimate-guide-to-instinct-fine-tuning-and-optimizing-googles-gemma-2b-using-lora-51ac81467ad2