
EPIC: Auto mode #5

Open
2 of 6 tasks
trufae opened this issue Feb 28, 2024 · 7 comments
@trufae
Contributor

trufae commented Feb 28, 2024

  • support custom hints to improve automation
  • change prompt from :auto to '
  • implement errors from r2lang.cmd
  • support chatml-function-calling via llama-cpp #4
  • support functionary v2 via llama-cpp
  • evaluate different models and compile a list of supported ones and figure out what prompts would work best for them
@trufae
Contributor Author

trufae commented Apr 10, 2024

I've added doc/auto/*.txt and integrated the indexer context message so the local functionary model knows which commands to run and how to achieve certain things with r2. This improves the local-model auto interaction quite a lot. Please give it a try; thanks to this I was able to do several more things properly without GPT :)
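As a rough illustration of the idea (this is a hedged sketch, not r2ai's actual code; the directory layout and prompt wording are assumptions), the per-command hint files could be folded into a single context message like this:

```python
# Sketch: concatenate doc/auto/*.txt hint files into one system-prompt
# context message so a local model knows which r2 commands are available.
# The prompt text and helper name here are hypothetical.
import glob
import os


def build_auto_context(hints_dir: str) -> str:
    """Concatenate every *.txt hint file into one context message."""
    parts = []
    for path in sorted(glob.glob(os.path.join(hints_dir, "*.txt"))):
        with open(path, encoding="utf-8") as f:
            parts.append(f.read().strip())
    return "You are driving radare2. Known command hints:\n" + "\n\n".join(parts)
```

Sorting the file list keeps the assembled prompt deterministic between runs, which matters when comparing model behavior across sessions.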

Also, I've updated some models, and mistral2 supports 32K contexts, so maybe we can use that to compress large disassemblies or decompilation outputs and get improved output processing. The max-tokens and context-window size is another issue we face with local models; until llama-cpp is improved to handle larger contexts properly, we must find ways to work around those limitations.
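One common workaround for the context-window limit is to split a long listing into chunks that each fit the budget and process them in turn. A minimal sketch (not r2ai's implementation; the 4-chars-per-token ratio is a rough heuristic, not a real tokenizer):

```python
# Sketch: split a long disassembly/decompilation listing into chunks that
# each fit within an approximate token budget. Splits on line boundaries
# so no instruction is cut in half. All names here are hypothetical.
def chunk_listing(text: str, max_tokens: int = 32768, chars_per_token: int = 4):
    budget = max_tokens * chars_per_token  # approximate character budget
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > budget and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Each chunk could then be summarized separately and the summaries concatenated, trading one oversized prompt for several small ones.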

Unfortunately I can't use some third-party APIs because I'm in Europe and some AI vendors don't give us access :__ so that's yet another reason to use local models.

@dnakov
Collaborator

dnakov commented Apr 10, 2024

That's awesome, just tried functionary and it's actually usable with these simple instructions! I thought it'd be a bigger effort :)

@trufae
Contributor Author

trufae commented Apr 11, 2024

To take into consideration for another approach to auto mode without function calling: https://github.com/user1342/Monocle

@trufae
Contributor Author

trufae commented Apr 11, 2024

If everything looks good to you, I would like to release 0.6 and publish it on pip, so we can start messing with other stuff from this stable point once again :) Sound good to you?

@dnakov
Collaborator

dnakov commented Apr 11, 2024

sounds great!

@nitanmarcel
Contributor

On this topic, what do you guys think of this?

https://docs.phidata.com/introduction

I'm more interested in this:

https://docs.phidata.com/knowledge/introduction

The thought of having a way we can help the assistant know how to use radare excites me 😂

@dnakov
Collaborator

dnakov commented Sep 11, 2024

@nitanmarcel there's already a RAG in r2ai that can be used for that, I think. I remember pancake got one working a few months ago.
