Autogen using Instructor #741
scruffynerf
started this conversation in
Show and tell
Replies: 1 comment
Ah, it helps to double-check the functions; I somehow missed one. Will double-test and push my much better code in a bit.
repo: https://github.com/scruffynerf/llm_framework_examples [see autogen_instructor_2agentchat.py]
First attempt at this, but I couldn't find anyone else doing this so far...
I am not an Instructor/Pydantic wizard by any means, nor an Autogen wizard, and any suggestions or improvements would be appreciated.
Basically, we use the register_model_client() function and give it a class that uses Instructor. I wish it were easier, like just overriding the api_type in the config (I tried that approach, but it was messier). Right now, Autogen doesn't let you replace the client with the instructor.from_openai wrapper simply by passing a new api_type, but that would really be ideal. It would also help if they simplified the current two-step dance of "configure it, then tell us again with an explicit function call". And yes, that's genuinely how they do it: look at the Autogen code. They insert a placeholder during config processing, then wait for you to say "you know that client class I told you about before, when I set up the llm_config for this agent? Yes, I really do want to use it on this agent."
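The registration flow above can be sketched as follows. This is a minimal, illustrative sketch, not the repo's actual code: the class name InstructorModelClient and the config keys are my placeholders, while the four methods (create, message_retrieval, cost, get_usage) follow the duck-typed model-client protocol that Autogen's register_model_client() expects. The library imports are deferred into create() so the class itself is importable without instructor/openai installed.

```python
# Sketch of a custom Autogen model client backed by Instructor.
# Assumptions: Autogen's custom-client protocol (create, message_retrieval,
# cost, get_usage); class/attribute names here are illustrative only.
from types import SimpleNamespace


class InstructorModelClient:
    """Duck-typed model client that routes completions through Instructor."""

    def __init__(self, config, **kwargs):
        self.model = config["model"]
        # A Pydantic class for structured output (hypothetical config key).
        self.response_model = config.get("response_model")

    def create(self, params):
        # Deferred imports: both are real packages (openai, instructor).
        import instructor
        from openai import OpenAI

        client = instructor.from_openai(OpenAI())
        result = client.chat.completions.create(
            model=self.model,
            messages=params["messages"],
            response_model=self.response_model,
        )
        # result is a Pydantic instance, not a ChatCompletion, so we fake
        # the response shape Autogen expects and stash the JSON as content.
        message = SimpleNamespace(content=result.model_dump_json(),
                                  function_call=None, tool_calls=None)
        return SimpleNamespace(choices=[SimpleNamespace(message=message)],
                               model=self.model)

    def message_retrieval(self, response):
        # Autogen calls this to pull message texts out of the response.
        return [choice.message.content for choice in response.choices]

    def cost(self, response):
        return 0.0  # cost tracking omitted in this sketch

    @staticmethod
    def get_usage(response):
        return {}  # usage accounting omitted in this sketch
```

To use it, the agent's llm_config entry would carry something like "model_client_cls": "InstructorModelClient" (the placeholder step), and after constructing the agent you would still call agent.register_model_client(model_client_cls=InstructorModelClient): the two-step dance described above.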
Annoying thing maybe someone can correct: if I pass a response model to Instructor, it works great, but the rest of the response is lost, because Autogen chokes on it being passed straight back. Instead I fake a full response and put the JSON dump in as the message content, and Autogen is fine with that. If someone has a better way, or if I'm missing some way to get this to 'just work', please let me know. Maybe I missed it, but it feels like using a response model makes the model instance become the entire response, so Autogen's expected OpenAI-style Chat Completion response is not passed along. Is there a way to keep the Instructor response model in the message content only?
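The "fake a full response" workaround can be sketched like this. The field names mirror the OpenAI Chat Completion response shape (choices, message, finish_reason); wrap_model_as_completion is a hypothetical helper name, and the payload stands in for a JSON-dumped Instructor response model.

```python
# Wrap a JSON-dumped response model in a ChatCompletion-like shape so
# downstream Autogen code sees a normal assistant message.
import json
from types import SimpleNamespace


def wrap_model_as_completion(model_json: str, model_name: str):
    """Build a minimal OpenAI-style response object around a JSON string."""
    message = SimpleNamespace(role="assistant", content=model_json,
                              function_call=None, tool_calls=None)
    choice = SimpleNamespace(index=0, message=message, finish_reason="stop")
    return SimpleNamespace(id="fake-response", model=model_name,
                           choices=[choice])


# Stand-in for response_model_instance.model_dump_json():
payload = json.dumps({"name": "Ada", "age": 36})
resp = wrap_model_as_completion(payload, "gpt-4o-mini")
# resp.choices[0].message.content now carries the structured data as text
```

The structured data survives as a string in message.content, which anything expecting a Chat Completion can read; the trade-off is that callers must json-parse it back out rather than receiving a typed model.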
I think the code is pretty clear on how it works and could be used to build your own from the example, but feel free to ask questions or submit patches to make it better.