Handle tool calls in playground #758
Conversation
@@ -28,6 +32,7 @@ export async function runDocumentAction({
  parameters,
}: RunDocumentActionProps) {
  const { workspace, user } = await getCurrentUserOrError()
  let tools = []
Requests to the SDK in our playground are stateless. We need to implement a cache to remember where the user stopped because there were one or more tool calls requested by the AI.
(type === ChainEventTypes.StepComplete ||
  type === ChainEventTypes.Complete) &&
data.response.streamType === 'text'
) {
I think this is the combination of AI responses that produces tool calls:
- Step completed of type text
- Chain completed of type text

Am I missing something?
@@ -305,6 +313,8 @@ async function runStep({
} else {
  streamConsumer.stepCompleted(response)

  // TODO: Here instead of running the state if the step has
  // toolCalls we serialize the chain up until this point
Whether the AI response is cached or fresh, in both cases the step response can be stepCompleted or chainCompleted, and we're doing the same thing. Now we also want to handle tool calling and chain serialization, so I think we need to DRY this into a new unified method that handles all the cases.
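To make the suggestion concrete, here is a minimal sketch of what such a unified method could decide, assuming invented names and shapes (StepResponse, resolveStepOutcome) that do not come from this PR:

```typescript
// Hypothetical sketch: one place that decides, for both fresh and cached
// AI responses, whether the step pauses for tool calls, completes the
// chain, or just completes the step. All names here are invented.

type StepResponse = {
  text: string
  toolCalls: { id: string; name: string }[]
}

type StepOutcome = 'toolCallsRequested' | 'chainCompleted' | 'stepCompleted'

function resolveStepOutcome(
  response: StepResponse,
  chainCompleted: boolean,
): StepOutcome {
  // Tool calls take priority: the chain must pause and wait for responses.
  if (response.toolCalls.length > 0) return 'toolCallsRequested'
  return chainCompleted ? 'chainCompleted' : 'stepCompleted'
}
```

The caller (the current if/else around stepCompleted/chainCompleted) would then switch on the returned outcome in a single place.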
This API does not follow the pattern of other methods in the SDK. Keep in mind the SDK follows this format: required arguments are positional, and any optional arguments go in a last options argument that is an object. In this case I imagine this would look something like:
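The reviewer's example was lost in the page capture. A hypothetical sketch of the convention being described, with invented method and option names:

```typescript
// Hypothetical sketch of the SDK convention: required arguments are
// positional, everything optional goes in a trailing options object.
// respondToToolCall and its options are invented names, not the real API.

type RespondToToolCallOptions = {
  stream?: boolean
  onFinished?: (messages: unknown[]) => void
}

function respondToToolCall(
  documentLogUuid: string, // required -> positional
  toolCallResponses: { id: string; result: unknown }[], // required -> positional
  options: RespondToToolCallOptions = {}, // optional -> last options object
) {
  const { stream = false } = options
  return { documentLogUuid, count: toolCallResponses.length, stream }
}
```

Usage would look like `respondToToolCall('uuid-1', [{ id: 't1', result: 'ok' }], { stream: true })`.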
Isn't this the same event as when a step is completed? Since a tool call request would be the response from a step, and providing the response would start a new step in the chain.
If we emit stepCompleted the stream is not stopped. I'm talking about our playground. For that I send … This works, but maybe it's not a good idea.
documentLogUuid: string
}) {
  const key = generateCacheKey({ documentLogUuid, workspace })
  const serialized = chain.serialize()
chain.serialize() is a new method introduced in this PR.
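Based on the diff above, the cache helper could be sketched like this, with an in-memory Map standing in for whatever real cache backend is used, and with the Chain shape and key format invented for illustration:

```typescript
// Minimal sketch of the chain cache described in this thread.
// generateCacheKey and chain.serialize() mirror the diff above;
// the Chain type, key format, and Map-backed store are assumptions.

type Chain = { serialize(): string }

const chainCache = new Map<string, string>()

function generateCacheKey({
  workspaceId,
  documentLogUuid,
}: {
  workspaceId: number
  documentLogUuid: string
}) {
  return `workspace:${workspaceId}:chain:${documentLogUuid}`
}

function cacheChain(chain: Chain, workspaceId: number, documentLogUuid: string) {
  const key = generateCacheKey({ workspaceId, documentLogUuid })
  // Store the serialized chain so a later stateless request can resume
  // from the step where tool calls were requested.
  chainCache.set(key, chain.serialize())
  return key
}
```

When the tool call responses arrive, the serialized chain is looked up by the same key and deserialized to resume the run.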
// and add more messages to the same stream session.
//
// For doing this we need a new gateway endpoint to send the tool calls to the server and run the chain
// also the SDK method to be called here
I'll remove this comment. For now I opted for a different action. Let's see how it feels.
Force-pushed c6b81db → 5f11b4e
packages/core/src/helpers.ts
Outdated
const text = data.text
const object = type === 'object' ? data.object : undefined
const toolCalls = type === 'text' ? (data.toolCalls ?? []) : []
let content: MessageContent[] = []
nit: you can just avoid the content variable and push directly to the message.content
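The nit can be illustrated with a simplified sketch (the message shape and builder function are invented, not the PR's actual code):

```typescript
// Simplified illustration of the nit: instead of accumulating into a
// local `content` array and assigning it at the end, push directly into
// message.content. Types and function are invented for this sketch.

type MessageContent =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; id: string }

function buildMessage(text: string | undefined, toolCalls: { id: string }[]) {
  const message = { role: 'assistant' as const, content: [] as MessageContent[] }
  // Push directly, no intermediate `content` variable needed.
  if (text) message.content.push({ type: 'text', text })
  for (const call of toolCalls) {
    message.content.push({ type: 'tool-call', id: call.id })
  }
  return message
}
```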
Force-pushed 5f622ed → 1afb6cd
const { completed, messages, config } = await (chain as PromptlChain).step(
  prevText,
  prevContent as PromptlMessage[] | undefined | string,
We can now pass promptl an array of messages as the previous response.
packages/core/src/helpers.ts
Outdated
typeof toolCallResponse.result === 'string'
  ? JSON.parse(toolCallResponse.result)
  : toolCallResponse.result,
isError: toolCallResponse.isError || false,
I don't like the fact that we're building response messages with the old compiler's types, but I don't want to change this now.
We should do a PR killing all compiler types and start using promptl ASAP.
Force-pushed 4b1da6b → c3d363d
Force-pushed c3d363d → 5043e6f
// FIXME: Kill compiler please. I made this module compatible with
// both old compiler and promptl. But because everything is typed with
// old compiler, prompts in promptl version are also formatted as old compiler
I second this.
Force-pushed 5043e6f → 0bae63b
<Text.H6>
  You have{' '}
  {totalToolRequests <= 1 ? (
    <>
      <Badge variant='muted'>{totalToolRequests}</Badge> tool to be
      responded
    </>
  ) : (
    <>
      <Badge variant='muted'>{currentToolRequest}</Badge> of{' '}
      <Badge variant='muted'>{totalToolRequests}</Badge> tools to be
      responded
    </>
  )}
</Text.H6>
Now that this is the first thing you read on the Tool Response input, since it is the top text, I'd move the (?) tooltip here instead, to explain what "tools to be responded" means.
const [isEditorMounted, setIsEditorMounted] = useState(false) // to avoid race conditions
// to avoid race conditions
const [isEditorMounted, setIsEditorMounted] = useState(false)
xd
lgtm!
Force-pushed 0bae63b → 97f0894
Force-pushed 97f0894 → c8bede9
What?
We have a task to allow users to give a response to one or more tool calls that the AI can request in our playground. The work necessary to make this happen is also essential for making tool calling work in our SDK and Gateway. So this PR is a spike to get a picture of the moving parts necessary to make this happen.
Related issue
#598
Possible affected clients
These are the places I think these changes affect:
TODO
- documentLogUuid
- addMessage? if a cached chain exists for that message and there are tool calls.
- Shared prompts tool calling story 💀 Don't allow tool calling responses in shared prompts UI cc @cesr
- Tool calls when running a prompt in Batch in Playground 🧠 🧠 🧠 🤯
How can we do this?
In SDK
Talked with @csansoon; this is how it would look to have tool calls in the SDK.
Considerations
- Use Promise.all and wait for all the calls to be finished before continuing.
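The Promise.all consideration can be sketched as follows, with the ToolCall shape and handler registry invented for illustration:

```typescript
// Sketch of the consideration above: run all requested tool calls in
// parallel with Promise.all and wait for every result before resuming
// the chain. ToolCall, ToolHandler, and runToolCalls are invented names.

type ToolCall = { id: string; name: string; arguments: unknown }
type ToolHandler = (args: unknown) => Promise<unknown>

async function runToolCalls(
  toolCalls: ToolCall[],
  handlers: Record<string, ToolHandler>,
) {
  // Promise.all rejects if any handler fails, so the chain only
  // resumes when every tool call has finished successfully.
  const results = await Promise.all(
    toolCalls.map(async (call) => ({
      id: call.id,
      result: await handlers[call.name]!(call.arguments),
    })),
  )
  return results
}
```

The resulting id/result pairs would then be sent back (e.g. through the new gateway endpoint discussed above) to continue the chain.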