Openai: Support IO capture when streaming function / tool call #2055
Replies: 7 comments 10 replies
-
Thanks for reporting.
-
Thanks for reporting @DanrForetellix. This is a known limitation (tool call + streaming). Why exactly do you decide to stream the rather concise function call result?
-
The issue is that I am streaming the response, but I don't know beforehand whether it will be a function call or not; if it is not a function call, I stream it to the user, and if it is, I still use the streaming API to build the function call iteratively with the same code path.
This is indeed not well documented, but we are using a common solution that builds the function call response from the received chunks (we use function calling mainly to ensure the right output format, so the function parameters can include a lot of tokens; we might decide to stream that output to the user as well in the future).
Hope that makes sense. A minimal sketch of the pattern follows.
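To make the chunk-accumulation pattern concrete, here is a minimal sketch. It assumes the OpenAI Python SDK v1 streaming chunk shape and takes the iterator returned by chat.completions.create(..., stream=True); build_from_stream is an illustrative name, not a Langfuse or OpenAI function.

def build_from_stream(stream):
    """Accumulate streamed deltas into plain text and/or complete tool calls."""
    text_parts = []
    tool_calls = {}  # tool-call index -> {"name": str, "arguments": str}
    for chunk in stream:
        if not chunk.choices:  # e.g. a trailing usage-only chunk
            continue
        delta = chunk.choices[0].delta
        if delta.content:
            # Ordinary assistant text: can be forwarded to the user as it arrives.
            text_parts.append(delta.content)
        for tc in delta.tool_calls or []:
            entry = tool_calls.setdefault(tc.index, {"name": "", "arguments": ""})
            if tc.function and tc.function.name:
                entry["name"] += tc.function.name
            if tc.function and tc.function.arguments:
                # Arguments arrive as JSON fragments; concatenate them, parse at the end.
                entry["arguments"] += tc.function.arguments
    return "".join(text_parts), tool_calls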
-
Thanks for the explanation, @DanrForetellix! I'll convert this issue to a discussion so that we can gauge interest from other users in supporting this use case, which helps us prioritize building a solution for it.
-
Hi, can this be resolved? After some more debugging, I found it is indeed an error coming from langfuse.openai (see the traceback).
The messages object was something similar to this.
If you use the same message_list with the official OpenAI client it works, but here we get an error that it has no attribute 'get'. A guess at what triggers it is sketched below.
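Hedged guess at the root cause (not verified against the Langfuse source): if the wrapper assumes messages are plain dicts and calls .get(...) on each one, passing the SDK's message objects back into messages raises exactly this error. The extract_role helper below is a hypothetical stand-in for that internal logic:

from openai.types.chat import ChatCompletionMessage

msg_dict = {"role": "user", "content": "hi"}
msg_obj = ChatCompletionMessage(role="assistant", content="hello")

def extract_role(message):
    # Hypothetical stand-in: fine for dicts, but pydantic message objects
    # have no .get method, so SDK objects raise AttributeError here.
    return message.get("role")

print(extract_role(msg_dict))  # -> "user"
try:
    extract_role(msg_obj)
except AttributeError as err:
    print(err)  # 'ChatCompletionMessage' object has no attribute 'get'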
-
Hi @DanrForetellix - a fix just went out with our latest release: https://github.com/langfuse/langfuse-python/releases/tag/v2.36.2. Thanks a lot for your contribution and the detailed debugging; it helped a lot in fixing that issue! 🙏🏾
-
Hey, is there any update on tracing function calls with LangChain and streamed output? I can't see the input tools JSON either.
-
Describe the bug
When using function calling together with streaming, the output is not captured and tokens are not counted.
To reproduce
from langfuse import Langfuse
from langfuse.decorators import observe
from langfuse.openai import openai  # OpenAI integration
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in boston"},
]

@observe()
def get_openai_response(tools):
    return client.chat.completions.create(
        model="gpt-3.5-turbo-16k",
        tools=tools,
        tool_choice="auto",
        messages=messages,
        stream=True,
    )

@observe()
def main():
    num_chunk = 0
    for chunk in get_openai_response(tools):
        num_chunk += 1
    print(f"num chunk = {num_chunk}")

main()
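One practical note, not part of the original report, so treat it as an assumption: the Langfuse SDK batches events in the background, and a short-lived script like this can exit before the trace is sent. With the v2 decorator API, flushing at the end avoids that:

from langfuse.decorators import langfuse_context

# Block until all queued events have been delivered before the process exits.
langfuse_context.flush()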
Additional information
No response