Max Output tokens #5538
Unanswered · githubdebugger asked this question in General Question · Replies: 0
Is this the setting we need to use to set the max output tokens for the models?
The reason I ask: for some reason, the output I get from the same model with the same prompt is noticeably shorter on LobeChat (even for Ollama models) than on other platforms, whether via the API directly or via Ollama/Open WebUI. Even when I add keywords like "detailed, exhaustive" to the prompt, I still get very short output compared to other tools.
Could someone please confirm whether this is the only output-token setting available, or is there somewhere else we can set it?
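For reference, this is roughly the kind of request I use on the other platforms for comparison. It is only a sketch: the model name and prompt are placeholders, but `num_predict` (Ollama's max-output-token option) and `max_tokens` (the OpenAI-compatible parameter) are the documented names on those APIs.

```python
import json

# Payload for Ollama's /api/generate endpoint; "num_predict" caps output tokens.
ollama_payload = {
    "model": "llama3",  # placeholder model name
    "prompt": "Explain the topic in detail",
    "options": {"num_predict": 4096},
    "stream": False,
}

# Equivalent payload for an OpenAI-compatible chat endpoint; "max_tokens"
# is the output-token cap there.
openai_style_payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Explain the topic in detail"}],
    "max_tokens": 4096,
}

print(json.dumps(ollama_payload, indent=2))
print(json.dumps(openai_style_payload, indent=2))
```

With these caps set explicitly, I get long responses on both endpoints, which is why I suspect LobeChat is applying a lower limit somewhere.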