Gpt math solver #991
base: main
Conversation
@microsoft-github-policy-service agree
flaml/autogen/oai/completion.py
Outdated
try:
    # try to format the message with the data instance
    content = m["content"].format(**data_instance)
except Exception:
    # if it fails, use the raw message
    content = m["content"]
Add a warning message here to remind the user that the program failed to format the message with the data instance and is therefore using the raw message?
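For illustration, a minimal sketch of what that warning could look like; the logger setup and the example values of `m` and `data_instance` are assumptions, not code from the PR:

import logging

logger = logging.getLogger(__name__)

# hypothetical inputs standing in for the real message/data in completion.py
m = {"content": "Problem: {problem}"}
data_instance = {"unrelated_key": "value"}  # missing "problem" -> format() raises KeyError

try:
    # try to format the message with the data instance
    content = m["content"].format(**data_instance)
except Exception:
    # assumption: surface the fallback instead of failing silently
    logger.warning("Failed to format message with the data instance; using the raw message.")
    content = m["content"]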
There might be a long sequence of texts, which would produce a lot of warnings. We should assume that when the user inputs messages instead of a prompt, these messages are less likely to contain format strings. We can add a note to the docs to explain this.
Thanks for the contribution. Could you please run pre-commit first (https://microsoft.github.io/FLAML/docs/Contribute#pre-commit) to ensure the code passes the "Code formatting" test?
Add a test case?
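For illustration, one possible shape for such a test; `format_message` here is a hypothetical helper wrapping the try/except fallback shown above, not an existing function in the PR:

def format_message(message, data_instance):
    # hypothetical wrapper around the fallback logic in completion.py
    try:
        return message["content"].format(**data_instance)
    except Exception:
        return message["content"]


def test_format_message_falls_back_to_raw():
    message = {"content": "Solve {problem}"}
    # a matching key formats normally
    assert format_message(message, {"problem": "1 + 1"}) == "Solve 1 + 1"
    # a missing key raises inside format(), so the raw message is returned
    assert format_message(message, {"unrelated": "x"}) == "Solve {problem}"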
Is this PR intended for merging? If not, please make it a draft.
Please resolve the conflict.
Use pre-commit to format the code automatically. Check the PR template to find out how to do it.
"model": model,
"max_tokens": max_tokens,
"messages": [
    {"role": "system", "content": "You are a helpful assistant."},
What about making the prompt before the "Problem:" a system message?
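A sketch of that suggestion; `instructions` and `problem` are hypothetical placeholders for the text before and after "Problem:" in the current prompt:

# hypothetical values; in the PR these come from the prompt template
instructions = "You are a helpful assistant. Solve the math problem step by step."
problem = "What is the value of 2 + 2?"

messages = [
    # everything that precedes "Problem:" moves into the system message
    {"role": "system", "content": instructions},
    # the user turn carries only the problem itself
    {"role": "user", "content": f"Problem: {problem}"},
]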
flaml/autogen/math_solver/main.py
Outdated
results = selfconsistency.vanilla_voting(responses["responses"], problem["solution"])
print(results["success_vote"], results["votes"])


def tool_voting_one_category(model, problem_set, saving_folder, n=2, n_per_time=1):
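For context, `vanilla_voting` appears to implement self-consistency voting: tally the sampled answers and count the problem as solved if the majority answer matches the solution. A minimal sketch under that assumption; `extract_answer` is a hypothetical normalizer, and the PR presumably parses the model output more carefully:

from collections import Counter


def extract_answer(text):
    # hypothetical normalizer; the real code would parse the final answer
    return text.strip().lower()


def vanilla_voting(responses, solution):
    # tally the normalized answers across all sampled responses
    votes = Counter(extract_answer(r) for r in responses)
    top_answer, _ = votes.most_common(1)[0]
    return {
        "success_vote": top_answer == extract_answer(solution),
        "votes": dict(votes),
    }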
I skipped
Co-authored-by: Chi Wang <[email protected]>
…into gpt_math_solver
Why are these changes needed?
Related issue number
Closes #990
Checks