
Gpt math solver #991

Draft · wants to merge 136 commits into base: main

Commits (136)
7dc4035
handle format error in message in _construct_params
yiranwu0 Apr 11, 2023
83ff983
fix typo
yiranwu0 Apr 12, 2023
ab2cada
Add math solver with automatic tool queries.
yiranwu0 Apr 16, 2023
2d70c99
add imports in QueryHandler
yiranwu0 Apr 16, 2023
c823cbf
update math solver
yiranwu0 Apr 23, 2023
766b022
require wolfram id in readme
yiranwu0 Apr 23, 2023
84ba0be
Merge branch 'main' into gpt_math_solver
yiranwu0 Apr 23, 2023
8f67ed7
fix bug in running python code
yiranwu0 Apr 23, 2023
a511f0a
Update flaml/autogen/math_solver/MathSolver.py
yiranwu0 Apr 23, 2023
87ad79d
Update flaml/autogen/math_solver/README.md
yiranwu0 Apr 23, 2023
a16fa5f
revise according to comments
yiranwu0 Apr 23, 2023
e21fd76
Merge branch 'gpt_math_solver' of github.com:kevin666aa/FLAML into gp…
yiranwu0 Apr 23, 2023
45dcb7f
fix code format
yiranwu0 Apr 23, 2023
435e7a4
Add prompt to system message
yiranwu0 Apr 23, 2023
d1747cf
refactor file names
yiranwu0 Apr 24, 2023
56627a7
refine prompts
yiranwu0 Apr 24, 2023
9821820
add baseline PoT
yiranwu0 Apr 24, 2023
e37ee3e
fix bugs in query_handler
yiranwu0 Apr 24, 2023
5d44e5e
refine prompts
yiranwu0 Apr 24, 2023
bab2878
refine prompt to output fractions
yiranwu0 Apr 24, 2023
d0b0d4b
change prompt
yiranwu0 Apr 24, 2023
3e171a3
add temperature as args
yiranwu0 Apr 24, 2023
2261c5c
fix concat float to str
yiranwu0 Apr 24, 2023
8c5a86c
change prompt back to use fractions instead of decimal
yiranwu0 Apr 24, 2023
2b8b717
rewind prompt back to e37ee3
yiranwu0 Apr 25, 2023
8b68ff7
pass args.samples_per_category in PoT
yiranwu0 Apr 25, 2023
54407a7
fix counting bug in PoT and print in math_solver
yiranwu0 Apr 25, 2023
4806631
fix error: convert exception to str
yiranwu0 Apr 25, 2023
80a7063
add logger to log stdouts and compress files
yiranwu0 Apr 25, 2023
d737644
refine logging
yiranwu0 Apr 25, 2023
d146e35
add option to put prompt in either system or user message, add option…
yiranwu0 Apr 26, 2023
26c0caa
clean up main.py
yiranwu0 Apr 26, 2023
2a1a47e
create pseudo_main.py
yiranwu0 Apr 26, 2023
edfc679
fix category loading bug
yiranwu0 Apr 26, 2023
6a15761
handle timeout
yiranwu0 Apr 26, 2023
ab64723
two new prompts
yiranwu0 Apr 26, 2023
f723a8f
add bash
yiranwu0 Apr 27, 2023
1a5c93c
more prompts
yiranwu0 Apr 27, 2023
955edca
change run sequence
yiranwu0 Apr 27, 2023
8519967
add more prompts
yiranwu0 Apr 28, 2023
912193e
catch wolfram error
yiranwu0 Apr 28, 2023
c8f90b4
more runs on v2.1 select, v1.2 select, add new v3select
yiranwu0 Apr 28, 2023
7a8c2ac
compress when all finished
yiranwu0 Apr 28, 2023
b9a7e04
py exec output fix
yiranwu0 Apr 28, 2023
65f1580
v3.1 select
yiranwu0 Apr 29, 2023
73088ce
new both prompt, v3.2select
yiranwu0 Apr 29, 2023
144c148
change execute to run
yiranwu0 Apr 29, 2023
812477a
refine query handling and v3.3select
yiranwu0 Apr 30, 2023
25e2708
catch wolfram errors
yiranwu0 Apr 30, 2023
1c00283
ablation on only using python and zeroshot baseline
yiranwu0 May 1, 2023
1330a00
change run sequence
yiranwu0 May 1, 2023
e61212f
new run
yiranwu0 May 1, 2023
2b5dd52
new run
yiranwu0 May 1, 2023
ac11d2a
consistent output folder in PoT
yiranwu0 May 1, 2023
9d291b9
rerun pot, refined prompt v1.3 v1.4 and v3.4
yiranwu0 May 2, 2023
ce7144a
resume 22 if not finished
yiranwu0 May 2, 2023
6fefde3
handle wolfram exception
yiranwu0 May 2, 2023
eaae6ce
one run for v1.5
yiranwu0 May 2, 2023
8fdf74f
one run for v1.5 corrections
yiranwu0 May 2, 2023
ca75c91
two more prompts v3.5select and v3.1python based on v3python
yiranwu0 May 3, 2023
47179ce
remove error string clipping
yiranwu0 May 3, 2023
a8c3758
handle UnicodeDecodeError
yiranwu0 May 3, 2023
132638a
handle UnicodeDecodeError
yiranwu0 May 3, 2023
280f9de
quick test on adding wolfram to v3.1python
yiranwu0 May 3, 2023
45a4abd
rerun v3.1 with refine, add v3.7select to further test wolfram
yiranwu0 May 4, 2023
b0efcbf
switch run seq v3.7select then v3.1python
yiranwu0 May 4, 2023
10c28ae
add v3.2python, slightly refine from v3.1. try out v3.3python
yiranwu0 May 4, 2023
bfe61aa
more args for PoT and refine load_level5 func
yiranwu0 May 5, 2023
39ea367
trial 38-42: validate our methods on all levels of problems, run large…
yiranwu0 May 5, 2023
0cebecb
update run.sh
yiranwu0 May 5, 2023
bddb610
move
sonichi May 6, 2023
326da82
add v4
yiranwu0 May 7, 2023
bd040b5
Merge branch 'gpt_math_solver' of github.com:kevin666aa/FLAML into gp…
yiranwu0 May 7, 2023
c8ba447
test with new system message
yiranwu0 May 7, 2023
62b5259
add baseline pnas, run v4.2 on level5 problems, test new sys message …
yiranwu0 May 8, 2023
ef509d4
fix trial 49
yiranwu0 May 8, 2023
e60850f
remove print
yiranwu0 May 8, 2023
5fe0b0b
run v3 with specified sentence removed, 4.2 with original sys message…
yiranwu0 May 9, 2023
d92b559
remove trial 52
yiranwu0 May 9, 2023
ede98a5
endpoint
sonichi May 9, 2023
7082355
Merge branch 'gpt_math_solver' of https://github.com/kevin666aa/FLAML…
sonichi May 9, 2023
7d34485
fix bug in queryhandler
yiranwu0 May 9, 2023
8e34218
Merge branch 'gpt_math_solver' of github.com:kevin666aa/FLAML into gp…
yiranwu0 May 9, 2023
c592837
fix queryhandler 2
yiranwu0 May 9, 2023
6345e0b
v3.3python
yiranwu0 May 10, 2023
40fd299
remove print
yiranwu0 May 10, 2023
dac1551
test final prompts
yiranwu0 May 11, 2023
fff4e4b
change run sequence
yiranwu0 May 11, 2023
da0f7d9
run exact v3.1 as before
yiranwu0 May 11, 2023
2775e08
keep running v3.1python and add general_5
yiranwu0 May 11, 2023
ad10b71
add general_5
yiranwu0 May 11, 2023
7800a46
continue run 55 and 56
yiranwu0 May 12, 2023
4f78539
switch seq
yiranwu0 May 12, 2023
a76113f
trial 63 v3.5python, then run large-scale with v3.3python
yiranwu0 May 12, 2023
7d22c07
add v3.3, 3.7, 3.8
yiranwu0 May 13, 2023
908d283
revise 3.6-3.8
yiranwu0 May 13, 2023
079b4e2
v3.9
yiranwu0 May 13, 2023
f071214
test intermediate algebra and precalculus on v3.9
yiranwu0 May 13, 2023
6444f91
test v3.9 on 50 problems, then zero shot
yiranwu0 May 13, 2023
1c4a278
fix prompt
yiranwu0 May 13, 2023
c744613
endpoint
sonichi May 13, 2023
2ad469f
Merge branch 'gpt_math_solver' of https://github.com/kevin666aa/FLAML…
sonichi May 13, 2023
b806dfb
run all problems on v3.9, and pnas
yiranwu0 May 13, 2023
3733028
endpoint
sonichi May 13, 2023
cbd0be0
Merge remote-tracking branch 'upstream/main' into gpt_math_solver
May 15, 2023
89d7512
run v1python
yiranwu0 May 15, 2023
3791326
Merge branch 'gpt_math_solver' of github.com:kevin666aa/FLAML into gp…
yiranwu0 May 15, 2023
2a6ffa1
run v1python+wolfram
yiranwu0 May 16, 2023
d833938
run pot with sys message
yiranwu0 May 19, 2023
bf73756
endpoint
sonichi May 19, 2023
f1b3873
Merge branch 'gpt_math_solver' of https://github.com/kevin666aa/FLAML…
sonichi May 19, 2023
e3d8de1
run pot with system message
yiranwu0 May 19, 2023
d84213d
Merge branch 'gpt_math_solver' of https://github.com/kevin666aa/FLAML…
sonichi May 19, 2023
f8c68ff
Merge branch 'gpt_math_solver' of github.com:kevin666aa/FLAML into gp…
May 19, 2023
9bc17db
fewshot+zeroshot prompt
May 19, 2023
769803e
add assert
May 19, 2023
85d9b59
refine fewshot
yiranwu0 May 20, 2023
bce7f4f
run pre-commit
yiranwu0 May 20, 2023
59bc9f9
rerun v3.9 with cache and get token info
yiranwu0 May 21, 2023
32de58f
run PoT on all problems
yiranwu0 May 22, 2023
9dabf61
Merge remote-tracking branch 'upstream/main' into gpt_math_solver
yiranwu0 May 22, 2023
9c3efd4
merge new changes and update pot
yiranwu0 May 22, 2023
fc8bcdc
endpoint
sonichi May 22, 2023
c711143
Merge branch 'gpt_math_solver' of https://github.com/kevin666aa/FLAML…
sonichi May 22, 2023
f535e50
fix decode in PoT
yiranwu0 May 22, 2023
841ff2a
Merge branch 'gpt_math_solver' of github.com:kevin666aa/FLAML into gp…
yiranwu0 May 22, 2023
43d8277
clean up and rename
yiranwu0 May 27, 2023
01f7712
resolve conflict in setup
yiranwu0 May 27, 2023
d4d8242
Merge branch 'microsoft:main' into gpt_math_solver
yiranwu0 Jun 7, 2023
1cfce5f
clean up
yiranwu0 Jun 7, 2023
be7bb3d
update readme
yiranwu0 Jun 7, 2023
d3e8719
add mathchat flow chart
yiranwu0 Jun 7, 2023
2c8823f
Update README.md
yiranwu0 Jun 7, 2023
7808b4f
Merge branch 'microsoft:main' into gpt_math_solver
yiranwu0 Jun 10, 2023
348446b
add missing files
yiranwu0 Jul 10, 2023
c49ab9c
Merge branch 'gpt_math_solver' of github.com:kevin666aa/FLAML into gp…
yiranwu0 Jul 10, 2023
26 changes: 17 additions & 9 deletions flaml/autogen/oai/completion.py
@@ -649,15 +649,23 @@ def _construct_params(cls, data_instance, config, prompt=None, messages=None):
```diff
         if messages is None:
             raise ValueError("Either prompt or messages should be in config for chat models.")
         if prompt is None:
-            params["messages"] = [
-                {
-                    "role": m["role"],
-                    "content": m["content"].format(**data_instance)
-                    if isinstance(m["content"], str)
-                    else m["content"](data_instance),
-                }
-                for m in messages
-            ]
+            params["messages"] = []
+            for m in messages:
+                if isinstance(m["content"], str):
+                    try:
+                        # try to format the message with the data instance
+                        content = m["content"].format(**data_instance)
+                    except Exception:
+                        # if it fails, use the raw message
+                        content = m["content"]
+                else:
+                    content = m["content"](data_instance)
+                params["messages"].append(
+                    {
+                        "role": m["role"],
+                        "content": content,
+                    }
+                )
         elif model in cls.chat_models:
             # convert prompt to messages
             if isinstance(prompt, str):
```

Review comment (Contributor): Should we add a warning message here to note that the program failed to format the message with the data instance and that the raw message is being used?

Reply (yiranwu0, Contributor Author): The text may be long, which would produce many warnings. We should assume that when the user passes messages instead of a prompt, the messages are less likely to contain format strings. We can add a note to the docs to explain this.
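The fallback behavior introduced in this hunk can be demonstrated in isolation. Below is a minimal, self-contained sketch of the same logic (the helper name `build_messages` and the sample messages are illustrative, not part of flaml): a plain string is formatted against the data instance, a string that fails to format (e.g. one containing literal braces) is kept verbatim, and a callable renders itself from the data instance.

```python
def build_messages(messages, data_instance):
    """Sketch of the message-construction fallback from this PR (illustrative only)."""
    built = []
    for m in messages:
        content = m["content"]
        if isinstance(content, str):
            try:
                # try to substitute {placeholders} from the data instance
                content = content.format(**data_instance)
            except Exception:
                # formatting failed (e.g. literal braces); keep the raw message
                pass
        else:
            # callable content renders itself from the data instance
            content = content(data_instance)
        built.append({"role": m["role"], "content": content})
    return built


msgs = [
    {"role": "user", "content": "Solve the problem: {problem}"},
    {"role": "user", "content": 'Raw text with literal braces {"key": 1} stays unchanged'},
    {"role": "user", "content": lambda d: "Fields available: " + ", ".join(d)},
]
result = build_messages(msgs, {"problem": "x**2 = 4"})
print(result[0]["content"])  # Solve the problem: x**2 = 4
```

Falling back silently avoids crashing when user-supplied messages happen to contain brace characters, at the cost of a silent no-op when a placeholder was genuinely intended — the trade-off debated in the review comments.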
2 changes: 1 addition & 1 deletion website/docs/Use-Cases/Auto-Generation.md
@@ -100,7 +100,7 @@

### Perform inference with the tuned config

```diff
-One can use [`flaml.oai.Completion.create`](../reference/autogen/oai/completion#create) to performance inference. It materializes a prompt using a given context. For example,
+One can use [`flaml.oai.Completion.create`](../reference/autogen/oai/completion#create) to perform inference. It materializes a prompt using a given context. For example,
```

```python
response = oai.Completion.create(problem=problem, **config)