
Issues with different versions of GPT #10

Open
martin-krutsky opened this issue Nov 4, 2024 · 0 comments

In reverie/backend_server/persona/prompt_template/run_gpt_prompt.py, multiple requests to OpenAI are made with the hardcoded model gpt-35-turbo-0125, which is not currently a valid/supported model name on the OpenAI API (unlike on Azure).
If I instead use the name gpt-3.5-turbo-0125 and run the simulation (e.g., with the command run 1), it seems to work.

However, if I change the model to a newer (and cheaper) one, e.g., gpt-4o-mini-2024-07-18, I eventually get the following error:

  File "...generative_agents\reverie\backend_server\persona\memory_structures\spatial_memory.py", line 107, in get_str_accessible_arena_game_objects
    x = ", ".join(list(self.tree[curr_world][curr_sector][curr_arena.lower()]))
KeyError: '{main room'

caused by the output of the GPT call being {main room without a closing bracket, i.e., syntactically invalid output from the model.
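As a workaround on my side, I can normalize the model's arena string before the dictionary lookup. This is only a sketch, not code from the repo; clean_arena is a hypothetical helper:

```python
def clean_arena(raw: str) -> str:
    # Strip stray curly braces, quotes, and surrounding whitespace that the
    # model sometimes emits around the arena name, e.g. "{main room" -> "main room"
    return raw.strip().strip("{}\"'").strip()

# Hypothetical usage inside get_str_accessible_arena_game_objects:
#   curr_arena = clean_arena(curr_arena)
#   x = ", ".join(list(self.tree[curr_world][curr_sector][curr_arena.lower()]))

print(clean_arena("{main room"))  # -> main room
```

This papers over the symptom rather than fixing the prompt, so a more robust output format would still be preferable.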

Do you know which models the prompts have been successfully tested on (e.g., the original joonspk-research repo used yet another model, text-davinci-002)? What was your reasoning behind choosing gpt-3.5-turbo-0125? And do you have any suggestions for resolving the problem with other models (e.g., structured output comes to mind)?
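To illustrate the structured-output idea: the model could be constrained to return a single JSON field instead of free text. The sketch below only builds the response_format payload (the schema and the field name "arena" are my own illustration, not part of the repo); it would be passed to the OpenAI chat completions call, which is omitted here:

```python
# Illustrative JSON-schema response_format constraining the model to emit
# an object like {"arena": "main room"}, which would rule out malformed
# output such as "{main room".
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "arena_choice",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"arena": {"type": "string"}},
            "required": ["arena"],
            "additionalProperties": False,
        },
    },
}

# Would be used roughly as (no network call made here):
#   client.chat.completions.create(model=..., messages=...,
#                                  response_format=response_format)
```

The caller would then json.loads the response and read the "arena" key, so any remaining failure mode is an explicit parse error rather than a silent KeyError deep in spatial_memory.py.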
