
Some errors occurred when running the command "run 90" #3

Open
aronfan opened this issue Aug 15, 2023 · 2 comments

Comments

@aronfan

aronfan commented Aug 15, 2023

GNS FUNCTION: <generate_act_obj_desc>
asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6
GPT4All PROMPT
"""
Task: We want to understand the state of an object that is being used by someone.

Let's think step by step.
We want to know about bed's state.
Step 1. Isabella Rodriguez is at/using the sleeping.
Step 2. Describe the bed's state: bed is
"""
Output the response to the prompt above in json. The output should ONLY contain the phrase that should go in <fill in>.
Example output json:
{"output": "being fixed"}
Traceback (most recent call last):
File "D:\AIwork\SaturnCassini\reverie\backend_server\reverie.py", line 468, in open_server
rs.start_server(int_count)
File "D:\AIwork\SaturnCassini\reverie\backend_server\reverie.py", line 379, in start_server
next_tile, pronunciatio, description = persona.move(
File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\persona.py", line 222, in move
plan = self.plan(maze, personas, new_day, retrieved)
File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\persona.py", line 148, in plan
return plan(self, maze, personas, new_day, retrieved)
File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\cognitive_modules\plan.py", line 959, in plan
_determine_action(persona, maze)
File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\cognitive_modules\plan.py", line 635, in _determine_action
act_obj_desp = generate_act_obj_desc(act_game_object, act_desp, persona)
File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\cognitive_modules\plan.py", line 269, in generate_act_obj_desc
return run_gpt_prompt_act_obj_desc(act_game_object, act_desp, persona)[0]
TypeError: 'NoneType' object is not subscriptable

What shall I do to resolve this error?
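For context, the final line of the traceback fails because `run_gpt_prompt_act_obj_desc` returns `None` and the caller immediately subscripts the result with `[0]`. A minimal reproduction (the stub name is hypothetical and only simulates the broken helper):

```python
# Minimal reproduction of the TypeError in the traceback above:
# the helper returns None, and indexing None with [0] raises.
def run_gpt_prompt_act_obj_desc_stub(*args):
    # Stand-in for the real helper: simulates a function whose
    # generation path is disabled, so it implicitly returns None.
    return None

try:
    output = run_gpt_prompt_act_obj_desc_stub("bed", "sleeping", None)[0]
except TypeError as err:
    print(err)  # 'NoneType' object is not subscriptable
```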

@Me1onMonster

I had the same problem. Have you solved it yet?

@nyoma-diamond

I think I found the mistake causing this: lines 1028-1042 of run_gpt_prompt.py are commented out. I'm unsure why, but uncommenting that code appears to fix things.

Thus the code around __chat_func_validate should be:

def __chat_func_validate(gpt_response, prompt=""): ############
    try: 
      gpt_response = __func_clean_up(gpt_response, prompt="")
    except: 
      return False
    return True 

  print ("asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6") ########
  gpt_param = {"engine": "text-davinci-002", "max_tokens": 15, 
               "temperature": 0, "top_p": 1, "stream": False,
               "frequency_penalty": 0, "presence_penalty": 0, "stop": None}
  prompt_template = "persona/prompt_template/v3_ChatGPT/generate_obj_event_v1.txt" ########
  prompt_input = create_prompt_input(act_game_object, act_desp, persona)  ########
  prompt = generate_prompt(prompt_input, prompt_template)
  example_output = "being fixed" ########
  special_instruction = "The output should ONLY contain the phrase that should go in <fill in>." ########
  fail_safe = get_fail_safe(act_game_object) ########
  output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
                                          __chat_func_validate, __chat_func_clean_up, True)
  if output != False: 
    return output, [output, prompt, gpt_param, prompt_input, fail_safe]
  # ChatGPT Plugin ===========================================================

  # !!! The code below was commented out for some reason
  gpt_param = {"engine": "text-davinci-003", "max_tokens": 30,
               "temperature": 0, "top_p": 1, "stream": False,
               "frequency_penalty": 0, "presence_penalty": 0, "stop": ["\n"]}
  prompt_template = "persona/prompt_template/v2/generate_obj_event_v1.txt"
  prompt_input = create_prompt_input(act_game_object, act_desp, persona)
  prompt = generate_prompt(prompt_input, prompt_template)
  fail_safe = get_fail_safe(act_game_object)
  output = safe_generate_response(prompt, gpt_param, 5, fail_safe,
                                   __func_validate, __func_clean_up)

  if debug or verbose:
    print_run_prompts(prompt_template, persona, gpt_param,
                      prompt_input, prompt, output)

  return output, [output, prompt, gpt_param, prompt_input, fail_safe]

It's also commented out in the original codebase, so I'm unsure whether this problem stems from using GPT4All instead of the OpenAI API.
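As a defensive alternative, the caller could guard against a `None` return so a disabled generation path degrades to a fail-safe description instead of crashing with a TypeError. A sketch, assuming the helper's signature from the traceback; the helper is passed in as a parameter here to keep the example self-contained, and the fail-safe wording is hypothetical:

```python
# Sketch of a defensive guard for the caller in plan.py: if the prompt
# helper returns None (e.g. because its generation code is commented
# out), return a generic fail-safe description instead of indexing None.
def generate_act_obj_desc(act_game_object, act_desp, persona,
                          run_gpt_prompt_act_obj_desc):
    result = run_gpt_prompt_act_obj_desc(act_game_object, act_desp, persona)
    if result is None:
        return f"{act_game_object} is idle"  # fail-safe, hypothetical wording
    return result[0]

# Example with a stub that mimics the broken helper:
broken_helper = lambda *args: None
print(generate_act_obj_desc("bed", "sleeping", None, broken_helper))
# prints "bed is idle" instead of raising TypeError
```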
