A config file has three parts: llm_request, citysim_request, and apphub_request.

- llm_request
  - text_request
    - request_type: openai / qwen
    - api_key: xxx
    - model: xxx
    - (api_base): xxx (optional; if you use openai and want to use your own backend LLM model; defaults to "https://api.openai.com/v1")
  - img_understand_request
    - request_type: openai / qwen
    - api_key: xxx
    - model: xxx ('gpt-4-turbo' if you use openai)
    - (api_base): same as text_request
  - img_generate_request
    - request_type: qwen
    - api_key: xxx
    - model: xxx
- citysim_request
  - simulator
    - server: https://api-opencity-2x.fiblab.net:58081
  - map_request
    - mongo_coll: map_beijing_extend_20240205
    - cache_dir: ./cache
  - route_request
    - server: http://api-opencity-2x.fiblab.net:58082
  - streetview_request
    - engine: baidumap / googlemap
    - mapAK: baidumap api-key (if you use the baidumap engine)
    - proxy: googlemap proxy (if you use the googlemap engine)
- apphub_request
  - hub_url: https://api-opencity-2x.fiblab.net:58080
  - app_id: your APP ID
  - app_secret: your APP Secret
  - profile_image: the profile image of your agent
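For orientation, here is a minimal sketch of what this file yields once the demo below parses it with yaml.safe_load. Every value is a placeholder copied from the list above, not a working credential.

```python
# Hypothetical result of parsing config_template.yaml as a Python dict;
# keys mirror the list above, values are placeholders.
config = {
    "llm_request": {
        "text_request": {
            "request_type": "openai",   # or "qwen"
            "api_key": "xxx",
            "model": "xxx",
            "api_base": "https://api.openai.com/v1",  # optional
        },
        "img_understand_request": {
            "request_type": "openai",   # or "qwen"
            "api_key": "xxx",
            "model": "gpt-4-turbo",     # if you use openai
        },
        "img_generate_request": {
            "request_type": "qwen",
            "api_key": "xxx",
            "model": "xxx",
        },
    },
    "citysim_request": {
        "simulator": {"server": "https://api-opencity-2x.fiblab.net:58081"},
        "map_request": {
            "mongo_coll": "map_beijing_extend_20240205",
            "cache_dir": "./cache",
        },
        "route_request": {"server": "http://api-opencity-2x.fiblab.net:58082"},
        "streetview_request": {
            "engine": "baidumap",       # or "googlemap"
            "mapAK": "your baidumap api-key",
        },
    },
    "apphub_request": {
        "hub_url": "https://api-opencity-2x.fiblab.net:58080",
        "app_id": "your APP ID",
        "app_secret": "your APP Secret",
        "profile_image": "the profile image of your agent",
    },
}
```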
Set citysim_request aside for now; let's focus on the other two.
LLM_REQUEST
As you can see, the whole CityAgent is built on top of the LLM. At present there are three config items: text_request, img_understand_request, and img_generate_request.
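As a minimal sketch (using only the LLMConfig and UrbanLLM classes that appear in the demo below), this is how the llm_request section becomes the agent's "soul"; the file name is the config template used later in this guide.

```python
import yaml
from pycityagent.urbanllm import LLMConfig, UrbanLLM

# Read the YAML config and pull out the llm_request section.
with open('config_template.yaml', 'r') as file:
    config = yaml.safe_load(file)

# LLMConfig bundles text_request, img_understand_request and
# img_generate_request; UrbanLLM is the "soul" later attached to an agent.
llm_config = LLMConfig(config['llm_request'])
soul = UrbanLLM(llm_config)
```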
Install libGL.so.1 if you are using Linux, via a suitable package manager (apt, for instance):

```bash
apt-get install libgl1
```
CODE and RUN
Check the example folder and copy the files from it (remember to replace the config file with your own).
Look at the demo (a citizen agent demo):
```python
import yaml
import asyncio
from pycityagent.simulator import Simulator
from pycityagent.urbanllm import LLMConfig, UrbanLLM

async def main():
    # Load your config
    with open('config_template.yaml', 'r') as file:
        config = yaml.safe_load(file)

    # Get the simulator object
    smi = Simulator(config['citysim_request'])

    # Get the person by person_id; returns an agent
    agent = await smi.GetCitizenAgent("name_of_agent", 8)

    # Help you build a unique agent from scratch/profile
    agent.Image.load_scratch('scratch_template.json')

    # Load memory to help the agent understand "OpenCity"
    agent.Brain.Memory.Spatial.MemoryLoad('spatial_knowledge_template.json')
    agent.Brain.Memory.Social.MemoryLoad('social_background_template.json')

    # Connect to the apphub so you can interact with your agent in the front end
    agent.ConnectToHub(config['apphub_request'])
    agent.Bind()

    # Create the soul (an LLM processor, actually)
    llmConfig = LLMConfig(config['llm_request'])
    soul = UrbanLLM(llmConfig)

    # Add the soul to your agent
    agent.add_soul(soul)

    # Start and have fun with it!
    while True:
        await agent.Run()
        await asyncio.sleep(1)  # non-blocking pause between steps

if __name__ == '__main__':
    asyncio.run(main())
```
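One design note: the loop pauses with await asyncio.sleep(1) rather than time.sleep(1), since a blocking sleep inside a coroutine would stall the whole asyncio event loop between agent steps.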
Congratulations
Following this "Hands On" guide, you have created an agent with your own hands!
You can observe your agent in your console or on the OpenCity website.