diff --git a/README.md b/README.md
index a222496..46553b3 100644
--- a/README.md
+++ b/README.md
@@ -27,9 +27,10 @@
 ```yaml
 llm_request:
   text_request:
-    request_type: qwen
+    request_type: openai / qwen
     api_key: xxx
     model: xxx
+    http_client: xxx (if you use openai and want to use your own backend LLM model)
   img_understand_request:
     request_type: qwen
     api_key: xxx
@@ -64,7 +65,8 @@ apphub_request:
 - As you can see, the whole CityAgent is based on the LLM, by now, there are three different parts of config items: **text_request**, **img_understand_request** and **img_generate_request**
 - By now, we support [**qwen**](https://tongyi.aliyun.com/) and [**openai**](https://openai.com/)
 - `Notice: Our environments are basically conducted with qwen. If you prefer to use openai, then you may encounter hardships. AND fell free to issue us.`
-- Get your **api_key** and chooce your **model**s
+- Get your **api_key** and choose your **model**
+- If you want to use your own backend models, set **http_client** (only available when using **openai**)
 
 #### CITYSIM_REQUEST
 - Most of the configuration options in this part are determined, such as **simulator.server**, **map_request.mongo_coll**, **route_request.server**
diff --git a/example/config_template.yaml b/example/config_template.yaml
index 735ca82..0adfe75 100644
--- a/example/config_template.yaml
+++ b/example/config_template.yaml
@@ -1,8 +1,9 @@
 llm_request:
   text_request:
-    request_type: qwen
+    request_type: openai / qwen
     api_key: xxx
     model: xxx
+    http_client: xxx (if you use openai and want to use your own backend LLM model)
   img_understand_request:
     request_type: qwen
     api_key: xxx
diff --git a/pycityagent/urbanllm/urbanllm.py b/pycityagent/urbanllm/urbanllm.py
index 4c22098..928c398 100644
--- a/pycityagent/urbanllm/urbanllm.py
+++ b/pycityagent/urbanllm/urbanllm.py
@@ -41,10 +41,13 @@ def text_request(self, dialog:list[dict]) -> str:
         Returns:
         - (str): the response content
         """
+        # Use the configured http_client if present; otherwise fall back to the SDK default
+        http_client = self.config.text.get('http_client', None)
         if self.config.text['request_type'] == 'openai':
             client = OpenAI(
                 api_key=self.config.text['api_key'],
-                base_url=self.config.text['api_base']
+                base_url=self.config.text['api_base'],
+                http_client=http_client
             )
             response = client.chat.completions.create(
                 model=self.config.text['model'],
@@ -53,7 +56,6 @@ def text_request(self, dialog:list[dict]) -> str:
             )
             return response.choices[0].message.content
         elif self.config.text['request_type'] == 'qwen':
-
             response = dashscope.Generation.call(
                 model=self.config.text['model'],
                 api_key=self.config.text['api_key'],