
Fail to connect to an online llm #831

Open
aghozlane opened this issue Jan 24, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@aghozlane

Hi,

I'm trying to connect to our deployed LLM (online), but it keeps crashing. The same parameters work with a pydantic-ai agent.
I don't understand what is wrong here:

from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = ""
local_llm_config = dict(
    model_list=[
        dict(
            model_name="llama3-3-70b-local",
            litellm_params=dict(
                model="llama3-3-70b-local",
                api_base="https://llms*****.cloud",
                api_key="sk-******",
                temperature=0.1,
                frequency_penalty=1.5,
                max_tokens=512,
            ),
        )
    ]
)

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="llama3-3-70b-local",
        llm_config=local_llm_config,
        summary_llm="llama3-3-70b-local",
        summary_llm_config=local_llm_config,
        paper_directory="zotero_meteor",
    ),
)
Exception Group Traceback (most recent call last):
  |   File "/Users/aghozlan/workspace/pqa_pasteur/pqa_pasteur.py", line 22, in <module>
  |     answer = ask(
  |              ^^^^
  |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/agents/__init__.py", line 103, in ask
  |     return get_loop().run_until_complete(
  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
  |     return future.result()
  |            ^^^^^^^^^^^^^^^
  |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/agents/main.py", line 77, in agent_query
  |     response = await run_agent(docs, query, agent_type, **runner_kwargs)
  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/agents/main.py", line 126, in run_agent
  |     await get_directory_index(settings=query.settings)
  |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/agents/search.py", line 671, in get_directory_index
  |     async with anyio.create_task_group() as tg:
  |                ^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 815, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (5 sub-exceptions)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/agents/search.py", line 487, in process_file
    |     await tmp_docs.aadd(
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/docs.py", line 301, in aadd
    |     result = await llm_model.run_prompt(
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/llms.py", line 341, in run_prompt
    |     return await self._run_chat(prompt, data, callbacks, name, system_prompt)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/llms.py", line 389, in _run_chat
    |     chunk = await self.achat(messages)
    |             ^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/llms.py", line 539, in wrapper
    |     result = await func(self, *args, **kwargs)  # type: ignore[misc]
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/llms.py", line 720, in achat
    |     response = await self.router.acompletion(self.name, list(messages))
    |                      ^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/paperqa/llms.py", line 674, in router
    |     self._router = litellm.Router(
    |                    ^^^^^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/litellm/router.py", line 362, in __init__
    |     self.set_model_list(model_list)
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/litellm/router.py", line 3910, in set_model_list
    |     self._create_deployment(
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/litellm/router.py", line 3830, in _create_deployment
    |     deployment = self._add_deployment(deployment=deployment)
    |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/litellm/router.py", line 3949, in _add_deployment
    |     ) = litellm.get_llm_provider(
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 353, in get_llm_provider
    |     raise e
    |   File "/Users/aghozlan/.pyenv/versions/3.12.7/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 330, in get_llm_provider
    |     raise litellm.exceptions.BadRequestError(  # type: ignore
    | litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3-3-70b-local
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

dosubot bot commented Jan 24, 2025

Hey @aghozlane! I'm here to help you with any issues you're facing. Let's work together to solve the problem.

The error you're encountering, litellm.exceptions.BadRequestError, indicates that the LLM provider is not specified correctly. In the litellm configuration, you need to ensure that the LLM provider is explicitly defined. The error message suggests that you should pass the model in a format like completion(model='huggingface/starcoder',...).

To resolve this, you should check the model_list configuration in your local_llm_config and ensure that the model is specified with the correct provider. Here's an example of how you might structure it:

local_llm_config = dict(
    model_list=[
        dict(
            model_name="llama3-3-70b-local",
            litellm_params=dict(
                model="huggingface/llama3-3-70b-local",  # Specify the provider here
                api_base="https://llms*****.cloud",
                api_key="sk-******",
                temperature=0.1,
                frequency_penalty=1.5,
                max_tokens=512,
            ),
        )
    ]
)

Make sure to replace "huggingface/llama3-3-70b-local" with the correct provider and model name that matches your deployment. You can refer to the litellm documentation for more details on specifying the provider correctly.
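
If the deployment exposes an OpenAI-compatible endpoint (the custom api_base and sk-... key suggest it might, though that is an assumption about your server), another option litellm supports is the openai/ prefix, which selects the OpenAI-compatible client while still sending requests to your api_base. A minimal sketch of that variant, keeping the router's model_name and the paperqa settings unchanged and only adding the provider prefix inside litellm_params (the URL and key remain placeholders from the report above):

from paperqa import Settings, ask

# Assumption: the server at api_base speaks the OpenAI chat-completions protocol.
# The "openai/" prefix only tells litellm which client to use; requests still go
# to api_base, not to api.openai.com.
local_llm_config = dict(
    model_list=[
        dict(
            model_name="llama3-3-70b-local",
            litellm_params=dict(
                model="openai/llama3-3-70b-local",  # provider prefix added here
                api_base="https://llms*****.cloud",  # placeholder
                api_key="sk-******",  # placeholder
                temperature=0.1,
                frequency_penalty=1.5,
                max_tokens=512,
            ),
        )
    ]
)

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="llama3-3-70b-local",
        llm_config=local_llm_config,
        summary_llm="llama3-3-70b-local",
        summary_llm_config=local_llm_config,
        paper_directory="zotero_meteor",
    ),
)

Whichever prefix you choose, it has to match the protocol your server actually speaks; the provider segment before the slash is what litellm.get_llm_provider parses, which is exactly the check that raised the BadRequestError in the traceback above.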

To continue talking to Dosu, mention @dosu.



dosubot bot added the bug Something isn't working label Jan 24, 2025