Added rest_server connector #2

Open
AaLexUser wants to merge 2 commits into main

Conversation

AaLexUser

Langchain connector for a typical REST server

@AaLexUser AaLexUser added the enhancement New feature or request label Aug 26, 2024
@AaLexUser AaLexUser self-assigned this Aug 26, 2024

@jrzkaminski jrzkaminski left a comment


A decent pull request; however, there are a few minor things to improve.

General Suggestions:
It would be easier to review this if there were a small usage example. Tests would not hurt either, since they are easily generated by LLMs.
Custom errors (ChatRESTServerError or similar) instead of ValueErrors would be better; that requires writing an exceptions module, which is quite easy to do (a sketch follows below).
A custom logger module would not hurt either.
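
For illustration, a minimal exceptions module might look like this (the module path and the subclass are my suggestion, not part of this PR):

```python
# protollm/exceptions.py -- suggested layout, not part of this PR

class ChatRESTServerError(Exception):
    """Base error for the REST server connector."""


class ChatRESTServerResponseError(ChatRESTServerError):
    """Raised when the server returns an unexpected status or payload."""

    def __init__(self, status_code: int, message: str):
        self.status_code = status_code
        super().__init__(f"REST server returned {status_code}: {message}")
```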

@AaLexUser AaLexUser (Author)

I agree about custom errors, but ValueError is what the similar LangChain classes raise in these situations. I thought about tests, but they are not possible here: this is an open library, and I can't use any internal LLM URL.
Logging tools for Runnables are built into LangChain.
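
For reference, the built-in options look roughly like this (RunnableLambda stands in for the connector from this PR):

```python
from langchain.globals import set_debug
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers import ConsoleCallbackHandler

set_debug(True)  # global switch: verbose logs for every Runnable invocation

# Per-call alternative: attach a console tracer to a single invocation.
echo = RunnableLambda(lambda text: text.upper())
echo.invoke("hello", config={"callbacks": [ConsoleCallbackHandler()]})
```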

Diff excerpt (protollm/connectors/rest_server.py):

```python
                messages)
        }
        response = requests.post(
            url=f'{self.base_url}/v1/chat/completions',
```
@jrzkaminski jrzkaminski Aug 28, 2024

Some older models use /v1/completions with manual tokens. Also, some newer models provide more control over the prompt in raw /v1/completions mode, which could be extremely beneficial. I have no idea how to generalize that, but it's something to keep in mind, because some users may have a custom LLM that operates on custom tokens. There are also other modes, like tokenization and so on.
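
For concreteness, a raw completions-style call in the OpenAI-compatible scheme looks roughly like this (the control tokens in the prompt are illustrative and vary by model; `base_url` is a placeholder):

```python
import requests

base_url = "http://localhost:8000"  # placeholder

# Raw /v1/completions: the caller builds the prompt string itself,
# including any model-specific control tokens, instead of sending
# structured chat messages.
response = requests.post(
    url=f"{base_url}/v1/completions",
    json={
        "prompt": "<|user|>\nHello!\n<|assistant|>\n",  # illustrative template
        "max_tokens": 128,
    },
)
```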

@AaLexUser AaLexUser (Author)

I think that is what the LLM classes in LangChain are for; the BaseChatModel classes assume chat exactly, not custom tokens. Maybe it makes sense to let the developer choose the endpoint, but then I think the whole point of this class is lost: there are too many additional settings, and at that point I would rather write my own class. But I don't know; it's debatable.
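
For what it's worth, the debated option would be small. A sketch with a hypothetical `endpoint` argument (the class and attribute names are assumed; the real class derives from BaseChatModel):

```python
from typing import Any, Dict

import requests


class ChatRESTServer:  # name assumed from the PR discussion
    """Sketch: endpoint as a constructor argument (hypothetical design)."""

    def __init__(self, base_url: str, endpoint: str = "/v1/chat/completions"):
        self.base_url = base_url
        self.endpoint = endpoint  # e.g. "/v1/completions" for raw mode

    def _post(self, payload: Dict[str, Any]) -> requests.Response:
        return requests.post(url=f"{self.base_url}{self.endpoint}", json=payload)
```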

Diff excerpt (protollm/connectors/rest_server.py):

```python
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> Dict[str, Any]:
        payload = {
```


max_tokens, temperature, and other parameters must be user-configurable; consider adding these fields to that dictionary.
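
Something along these lines, say (field names follow the OpenAI-style chat API; `self.model` and the defaults are illustrative, not from the PR):

```python
payload = {
    "model": self.model,  # attribute assumed for illustration
    "messages": messages,
    # User-configurable generation parameters with illustrative defaults;
    # remaining kwargs pass through for anything else the server accepts.
    "max_tokens": kwargs.pop("max_tokens", 512),
    "temperature": kwargs.pop("temperature", 0.7),
    **kwargs,
}
```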

@AaLexUser AaLexUser (Author)

What we have on our server right now does not support configuring any parameters. When that becomes available, I will add and test it.
