Feat: GPT4All Model Integration #16

Open · SaturnCassini wants to merge 9 commits into main
Conversation

SaturnCassini

In case anyone prefers running it locally for free.

Beware: it's quite a bit slower than OpenAI!
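
For anyone following along, a minimal sketch of what running locally with the gpt4all Python bindings looks like; the model filename below is only an example, and any model the GPT4All client can download will do.

```python
from gpt4all import GPT4All

# Example model filename; substitute any model the GPT4All client can fetch.
# The weights are downloaded to a local cache on first use.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
print(model.generate("Describe your morning routine.", max_tokens=128))
```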

@winglian

This should really be refactored into a new module so that people can choose one or the other. Removing OpenAI support completely locks some people out of using this project.
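
Something like the sketch below is what that split could look like: a hypothetical llm_backend.py that keeps OpenAI as the default and switches to GPT4All via an environment variable (the module, function, and variable names are made up for illustration).

```python
# Hypothetical llm_backend.py: choose the completion backend at runtime so
# OpenAI support stays available alongside the local GPT4All path.
import os

def get_completion(prompt: str, max_tokens: int = 256) -> str:
    backend = os.getenv("LLM_BACKEND", "openai").lower()
    if backend == "gpt4all":
        from gpt4all import GPT4All
        # GPT4ALL_MODEL is a hypothetical env var; the filename is an example.
        model = GPT4All(os.getenv("GPT4ALL_MODEL", "orca-mini-3b-gguf2-q4_0.gguf"))
        return model.generate(prompt, max_tokens=max_tokens)
    # Default: the OpenAI chat API (pre-1.0 openai client style from 2023).
    import openai
    response = openai.ChatCompletion.create(
        model=os.getenv("OPENAI_MODEL", "gpt-3.5-turbo"),
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return response["choices"][0]["message"]["content"]
```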

@SaturnCassini
Author

I agree. If someone feels like doing that fix, please do so!

@MLTQ commented Aug 12, 2023

This shouldn't contain a hardcoded model; it should reference something in utils.py, and the max token count should be similarly parameterizable.
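
As a sketch of that direction (the constant names below are hypothetical, not something already in utils.py):

```python
from gpt4all import GPT4All

# Hypothetical configuration values that would live in utils.py, e.g.:
#   GPT4ALL_MODEL = "orca-mini-3b-gguf2-q4_0.gguf"
#   GPT4ALL_MAX_TOKENS = 256
from utils import GPT4ALL_MODEL, GPT4ALL_MAX_TOKENS

_model = GPT4All(GPT4ALL_MODEL)

def gpt4all_request(prompt: str) -> str:
    # The token budget comes from configuration instead of a hardcoded literal.
    return _model.generate(prompt, max_tokens=GPT4ALL_MAX_TOKENS)
```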

@SaturnCassini
Author

Added the ability to select your own model; I'll look at max tokens and temperature next. Thanks for the feedback, @MLTQ.
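
If it helps, a sketch of passing both through to gpt4all's generate(), which takes max_tokens and temp directly in recent versions of the bindings:

```python
from gpt4all import GPT4All

def generate_reply(model: GPT4All, prompt: str,
                   max_tokens: int = 256, temp: float = 0.7) -> str:
    # gpt4all names the temperature argument "temp" rather than "temperature".
    return model.generate(prompt, max_tokens=max_tokens, temp=temp)
```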

@SaturnCassini
Author

Enabled issues; if anyone finds any, I'll try to keep things tidy where possible. Also looking forward to someone's refactor to support either gpt4all or openai.

@muzian666

Can GPU support be added? It's currently too slow running on the CPU.
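
Newer releases of the gpt4all Python bindings accept a device argument when loading a model; availability depends on the installed version and your hardware, so treat the sketch below as an experiment rather than something this PR already supports.

```python
from gpt4all import GPT4All

# device="gpu" is only honored by gpt4all versions with GPU support; older
# CPU-only builds will reject it. The model filename is just an example.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf", device="gpu")
print(model.generate("Hello from the GPU build.", max_tokens=32))
```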

@mikeyang01

Is there any open-source LLM that works well for running this agent? Thanks a lot!
