forked from joonspk-research/generative_agents
Have the basics of ollama #4
Open
DanielMarchand wants to merge 39 commits into drudilorenzo:fix-and-improve from DanielMarchand:fix-and-improve_add-ollama
Conversation
Commit messages (truncated):
- … for different backends
- … is becoming relatively stable
- …p schedule prompts
- …rying to figure out Django view exceptions
- …o a json for later model checking and evaluation
- …raining the use of triples
- …the only one I know that 'should' work
chowington referenced this pull request in crcresearch/agentic_collab on Sep 30, 2024: Support vLLM on EC2 instances
The basics work. The problem is that the code base is not well designed to handle custom prompting that depends on the model. For example, wake-up dates require longer token limits with the llama3 models than with the OpenAI ones. I also had to switch from the system role to the assistant role in the chat completion to get better answers, and there are other subtle differences in how the prompts need to be set up; it would be nice to discuss an overall architecture for this. Otherwise I think this is a really cool direction, letting people with decent GPUs (tested on a 3080; I'm sure a 4090 would be even better) get nice results at no cost.
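One way the per-model differences described above could be factored out is a small per-backend configuration table. This is a minimal sketch, not code from the PR: all names (`BackendConfig`, `BACKENDS`, `build_wake_up_request`) and the specific token limits are illustrative assumptions.

```python
# Hypothetical sketch: centralize per-backend prompt differences so call
# sites don't hard-code model-specific quirks. Names and numbers are
# illustrative, not taken from the pull request.
from dataclasses import dataclass


@dataclass
class BackendConfig:
    model: str
    instruction_role: str    # "system" for OpenAI; "assistant" answered better for llama3
    wake_up_max_tokens: int  # llama3 needed a longer limit for wake-up-time prompts


BACKENDS = {
    "openai": BackendConfig(model="gpt-3.5-turbo",
                            instruction_role="system",
                            wake_up_max_tokens=10),
    "ollama": BackendConfig(model="llama3",
                            instruction_role="assistant",
                            wake_up_max_tokens=50),
}


def build_wake_up_request(backend: str, prompt: str) -> dict:
    """Build a chat-completion request dict using the backend's settings."""
    cfg = BACKENDS[backend]
    return {
        "model": cfg.model,
        "messages": [{"role": cfg.instruction_role, "content": prompt}],
        "max_tokens": cfg.wake_up_max_tokens,
    }
```

With a table like this, adding another local model would only mean adding one `BackendConfig` entry rather than touching every prompt site.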
This is heavily based on joonspk-research#155 by ketsapiwiq. I had to do some aspects differently, but much of the logic is the same.