Chatflow to use different memory for each LLM block #4418
rayzamgh started this conversation in Suggestion
Replies: 2 comments
-
I'm looking forward to that feature. Maybe I can use it to implement query rewriting across multiple rounds of conversation.
-
Self Checks
1. Is this request related to a challenge you're experiencing?
I was trying out chatflows with multiple LLM blocks and was wondering why the processed data output seems to include the same memory for all LLM blocks.
Here is my chatflow configuration
Here is the few-shot sample for LLM block 1
Here is the processed data output from the first LLM block. Notice that the few-shot example is correctly appended, but in the memory section the assistant output is taken from the final Answer block rather than from the output of LLM block 1
2. Describe the feature you'd like to see
Currently I suspect the memory is shared across all LLM blocks like so:
I propose separating them so that each block has its own memory; a rough sketch of the idea is shown below.
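To make the proposal concrete, here is a minimal, hypothetical Python sketch of the difference between the current (suspected) shared memory and the proposed per-block memory. The names `ChatflowMemory`, `Message`, `build_prompt_messages`, and `record_turn` are made up for illustration and are not Dify's actual classes or APIs.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class ChatflowMemory:
    # Current (suspected) behaviour: one history shared by the whole chatflow,
    # so every LLM block sees the final Answer block's output as the last
    # assistant turn.
    shared_history: list[Message] = field(default_factory=list)

    # Proposed behaviour: one history per LLM block, keyed by node id, so each
    # block only ever sees its own previous outputs.
    per_block_history: dict[str, list[Message]] = field(
        default_factory=lambda: defaultdict(list)
    )

    def build_prompt_messages(self, node_id: str, user_input: str) -> list[Message]:
        # Pull memory for this block only, instead of the shared history.
        history = self.per_block_history[node_id]
        return [*history, Message("user", user_input)]

    def record_turn(self, node_id: str, user_input: str, block_output: str) -> None:
        # Store the block's own output as the assistant turn for that block,
        # rather than overwriting a single shared memory with the final answer.
        self.per_block_history[node_id].append(Message("user", user_input))
        self.per_block_history[node_id].append(Message("assistant", block_output))
```

With something like this, LLM block 1 would be fed `per_block_history["llm_block_1"]` on the next round and would see its own earlier output, not the final Answer block's.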
3. How will this feature improve your workflow or experience?
I think this will improve chatflows for many people with more complex needs, particularly multi-agent setups where each agent serves a separate function within a single chatflow.
4. Additional context or comments
No response
5. Can you help us with this feature?