LangChain v0.2.0 Planning #15713
-
Hi @hwchase17, thanks for your work! Is there anything planned (or that already exists that I don't know about!) to handle more complex tools with multi-turn dialogue? I'm building something like this:
At the moment this involves changing some logic in the AgentExecutor and playing with internal mechanisms.
-
Is a full upgrade to Pydantic v2 being considered for version 0.2.0?
-
More content and examples about RAG are needed; we want to build richer functionality, and the current implementation approach is too simple. Query transformation, for example, and more. In short, RAG is the trend of the future, and I hope we can do more in this area.
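To make the query-transformation point concrete, here is a minimal sketch (the chat model and retriever are placeholders, not a proposal for a specific API):

```python
# Rough sketch of query transformation before retrieval.
# The chat model and retriever are placeholders; any model/retriever works.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

rewrite_prompt = ChatPromptTemplate.from_template(
    "Rewrite the following question so it works better as a search query "
    "for a vector store. Return only the rewritten query.\n\nQuestion: {question}"
)

# question -> rewritten query
query_transform = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser()

def retrieve_with_transform(question: str, retriever):
    """Transform the query first, then retrieve with the rewritten version."""
    better_query = query_transform.invoke({"question": question})
    return retriever.invoke(better_query)
```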
-
RAG-friendly downloadable documentation for each version update. It would help a noob like me create context, write code, pick tools, etc.
-
Would love to see LLMCompiler introduced as an agent type, and more integration with grammars (e.g. Outlines, or just more documentation around Ollama/llama.cpp grammars).
-
Thank you for your work, but I'm considering dropping the project or not adopting it in the future due to the lack of support from your team on PRs and issues. I only see replies from dosubot and can't find any activity from the team.
-
Hi @hwchase17 and @gianfrancodemarco, thanks for your work and thinking. Recently I have been thinking about active multi-turn dialogue between Agent and User, and I think we need a communication architecture between agents to solve this problem.

Agents are like people: they know very little. If the user wants to discuss or consult about a certain field, the agent needs to ask others for help, and this other party can be another Agent. So there are three roles here: User, General Agent, and Expert Agent. The General Agent knows nothing, but it can do anything, because it knows where to go for resources and help.

For example, if the user wants to find a hotel nearby and book a room, the following conversation may occur. The user says to the General Agent: "I want to find a hotel nearby." The General Agent does not know how to do this, but through searching it finds that there is a Hotel Service Agent, so it starts a conversation with it. The General Agent says to the Hotel Service Agent: "The user wants to book a hotel, what should I do?" The General Agent and the User then hold a round of dialogue to collect some information and feed it back to the Hotel Service Agent, which judges whether the information is enough or whether further communication is needed. When sufficient information has been collected, the Hotel Service Agent can book a room for the User and return the result to the General Agent, which then returns it to the user.

During the entire process, the User does not know that a Hotel Service Agent exists, and the General Agent does not know in advance how to complete the hotel reservation task; this relatively specialized task is completed through communication between Agents. The General Agent only plays an intermediary role, but through this round of dialogue it can learn how to book a hotel. This learning is not done by training a neural network: it saves the conversation records and, by summarizing them, learns how to communicate with users in the field of hotel reservations, so it can talk to the user directly next time. The result of that next conversation would still be returned to the Hotel Service Agent, which determines whether the information is complete.

Another important point is that the action of booking a hotel is performed by the Hotel Service Agent. This action may be a tool, such as calling the API of a certain hotel management system; the General Agent knows nothing about this tool and does not need to.

If this approach is feasible, then we need a structure to support it. For example, we need to define registration and search methods for Agents, the call protocol between Agents, and so on. Do you think this method is feasible? Thanks!
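A rough sketch of what such registration, lookup, and relaying between agents could look like (everything here is hypothetical and nothing like it exists in LangChain today; the hotel logic is a toy):

```python
# Hypothetical agent registry and delegation loop; illustrative only.
from typing import Callable, Dict, Optional

class ExpertAgentRegistry:
    """Lets a General Agent discover Expert Agents by capability."""

    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, agent: Callable[[str], str]) -> None:
        self._agents[capability] = agent

    def find(self, capability: str) -> Optional[Callable[[str], str]]:
        return self._agents.get(capability)

def hotel_service_agent(message: str) -> str:
    """Toy Expert Agent: asks for missing details, then 'books' the room."""
    if "check-in" not in message:
        return "I need the check-in date, the number of nights and the number of guests."
    return "Booking confirmed."  # a real agent would call the hotel system's API here

registry = ExpertAgentRegistry()
registry.register("hotel booking", hotel_service_agent)

def general_agent(user_request: str, ask_user: Callable[[str], str]) -> str:
    """Relays messages between the User and the Expert Agent until the task is done."""
    expert = registry.find("hotel booking")
    reply = expert(user_request)
    while reply != "Booking confirmed.":
        user_answer = ask_user(reply)  # e.g. "check-in March 3, 2 nights, 2 guests"
        reply = expert(user_request + " " + user_answer)
    return reply
```

The saved transcript of such an exchange is what the General Agent could later summarize in order to handle hotel-booking dialogues on its own.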
-
One more point: it would be great if load balancing across multiple OpenAI/Azure OpenAI accounts could be supported. For example, automatically choosing which API key to use for an operation, or, if a request fails, having LangChain switch to the next key and retry. This would help deal with concurrency problems in real-world scenarios. Thanks!
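For illustration, a sketch of how key rotation with failover can be wired up by hand today (the keys are placeholders). The existing `.with_fallbacks()` on runnables covers sequential fallback, but round-robin rotation across keys still has to be built manually:

```python
# Sketch of round-robin API-key selection with failover; keys are placeholders.
import itertools
from langchain_openai import ChatOpenAI

api_keys = ["sk-key-1", "sk-key-2", "sk-key-3"]  # placeholder keys
_key_cycle = itertools.cycle(api_keys)

def invoke_with_failover(messages, retries_per_key: int = 1):
    """Try each key in turn; move to the next one if the request fails."""
    last_error = None
    for _ in range(len(api_keys)):
        key = next(_key_cycle)
        llm = ChatOpenAI(openai_api_key=key, max_retries=retries_per_key)
        try:
            return llm.invoke(messages)
        except Exception as err:  # rate limit, quota, auth, ...
            last_error = err
    raise last_error
```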
-
An Agent should be more like a human; it can: Maybe we need an Agent runtime environment, and maybe this is the right way to real AI.
-
The learning curve is a bit steep, which is not conducive to learning quickly. We need a comprehensive guide that is maintained and constantly updated.
-
@hwchase17 |
-
Please add support for Azure Search 11.4.0. Right now LangChain supports 11.4.0b9, but there were breaking changes and LangChain doesn't support the stable Azure Search library yet.
-
Your direction is great. I like the LangGraph demos; really powerful. With all the various states (shared and unique to nodes), it makes me wonder whether the framework can help us manage staying within the context window. If I'm fetching chunks to add to the context, it would be good to get an error as I add them to state if they push it above the model's limit (considering all the shared state, system prompt, tools, etc.), so that we can decide there to add fewer or smaller chunks instead of the chain just finishing with an error. Another approach might be a trimming algorithm smart enough to know which parts are less important and snip them out (so nodes that add state would need some way to rank each chunk of data with an importance level).
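A sketch of what such a budget check could look like at the node level today (the limits here are illustrative; a framework-level version might raise a typed error instead of truncating):

```python
# Sketch of a context-budget guard before adding retrieved chunks to graph state.
import tiktoken

MODEL_CONTEXT_LIMIT = 16_000   # illustrative model limit
RESERVED_TOKENS = 2_000        # shared state, system prompt, tools, answer budget

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

def add_chunks_to_state(state: dict, chunks: list) -> dict:
    """Add chunks only while they fit; stop (or raise) when the budget runs out."""
    existing = state.get("context", [])
    budget = MODEL_CONTEXT_LIMIT - RESERVED_TOKENS - sum(count_tokens(c) for c in existing)
    kept = []
    for chunk in chunks:
        cost = count_tokens(chunk)
        if cost > budget:
            break  # alternatively raise, so the node can fetch smaller chunks
        kept.append(chunk)
        budget -= cost
    return {**state, "context": existing + kept}
```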
-
Rewriting-based splitter: a smarter splitter. One of the problems splitters face is how to keep things that are related in meaning together. A simple solution is to ask the LLM to do this: we can use the LLM to rewrite the original article into meaningful chunks, for example with a prompt template like the following:

'''
article:

chunks:
'''

Thanks!
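A fuller sketch of the idea (the prompt wording and model are only placeholders):

```python
# Sketch of a rewriting-based splitter: the LLM rewrites the article into
# self-contained, meaning-preserving chunks separated by a marker.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Rewrite the following article as a list of self-contained chunks. "
    "Keep related ideas together so each chunk makes sense on its own. "
    "Separate chunks with a line containing only '---'.\n\n"
    "article:\n{article}\n\nchunks:"
)

rewrite_splitter = prompt | ChatOpenAI(temperature=0) | StrOutputParser()

def split_by_rewriting(article: str) -> list:
    raw = rewrite_splitter.invoke({"article": article})
    return [chunk.strip() for chunk in raw.split("---") if chunk.strip()]
```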
-
Is there any way to add document loader support for enterprise content management tools?
-
@hwchase17 , |
-
More flexible token consumption and cost statistics are needed, with support for both SSE (streaming) and regular requests.
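A sketch of the kind of callback this could build on (the pricing inputs are placeholders; note that some providers do not report usage on streamed responses, which is part of the gap being described here):

```python
# Sketch of a cost-tracking callback; prices are supplied by the caller.
from langchain_core.callbacks import BaseCallbackHandler

class TokenCostTracker(BaseCallbackHandler):
    def __init__(self, prompt_price_per_1k: float, completion_price_per_1k: float):
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self._prices = (prompt_price_per_1k, completion_price_per_1k)

    def on_llm_end(self, response, **kwargs):
        # llm_output is provider-dependent and may be empty for streamed (SSE) calls.
        usage = (response.llm_output or {}).get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total_cost(self) -> float:
        prompt_price, completion_price = self._prices
        return (self.prompt_tokens / 1000) * prompt_price + (
            self.completion_tokens / 1000
        ) * completion_price
```

It could be attached via `config={"callbacks": [tracker]}` on an invoke call; making the numbers show up consistently for streaming is the part that still needs framework support.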
-
Currently, it is difficult to write portable code across different LLM models. You need to do some calculations to determine the maximum chunk size for RAG, for example, and to do that you must know which tokenizer the LLM uses. OpenAI uses tiktoken, but not all models do. A standardized API giving access to a generic class for token calculations would be a plus.
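LangChain models do expose a `get_num_tokens` helper, but its accuracy depends on the integration; a dedicated, standardized tokenizer interface could look roughly like this (entirely hypothetical):

```python
# Hypothetical standardized tokenizer interface; nothing here exists in LangChain today.
from typing import Protocol

class Tokenizer(Protocol):
    def count_tokens(self, text: str) -> int: ...
    def max_context_tokens(self) -> int: ...

class TiktokenTokenizer:
    """OpenAI-style implementation; other providers would plug in their own."""

    def __init__(self, encoding_name: str = "cl100k_base", context_limit: int = 16_000):
        import tiktoken
        self._enc = tiktoken.get_encoding(encoding_name)
        self._limit = context_limit

    def count_tokens(self, text: str) -> int:
        return len(self._enc.encode(text))

    def max_context_tokens(self) -> int:
        return self._limit

def max_chunk_tokens(tokenizer: Tokenizer, prompt_overhead: int, n_chunks: int) -> int:
    """How many tokens each RAG chunk may use, given the model's context window."""
    return (tokenizer.max_context_tokens() - prompt_overhead) // n_chunks
```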
-
Support enabling or disabling the passing of event input and output data in stream events.
-
Support for cohere 5.0.
-
I would like dynamic few-shot prompt examples with an example selector that could take not only the human's input but also the whole chat history into consideration, like an example selector with memory. I'm not sure whether something like this already exists, but for now I had to implement my own example-selector chain.
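For reference, a sketch of what such a selector can look like today (the keyword-overlap scoring is a stand-in; a real version would score the history with embeddings):

```python
# Sketch of an example selector that scores examples against the whole chat
# history plus the new input; naive keyword overlap stands in for embeddings.
from typing import Any, Dict, List
from langchain_core.example_selectors import BaseExampleSelector

class ChatHistoryExampleSelector(BaseExampleSelector):
    def __init__(self, examples: List[Dict[str, str]], k: int = 2):
        self.examples = examples
        self.k = k

    def add_example(self, example: Dict[str, str]) -> None:
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, Any]) -> List[dict]:
        # The caller passes the rendered chat history along with the latest input.
        query = f"{input_variables.get('chat_history', '')} {input_variables.get('input', '')}"
        query_words = set(query.lower().split())

        def overlap(example: Dict[str, str]) -> int:
            return len(query_words & set(" ".join(example.values()).lower().split()))

        return sorted(self.examples, key=overlap, reverse=True)[: self.k]
```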
-
As we add more capability into the LangChain family, I find the solutions require a higher level of knowledge, more code, and more use-case-specific approaches. My personal opinion is that we are trending in the wrong direction in those respects. I thought a good step forward was the demo Lance did around LangChain codegen using the LangChain docs in context. Is there any plan to move further towards enabling us to build solutions with natural language, rather than trending the other way?
-
I like the idea of improving documentation for each use case and would love to see improvement on the following two things:
-
Hi all - you may have seen that we just released LangChain v0.1.0. See the discussion topic here: #15712 (comment)
This discussion is all about planning for v0.2.0. We have some ideas of what we want to do, but we want to hear what you want as well!
Things we are currently planning:

Continued Updates
- `create_xyz_chain` (rather than use the chain classes)

Improvements

New
- `langgraph` to create different types of agent executors
- `langconnect` for production use cases (scalability, speed, cost). For this we are still in alpha stages; please reach out if you want to be a design partner

Thoughts? What else should we focus on/add?