You might consider using a conversational agent instead of a chain. An agent would be able to "decide" whether it should look up information in the datastore or just respond to the query directly.
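As a rough illustration of that idea (a plain-Python sketch, not real LangChain calls — `search_datastore`, `llm_reply`, and the tiny in-memory knowledge base are all hypothetical stand-ins), the agent-style flow lets retrieval be skipped entirely when nothing in the datastore matches:

```python
def search_datastore(query):
    # Hypothetical retrieval stand-in: returns (answer, sources) or (None, []).
    knowledge = {
        "solar system": ("The solar system has eight planets.", ["astro.md"]),
    }
    for topic, (answer, sources) in knowledge.items():
        if topic in query.lower():
            return answer, sources
    return None, []

def llm_reply(query):
    # Hypothetical direct-LLM stand-in, used for greetings / meta questions.
    # No retrieval happened, so no sources are attached.
    return "I am a knowledge-base assistant. Ask me about a topic I know.", []

def agent_respond(query):
    # The agent "decides": use the datastore only when it actually matches;
    # otherwise answer directly, with an empty source list.
    answer, sources = search_datastore(query)
    if answer is None:
        return llm_reply(query)
    return answer, sources
```

With this routing, a greeting like "Hi" never reaches the embedding search, so the bot cannot return spurious sources for it.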
Stack used -
Here are some examples of questions with bad answers -
Q: "Hi" or "Hi, who are you"
A: It describes itself using the "system" instruction provided in the prompt. However, it also returns the top chunks from the embeddings search as sources, per LangChain. The gpt-turbo LLM correctly understands that the sources are not relevant, but LangChain never learns this — by then it has already returned the sources from the embedding search.
Q: "Good morning"
A: "I am not sure about this. Ask me about <some random topic it picks from the knowledge base>."
Q: "Ok, thanks" as a reply to some answer the bot has given
A: Same reply as for "Good morning", because it doesn't match any topic in the knowledge base.
Q: "What are you" as the first message
A: Replies correctly from the "system" message already provided in the prompt.
Q: "What are you" or "Who are you" as a reply after the bot has answered a question like "Tell me about the solar system"
A: Because it remembers the chat history (the client sends it with each query), the bot talks about the solar system again. The expected reply should have come from the "system" instruction in the prompt.
When the same types of questions are asked in OpenAI's ChatGPT (turbo), which doesn't have our knowledge base, it answers all of them correctly according to the query's intention.
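One way to mitigate the first problem above (sources returned even for greetings) is to filter the embedding-search hits by similarity score before reporting them. This is a sketch with a hypothetical `filter_sources` helper and an arbitrary threshold, not part of the real LangChain pipeline:

```python
def filter_sources(chunks, threshold=0.75):
    # chunks: list of (source_name, similarity_score) pairs from the
    # embedding search. Only report sources whose similarity clears the
    # threshold, so a query like "Hi" (which matches nothing strongly)
    # returns no sources at all.
    return [name for name, score in chunks if score >= threshold]
```

The right threshold would need tuning against real queries; the point is only that the source list becomes conditional on retrieval quality instead of being attached unconditionally.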
Code -
Prompts -