ValueError: cannot reshape array of size 1024 into shape (3, newaxis) #1854
Comments
Hi @hsx1612727380, would it be possible for you to share the metrics you are using? That would help me reproduce the error and work on a fix.
Code: `metrics = [ … ]` (the full list is reproduced in the reply below)

evaluator_llm and evaluator_embeddings are encapsulated evaluator models.
Hi @hsx1612727380, I ran the code, and it worked perfectly fine without any error messages on my end. I'm currently using Ragas version 0.2.11 and langchain-openai version 0.2.14. Could you please try running the code below on your end and let me know if you encounter the same issue?

```python
import os
from dotenv import load_dotenv

load_dotenv()

from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from langchain_openai.embeddings import OpenAIEmbeddings
from langchain_openai import ChatOpenAI

evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))
evaluator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings(model="text-embedding-3-small"))

from ragas.metrics import (
    AnswerRelevancy,
    ContextPrecision,
    Faithfulness,
    NoiseSensitivity,
    SemanticSimilarity,
    FactualCorrectness,
    LLMContextRecall,
    ContextEntityRecall,
)

metrics = [
    AnswerRelevancy(embeddings=evaluator_embeddings, llm=evaluator_llm),  # response relevancy (question-answer)
    # ResponseRelevancy(embeddings=evaluator_embeddings),
    ContextPrecision(llm=evaluator_llm),                  # context precision (question-query)
    Faithfulness(llm=evaluator_llm),                      # faithfulness (context-answer)
    NoiseSensitivity(llm=evaluator_llm),                  # noise sensitivity (context-answer)
    SemanticSimilarity(embeddings=evaluator_embeddings),  # semantic similarity of the answer (answer-ground_truth)
    FactualCorrectness(llm=evaluator_llm),                # factual correctness (answer-ground_truth)
    LLMContextRecall(llm=evaluator_llm),                  # context recall (context-ground_truth)
    ContextEntityRecall(llm=evaluator_llm),               # context entity recall (context-ground_truth)
]

from ragas.dataset_schema import SingleTurnSample, EvaluationDataset
from ragas import evaluate

samples = []
sample = SingleTurnSample(
    user_input='Hello',
    retrieved_contexts=['World', 'nihao'],
    response='你好',
    reference='Hi',
)
samples.append(sample)

dataset = EvaluationDataset(samples=samples)
results = evaluate(dataset=dataset, metrics=metrics)
results
```

Output:
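For what it's worth, the `(3, newaxis)` shape in the traceback points at the embedding step of AnswerRelevancy: the metric embeds the questions it generates from the response and reshapes the result into one row per generated question. Below is a simplified sketch of that reshape (an illustration, not the exact Ragas source); if the embeddings backend returns a single flat 1024-dimensional vector for 3 input texts, the reshape fails with exactly the reported message:

```python
import numpy as np

def rows_per_generated_question(question_vecs, n_questions=3):
    """Reshape question embeddings into one row per generated question,
    as AnswerRelevancy-style similarity code does (simplified sketch)."""
    # Expected input: one embedding per question, e.g. 3 x 1024 floats.
    # A backend that returns a single flat 1024-float vector makes the
    # array size 1024, and 1024 is not divisible by 3, so reshape raises:
    # ValueError: cannot reshape array of size 1024 into shape (3,newaxis)
    return np.asarray(question_vecs).reshape(n_questions, -1)

rows_per_generated_question(np.zeros((3, 1024)))  # works: shape (3, 1024)
# rows_per_generated_question(np.zeros(1024))     # fails like the report
```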
Code:

```python
llm = ChatOpenAI(
    openai_api_key="668f9a3643xxxxxxxxccd944baf",
    base_url="https://test.int.com/v1",
)
embeddings = GPTEmbeddings(
    api_key="3e12397sss8xxxxxxx1e74498c292e",
    model="bce",
    deployment="bce",
    user="xxxx",
    base_url="https://test.int.com/v1/",
)
evaluator_llm = LangchainLLMWrapper(llm)
evaluator_embeddings = LangchainEmbeddingsWrapper(embeddings)
```

These are evaluator_llm and evaluator_embeddings. I can't see anything wrong here; maybe there's a problem with the encapsulated LLM model? But many metrics give normal results. This is the result:
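One quick way to rule the encapsulated model in or out (a diagnostic sketch: `GPTEmbeddings` is your custom wrapper, so adjust as needed) is to check that `embed_documents` returns one vector per input text. AnswerRelevancy embeds several generated questions in a single batch, so a wrapper that returns one flattened vector for the whole batch would trigger exactly this reshape error while leaving the purely LLM-based metrics working, which would match "many metrics have normal results":

```python
# Diagnostic sketch: the LangChain Embeddings interface expects
# embed_documents(texts) to return one vector per input text.
texts = ["first", "second", "third"]
vectors = embeddings.embed_documents(texts)

assert len(vectors) == 3, f"expected 3 vectors, got {len(vectors)}"
print(len(vectors[0]))  # embedding dimension, e.g. 1024
# A single flat list of 1024 floats here would explain
# "cannot reshape array of size 1024 into shape (3,newaxis)".
```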
@shahules786 can you take a look at this one?
- [ ] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
Execution error: `ValueError: cannot reshape array of size 1024 into shape (3,newaxis)`
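For context, the error itself is plain NumPy: an array of 1024 elements cannot be split into 3 equal rows. A minimal reproduction of the same message:

```python
import numpy as np

flat = np.zeros(1024)  # e.g. a single 1024-dimensional embedding, flattened
flat.reshape(3, -1)    # ValueError: cannot reshape array of size 1024 into shape (3,newaxis)
```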
Code Examples
This community speaks code. Share your code snippets to help us understand your question better.
Code:
Additional context
Anything else you want to share with us?