How do I run inference with LangChain on a quantized model generated by llama.cpp? #232
Unanswered
Beyondliby27 asked this question in Q&A
Replies: 2 comments 6 replies
-
Why did the conversion only produce a gguf file, with no corresponding bin file?
0 replies
-
gguf is the old bin; only the file extension has changed. (A quick way to verify the format is sketched after this comment.)
6 replies
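As a quick sanity check (a sketch, not from the thread; the file name is illustrative), you can confirm that a weight file is in the GGUF format by reading its 4-byte magic:

```python
# Read the first four bytes of the weight file; GGUF files begin with the
# ASCII magic b"GGUF", whereas the older ggml .bin formats use different magics.
with open("ggml-model-q8_0.gguf", "rb") as f:
    magic = f.read(4)

print(magic == b"GGUF")  # True for a GGUF file
```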
-
I used llama.cpp to generate the model weight file ggml-model-q8_0.gguf. How do I use it with LangChain? Is there a tutorial for this?
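One common way to consume a GGUF file from LangChain is through its LlamaCpp wrapper, which is backed by llama-cpp-python. Below is a minimal sketch (an assumption, not an answer from this thread) using the langchain-community package; the model path and sampling parameters are placeholders to adjust for your own setup.

```python
# Minimal sketch: load a llama.cpp-quantized GGUF model via LangChain's
# LlamaCpp integration (backed by llama-cpp-python).
# Install first:  pip install langchain-community llama-cpp-python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./ggml-model-q8_0.gguf",  # the file produced by llama.cpp
    n_ctx=4096,         # context window size
    temperature=0.7,    # sampling temperature
    max_tokens=256,     # cap on generated tokens
    verbose=False,
)

# Run a single prompt through the quantized model.
print(llm.invoke("Briefly explain what the GGUF format is."))
```

On older LangChain releases the same class is imported as `from langchain.llms import LlamaCpp` and the instance is called directly instead of via `invoke`; the constructor arguments are the same.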