doc: correct wrong file size
ymcui committed Jan 29, 2024
1 parent b8d31ea commit 622d760
Showing 2 changed files with 2 additions and 2 deletions.
README.md: 1 addition, 1 deletion
@@ -95,7 +95,7 @@ Mixtral is a sparse mixture-of-experts model. Unlike previous mainstream models such as LLaMA, this model
- **LoRA model**: Cannot be used on its own; it must be merged with the original [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) to obtain the full model. Recommended for users with limited network bandwidth who already have the original Mixtral. For the merging procedure, see: [**💻 Model Merging Steps**](https://github.com/ymcui/Chinese-Mixtral/wiki/model_conversion_zh)
- **GGUF model**: GGUF quantized models compatible with tools such as [llama.cpp](https://github.com/ggerganov/llama.cpp); recommended for users who only need inference deployment.

- | Model Name | Type | Setting | Full Version (94 GB) | LoRA Version (2.4 GB) | GGUF Version |
+ | Model Name | Type | Setting | Full Version (87 GB) | LoRA Version (2.4 GB) | GGUF Version |
| :------------------------ | :------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| Chinese-Mixtral | Base Model | 8x7B | [[Baidu]](https://pan.baidu.com/s/1nwJ8JkMTUrCkDEccg7C9Pw?pwd=33kb) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral) | [[Baidu]](https://pan.baidu.com/s/1XRw2-rumi-Pg0CrXqEh8ug?pwd=8gjk) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-lora) | [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-gguf) |
| Chinese-Mixtral-Instruct | Instruction Model | 8x7B | [[Baidu]](https://pan.baidu.com/s/1ogGipoWgTsojGai5cSxdoQ?pwd=dq7x) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-instruct) | [[Baidu]](https://pan.baidu.com/s/1hX_mrYE1U1FlUEToclEOwA?pwd=h2ng) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-instruct-lora) | [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-instruct-gguf) |
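The corrected figure can be sanity-checked locally after downloading the full-version shards. A minimal sketch, assuming only that all weight files sit under one checkpoint directory (the helper name and directory layout below are illustrative, not from this repository):

```python
from pathlib import Path

def total_size_gib(model_dir: str) -> float:
    """Sum the sizes of every file under a downloaded checkpoint
    directory and report the total in GiB (bytes / 2**30)."""
    total_bytes = sum(p.stat().st_size
                      for p in Path(model_dir).rglob("*") if p.is_file())
    return total_bytes / 2**30

# Example (hypothetical local path): total_size_gib("chinese-mixtral")
# should land near the 87 GiB figure in the table above.
```

Reporting in GiB (2^30 bytes) rather than decimal GB matters here, since the two differ by about 7% at this scale.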
README_EN.md: 1 addition, 1 deletion
@@ -95,7 +95,7 @@ Three different types of models are provided below:
- **LoRA Version Model**: Cannot be used alone; it must be merged with the original [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) to obtain the full model. Recommended for users with limited network bandwidth who already have the original Mixtral. For the merging method, please refer to: [**💻 Model Merging Steps**](https://github.com/ymcui/Chinese-Mixtral/wiki/model_conversion_en)
- **GGUF Version Model**: A GGUF quantized model compatible with tools such as [llama.cpp](https://github.com/ggerganov/llama.cpp); recommended for users who only need inference deployment.

- | Model Name | Type | Setting | Full Version (94 GB) | LoRA Version (2.4 GB) | GGUF Version |
+ | Model Name | Type | Setting | Full Version (87 GB) | LoRA Version (2.4 GB) | GGUF Version |
| ------------------------ | :---------------: | :-----: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| Chinese-Mixtral | Base Model | 8x7B | [[Baidu]](https://pan.baidu.com/s/1nwJ8JkMTUrCkDEccg7C9Pw?pwd=33kb) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral) | [[Baidu]](https://pan.baidu.com/s/1XRw2-rumi-Pg0CrXqEh8ug?pwd=8gjk) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-lora) | [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-gguf) |
| Chinese-Mixtral-Instruct | Instruction Model | 8x7B | [[Baidu]](https://pan.baidu.com/s/1ogGipoWgTsojGai5cSxdoQ?pwd=dq7x) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-instruct) | [[Baidu]](https://pan.baidu.com/s/1hX_mrYE1U1FlUEToclEOwA?pwd=h2ng) [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-instruct-lora) | [[🤗HF]](https://huggingface.co/hfl/chinese-mixtral-instruct-gguf) |
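The corrected value is also consistent with back-of-the-envelope arithmetic, assuming Mixtral-8x7B's roughly 46.7B total parameters (an approximation not stated in this commit) stored at 16-bit precision and the table reporting binary GiB:

```python
params = 46.7e9            # approximate total parameters of Mixtral-8x7B (assumption)
size_bytes = params * 2    # 2 bytes per parameter at 16-bit precision
size_gib = size_bytes / 2**30
print(round(size_gib))     # ~87, matching the corrected table entry
```

The same byte count read as decimal GB is about 93, close to the old 94 GB figure, which suggests the fix is a units correction as much as a size correction.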
