💄 style: fix tag version and add provider changelog #5582

Merged
merged 4 commits on Jan 25, 2025
25 changes: 25 additions & 0 deletions docs/changelog/2025-01-22-new-ai-provider.mdx
@@ -0,0 +1,25 @@
---
title: LobeChat Launches New AI Provider Management System
description: >-
  LobeChat has revamped its AI Provider Management System, now supporting custom AI providers and models.
tags:
- LobeChat
- AI Provider
- Provider Management
- Multimodal
---

# New AI Provider Management System 🎉

We are excited to announce that LobeChat has launched a brand new AI Provider Management System, now available in both the open-source version and the Cloud version ([lobechat.com](https://lobechat.com)):

## 🚀 Key Updates

- 🔮 **Custom AI Providers**: You can now add, remove, or edit AI providers as needed.
- ⚡️ **Custom Model and Capability Configuration**: Easily add your own models to meet personalized requirements.
- 🌈 **Multimodal Support**: The new AI Provider Management System fully supports various modalities, including language, images, voice, and more. Stay tuned for video and music generation features!

## 📢 Feedback and Support

If you have any suggestions or thoughts about the new AI Provider Management System, feel free to engage with us in GitHub Discussions.
23 changes: 23 additions & 0 deletions docs/changelog/2025-01-22-new-ai-provider.zh-CN.mdx
@@ -0,0 +1,23 @@
---
title: LobeChat Launches a New AI Provider Management System
description: LobeChat has revamped its AI Provider management system, now supporting custom AI providers and custom models
tags:
- LobeChat
- AI Provider
- Provider Management
- Multimodal
---

# New AI Provider Management System 🎉

We are excited to announce that LobeChat has launched a brand-new AI Provider Management System, now available in both the open-source version and the Cloud version ([lobechat.com](https://lobechat.com)):

## 🚀 Key Updates

- 🔮 **Custom AI Providers**: You can now add, remove, or edit AI providers as needed.
- ⚡️ **Custom Model and Capability Configuration**: Easily add your own models to meet personalized requirements.
- 🌈 **Multimodal Support**: The new AI Provider Management System fully supports multiple modalities, including language, images, voice, and more. Video and music generation features are coming soon!

## 📢 Feedback and Support

If you have any suggestions or thoughts about the new AI Provider Management System, feel free to engage with us in GitHub Discussions.
40 changes: 23 additions & 17 deletions docs/changelog/index.json
@@ -2,107 +2,113 @@
"$schema": "https://github.com/lobehub/lobe-chat/blob/main/docs/changelog/schema.json",
"cloud": [],
"community": [
{
"image": "https://github.com/user-attachments/assets/7350f211-61ce-488e-b0e2-f0fcac25caeb",
"id": "2025-01-22-new-ai-provider",
"date": "2025-01-22",
"versionRange": ["1.43.1", "1.47.7"]
},
{
"image": "https://github.com/user-attachments/assets/3d80e0f5-d32a-4412-85b2-e709731460a0",
"id": "2025-01-03-user-profile",
"date": "2025-01-03",
"versionRange": ["1.43.0", "1.43.1"]
"versionRange": ["1.34.1", "1.43.0"]
},
{
"image": "https://github.com/user-attachments/assets/2048b4c2-4a56-4029-acf9-71e35ff08652",
"id": "2024-11-27-forkable-chat",
"date": "2024-11-27",
"versionRange": ["1.34.0", "1.33.1"]
"versionRange": ["1.33.1", "1.34.0"]
},
{
"image": "https://github.com/user-attachments/assets/fa8fab19-ace2-4f85-8428-a3a0e28845bb",
"id": "2024-11-25-november-providers",
"date": "2024-11-25",
"versionRange": ["1.33.0", "1.30.1"]
"versionRange": ["1.30.1", "1.33.0"]
},
{
"image": "https://github.com/user-attachments/assets/eb3f3d8a-79ce-40aa-a206-2c846206c0c0",
"id": "2024-11-06-share-text-json",
"date": "2024-11-06",
"versionRange": ["1.28.0", "1.26.1"]
"versionRange": ["1.26.1", "1.28.0"]
},
{
"image": "https://github.com/user-attachments/assets/e70c2db6-05c9-43ea-b111-6f6f99e0ae88",
"id": "2024-10-27-pin-assistant",
"date": "2024-10-27",
"versionRange": ["1.26.0", "1.19.1"]
"versionRange": ["1.19.1", "1.26.0"]
},
{
"image": "https://github.com/user-attachments/assets/635f1c74-6327-48a8-a8d9-68d7376c7749",
"id": "2024-09-20-artifacts",
"date": "2024-09-20",
"versionRange": ["1.19.0", "1.17.1"]
"versionRange": ["1.17.1", "1.19.0"]
},
{
"image": "https://github.com/user-attachments/assets/bd6d0c82-8f14-4167-ad09-2a841f1e34e4",
"id": "2024-09-13-openai-o1-models",
"date": "2024-09-13",
"versionRange": ["1.17.0", "1.12.1"]
"versionRange": ["1.12.1", "1.17.0"]
},
{
"image": "https://github.com/user-attachments/assets/385eaca6-daea-484a-9bea-ba7270b4753d",
"id": "2024-08-21-file-upload-and-knowledge-base",
"date": "2024-08-21",
"versionRange": ["1.12.0", "1.8.1"]
"versionRange": ["1.8.1", "1.12.0"]
},
{
"image": "https://github.com/user-attachments/assets/2a4116a7-15ad-43e5-b801-cc62d8da2012",
"id": "2024-08-02-lobe-chat-database-docker",
"date": "2024-08-02",
"versionRange": ["1.8.0", "1.6.1"]
"versionRange": ["1.6.1", "1.8.0"]
},
{
"image": "https://github.com/user-attachments/assets/0e3a7174-6b66-4432-a319-dff60b033c24",
"id": "2024-07-19-gpt-4o-mini",
"date": "2024-07-19",
"versionRange": ["1.6.0", "1.0.1"]
"versionRange": ["1.0.1", "1.6.0"]
},
{
"image": "https://github.com/user-attachments/assets/82bfc467-e0c6-4d99-9b1f-18e4aea24285",
"id": "2024-06-19-lobe-chat-v1",
"date": "2024-06-19",
"versionRange": ["1.0.0", "0.147.0"]
"versionRange": ["0.147.0", "1.0.0"]
},
{
"image": "https://github.com/user-attachments/assets/aee846d5-b5ee-46cb-9dd0-d952ea708b67",
"id": "2024-02-14-ollama",
"date": "2024-02-14",
"versionRange": ["0.127.0", "0.125.1"]
"versionRange": ["0.125.1", "0.127.0"]
},
{
"image": "https://github.com/user-attachments/assets/533f7a5e-8a93-4a57-a62f-8233897d72b5",
"id": "2024-02-08-sso-oauth",
"date": "2024-02-08",
"versionRange": ["0.125.0", "0.118.1"]
"versionRange": ["0.118.1", "0.125.0"]
},
{
"image": "https://github.com/user-attachments/assets/6069332b-8e15-4d3c-8a77-479e8bc09c23",
"id": "2023-12-22-dalle-3",
"date": "2023-12-22",
"versionRange": ["0.118.0", "0.102.1"]
"versionRange": ["0.102.1", "0.118.0"]
},
{
"image": "https://github.com/user-attachments/assets/03433283-08a5-481a-8f6c-069b2fc6bace",
"id": "2023-11-19-tts-stt",
"date": "2023-11-19",
"versionRange": ["0.102.0", "0.101.1"]
"versionRange": ["0.101.1", "0.102.0"]
},
{
"image": "https://github.com/user-attachments/assets/dde2c9c5-cdda-4a65-8f32-b6f4da907df2",
"id": "2023-11-14-gpt4-vision",
"date": "2023-11-14",
"versionRange": ["0.101.0", "0.90.0"]
"versionRange": ["0.90.0", "0.101.0"]
},
{
"image": "https://github.com/user-attachments/assets/eaed3762-136f-4297-b161-ca92a27c4982",
"id": "2023-09-09-plugin-system",
"date": "2023-09-09",
"versionRange": ["0.72.0", "0.67.0"]
"versionRange": ["0.67.0", "0.72.0"]
}
]
}
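The substance of this PR's `index.json` change is flipping each `versionRange` so it reads `[older, newer]` rather than `[newer, older]`. A check for that invariant can be sketched as follows (a minimal illustration, not part of the LobeChat codebase — `compareSemver` and `isValidRange` are hypothetical helper names):

```typescript
// Minimal semver comparison: split "1.43.0" into numeric parts and
// compare them left to right; returns <0, 0, or >0 like a comparator.
const compareSemver = (a: string, b: string): number => {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (diff !== 0) return diff;
  }
  return 0;
};

// A changelog entry's versionRange is well-formed when it reads [older, newer].
const isValidRange = ([min, max]: [string, string]): boolean =>
  compareSemver(min, max) < 0;

console.log(isValidRange(['1.43.1', '1.47.7'])); // corrected order → true
console.log(isValidRange(['1.34.0', '1.33.1'])); // pre-fix reversed order → false
```

Running a loop of such checks over the `community` array would have caught every reversed pair this commit corrects.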
1 change: 1 addition & 0 deletions docs/self-hosting/advanced/auth/next-auth/casdoor.mdx
@@ -89,6 +89,7 @@ If you are deploying using a public network, the following assumptions apply:
}
}


</style>
```

@@ -86,6 +86,7 @@ tags:
}
}


</style>
```

1 change: 1 addition & 0 deletions docs/self-hosting/advanced/knowledge-base.mdx
@@ -68,6 +68,7 @@ By correctly configuring and integrating these core components, you can build a
- **Purpose**: Uses different embedding models to generate vector representations of text for semantic search
- **Options**: Supported model providers: zhipu/github/openai/bedrock/ollama
- **Deployment Tip**: Used to configure the default embedding model

```
environment: DEFAULT_FILES_CONFIG=embedding_model=openai/embedding-text-3-small
```
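The `DEFAULT_FILES_CONFIG` value above packs a setting name, a provider, and a model into one `key=provider/model` string. How such a string could be taken apart can be sketched like this (illustrative only — `parseFilesConfig` and the `EmbeddingConfig` shape are hypothetical, not LobeChat's actual parser):

```typescript
// Hypothetical parser for a DEFAULT_FILES_CONFIG-style string such as
// "embedding_model=openai/embedding-text-3-small". Assumes well-formed input.
interface EmbeddingConfig {
  key: string;      // e.g. "embedding_model"
  provider: string; // e.g. "openai"
  model: string;    // e.g. "embedding-text-3-small"
}

const parseFilesConfig = (raw: string): EmbeddingConfig => {
  const [key, value] = raw.split('=');
  // Split only on the first "/" so model names containing "/" survive.
  const [provider, ...modelParts] = value.split('/');
  return { key, provider, model: modelParts.join('/') };
};

const cfg = parseFilesConfig('embedding_model=openai/embedding-text-3-small');
console.log(cfg.provider); // "openai"
console.log(cfg.model);    // "embedding-text-3-small"
```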
1 change: 1 addition & 0 deletions docs/self-hosting/advanced/knowledge-base.zh-CN.mdx
@@ -66,6 +66,7 @@ Unstructured.io is a powerful document-processing tool.
- **Purpose**: Uses different embedding models to generate vector representations of text for semantic search
- **Options**: Supported model providers: zhipu/github/openai/bedrock/ollama
- **Deployment Tip**: Configure the default embedding model via an environment variable

```
environment: DEFAULT_FILES_CONFIG=embedding_model=openai/embedding-text-3-small
```
73 changes: 28 additions & 45 deletions docs/self-hosting/advanced/observability/langfuse.mdx
@@ -24,67 +24,50 @@ tags:
## Get Started

<Steps>
### Set up Langfuse

Get your Langfuse API key by signing up for [Langfuse Cloud](https://cloud.langfuse.com) or [self-hosting](https://langfuse.com/docs/deployment/self-host) Langfuse.

### Set up LobeChat

There are multiple ways to [self-host LobeChat](https://lobehub.com/docs/self-hosting/start). For this example, we will use the Docker Desktop deployment.

<Tabs items={["Environment Variables", "Example in Docker Desktop"]}>
<Tab>
Before deploying LobeChat, set the following four environment variables with the Langfuse API keys you created in the previous step.

```sh
ENABLE_LANGFUSE = '1'
LANGFUSE_SECRET_KEY = 'sk-lf...'
LANGFUSE_PUBLIC_KEY = 'pk-lf...'
LANGFUSE_HOST = 'https://cloud.langfuse.com'
```
</Tab>

<Tab>
Before running the Docker container, set the environment variables in the Docker Desktop with the Langfuse API keys you created in the previous step.

<Image alt={'Environment Variables in Docker Desktop'} src={'https://langfuse.com/images/docs/lobechat-docker-desktop-env.png'} />
</Tab>
</Tabs>

### Activate Analytics in Settings

Once you have LobeChat running, navigate to the **About** tab in the **Settings** and activate analytics. This is necessary for traces to be sent to Langfuse.

<Image alt={'LobeChat Settings'} src={'https://langfuse.com/images/docs/lobechat-settings.png'} />

### See Chat Traces in Langfuse

After setting your LLM model key, you can start interacting with your LobeChat application.

<Image alt={'LobeChat Conversation'} src={'https://langfuse.com/images/docs/lobechat-converstation.png'} />

All conversations in the chat are automatically traced and sent to Langfuse. You can view the traces in the [Traces section](https://langfuse.com/docs/tracing) in the Langfuse UI.

<Image alt={'LobeChat Example Trace'} src={'https://langfuse.com/images/docs/lobechat-example-trace.png'} />

*[Example trace in the Langfuse UI](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/63e9246d-3f22-4e45-936d-b0c4ccf55a1e?timestamp=2024-11-26T17%3A00%3A02.028Z\&observation=7ea75a0c-d9d1-425c-9b88-27561c63b413)*
</Steps>
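All four Langfuse environment variables described in the steps above must be present before traces can flow. A pre-flight check could be sketched like this (illustrative only — `isLangfuseConfigured` is not part of the LobeChat codebase; the `sk-lf`/`pk-lf` prefixes follow the key format shown in the example values above):

```typescript
// Sketch of a startup check for the Langfuse tracing configuration.
type Env = Record<string, string | undefined>;

const isLangfuseConfigured = (env: Env): boolean =>
  env.ENABLE_LANGFUSE === '1' &&
  (env.LANGFUSE_SECRET_KEY ?? '').startsWith('sk-lf') &&
  (env.LANGFUSE_PUBLIC_KEY ?? '').startsWith('pk-lf') &&
  Boolean(env.LANGFUSE_HOST);

console.log(
  isLangfuseConfigured({
    ENABLE_LANGFUSE: '1',
    LANGFUSE_SECRET_KEY: 'sk-lf-example',
    LANGFUSE_PUBLIC_KEY: 'pk-lf-example',
    LANGFUSE_HOST: 'https://cloud.langfuse.com',
  }),
); // true
```

Failing fast on a missing or malformed key is cheaper than debugging why no traces appear in the Langfuse UI.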

## Feedback
2 changes: 1 addition & 1 deletion docs/self-hosting/server-database.mdx
@@ -95,7 +95,7 @@ Since we support file-based conversations/knowledge base conversations, we need

<Callout type={'info'}>
  You can generate a random 32-character string as the value of `KEY_VAULTS_SECRET` using `openssl rand -base64 32`.
</Callout>
</Steps>
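The `openssl rand -base64 32` command in the callout above can be reproduced with Node's built-in `crypto` module — a sketch for hosts without `openssl`, assuming a Node runtime (the variable name is illustrative):

```typescript
// Generate a KEY_VAULTS_SECRET equivalent to `openssl rand -base64 32`:
// 32 cryptographically random bytes, base64-encoded.
import { randomBytes } from 'node:crypto';

const keyVaultsSecret = randomBytes(32).toString('base64');
console.log(keyVaultsSecret.length); // 44 — base64 encodes 32 bytes as 44 characters
```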

4 changes: 2 additions & 2 deletions locales/ar/changelog.json
@@ -8,8 +8,8 @@
"allChangelog": "View all changelogs",
"description": "Follow the new features and improvements of {{appName}}",
"pagination": {
"older": "View earlier changes",
"prev": "Previous page"
"next": "Next page",
"older": "View earlier changes"
},
"readDetails": "Read the details",
"title": "Changelog",
27 changes: 27 additions & 0 deletions locales/ar/models.json
@@ -809,9 +809,18 @@
"hunyuan-functioncall": {
"description": "Hunyuan's latest MOE-architecture FunctionCall model, trained on high-quality FunctionCall data, with a 32K context window, ranking among the leaders on evaluation metrics across multiple dimensions."
},
"hunyuan-large": {
"description": "The Hunyuan-large model has roughly 389 billion total parameters, with about 52 billion active parameters, making it the industry's largest and best-performing open-source MoE model by parameter scale."
},
"hunyuan-large-longcontext": {
"description": "Excels at long-text tasks such as document summarization and document Q&A, while also handling general text-generation tasks. It performs outstandingly in long-text analysis and generation, effectively meeting complex and detailed long-form content-processing needs."
},
"hunyuan-lite": {
"description": "Upgraded to an MOE architecture with a 256k context window, leading many open-source models on NLP, code, math, and industry benchmarks."
},
"hunyuan-lite-vision": {
"description": "Hunyuan's latest 7B multimodal model, with a 32K context window, supports multimodal dialogue in Chinese and English scenarios, image object recognition, document and table understanding, and multimodal math, outperforming 7B competitor models on evaluation metrics across multiple dimensions."
},
"hunyuan-pro": {
"description": "A trillion-parameter MOE-32K long-text model. Reaches an absolutely leading level on various benchmarks, handles complex instructions and reasoning, offers sophisticated math capabilities, supports function calling, and is specially optimized for multilingual translation, finance, law, and healthcare."
},
@@ -824,9 +833,24 @@
"hunyuan-standard-256K": {
"description": "Uses a better routing strategy while mitigating load-balancing and expert-convergence issues. For long texts, the needle-in-a-haystack metric reaches 99.9%. MOE-256K makes a further breakthrough in length and performance, greatly extending the acceptable input length."
},
"hunyuan-standard-vision": {
"description": "Hunyuan's latest multimodal model, supporting responses in multiple languages with balanced Chinese and English capabilities."
},
"hunyuan-turbo": {
"description": "Preview of the new-generation Hunyuan large language model, featuring a new mixture-of-experts (MoE) structure with faster inference and stronger performance than hunyuan-pro."
},
"hunyuan-turbo-20241120": {
"description": "The fixed version of hunyuan-turbo from November 20, 2024, sitting between hunyuan-turbo and hunyuan-turbo-latest."
},
"hunyuan-turbo-20241223": {
"description": "Improvements in this version: instruction data scaling, greatly improving the model's generalization; major improvements in math, code, and logical reasoning; optimized text understanding and word comprehension; optimized text-generation quality."
},
"hunyuan-turbo-latest": {
"description": "Overall experience optimization, covering NLU, text creation, casual chat, knowledge Q&A, translation, and other domains; improved human-likeness and emotional intelligence; better clarification of ambiguous intent; improved handling of word- and phrase-parsing questions; higher quality and interactivity in creative writing; improved multi-turn experience."
},
"hunyuan-turbo-vision": {
"description": "Hunyuan's new-generation flagship vision-language model, featuring a new mixture-of-experts (MoE) structure, with comprehensive improvements over the previous generation in image-text understanding, content creation, knowledge Q&A, and analytical reasoning."
},
"hunyuan-vision": {
"description": "Hunyuan's latest multimodal model, supporting image and text input to generate text content."
},
@@ -1193,6 +1217,9 @@
"pro-128k": {
"description": "Spark Pro 128K comes with extra-large context processing, handling up to 128K of context information, making it especially suitable for long-form content that requires holistic analysis and long-range logical connections, and delivering smooth, consistent logic and diverse citation support in complex text communication."
},
"qvq-72b-preview": {
"description": "The QVQ model is an experimental research model developed by the Qwen team, focused on improving visual reasoning capabilities, especially in mathematical reasoning."
},
"qwen-coder-plus-latest": {
"description": "The comprehensive Qwen code model."
},