
[Bug] Pulled the latest code from the main branch and deployed it locally on a US West VPS; DeepSeek says I am not allowed to use it #6026

Open
Nicolas-Gong opened this issue Jan 2, 2025 · 27 comments
Labels: bug (Something isn't working)

@Nicolas-Gong

📦 Deployment Method

Other

📌 Software Version

2.15.8

💻 System Environment

Other Linux

📌 OS Version

Debian 11

🌐 Browser

Chrome

📌 Browser Version

131.0.6778.204

🐛 Problem Description

I added the following to the .env file:

CUSTOM_MODELS=-all,+glm-4-plus@ChatGLM,+glm-4-air@ChatGLM,+glm-4-long@ChatGLM,+glm-4-flash@ChatGLM,+glm-4v-flash@ChatGLM,+codegeex-4@ChatGLM,+cogview-3-flash@ChatGLM,+cogvideox-flash@ChatGLM,+deepseek-chat@DeepSeek

DEFAULT_MODEL=glm-4-flash

# DEEPSEEK
DEEPSEEK_URL=https://api.deepseek.com
DEEPSEEK_API_KEY=sk-fbxxxxxxxxxxxxxxxxxxxxxxxxxxx
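For reference, this CUSTOM_MODELS string uses `-all` to hide every built-in model, `+name` to add one, and `@provider` to pin it to a provider. The following TypeScript sketch shows how such a rule string could be parsed; it illustrates the syntax only, it is not NextChat's actual implementation, and `parseCustomModels` is a hypothetical name:

```typescript
// Hypothetical parser for a CUSTOM_MODELS-style rule string.
// "-all" hides everything, "+name@provider" adds a model under a
// provider, "-name" hides a single model. Not NextChat's real code.
type ModelRule =
  | { kind: "hide-all" }
  | { kind: "add"; name: string; provider?: string }
  | { kind: "hide"; name: string };

function parseCustomModels(spec: string): ModelRule[] {
  return spec
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((s): ModelRule => {
      if (s === "-all") return { kind: "hide-all" };
      if (s.startsWith("-")) return { kind: "hide", name: s.slice(1) };
      const body = s.startsWith("+") ? s.slice(1) : s;
      const [name, provider] = body.split("@"); // "@" pins the provider
      return { kind: "add", name, provider };
    });
}
```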

The request returns an error:

{
  "error": true,
  "message": "you are not allowed to use deepseek-chat model"
}

Is this error raised by the API or by the project itself? It looks like it comes from the project; how can I resolve it?
Also, I set DEFAULT_MODEL to glm-4-flash, but the app never defaults to it on open; it defaults to glm-4-plus or to nothing.
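The `{ "error": true, "message": ... }` shape suggests the response is generated by the project's own server-side model check before anything is forwarded upstream. A hedged sketch of what such a guard could look like (illustrative names; not the project's real code):

```typescript
// Illustrative server-side guard: reject models the deployment has not
// enabled. The allowlist would be derived from CUSTOM_MODELS; the
// function name and shapes here are hypothetical.
function checkModelAllowed(
  requestedModel: string,
  enabledModels: Set<string>,
): { error: true; message: string } | null {
  if (!enabledModels.has(requestedModel)) {
    return {
      error: true,
      message: `you are not allowed to use ${requestedModel} model`,
    };
  }
  return null; // allowed: continue proxying the request upstream
}
```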

📷 Steps to Reproduce

No response

🚦 Expected Result

No response

📝 Additional Information

No response

@Nicolas-Gong Nicolas-Gong added the bug Something isn't working label Jan 2, 2025

@bestsanmao
Contributor

Recheck that you are running the latest commit from the main branch; I just used your env and it works.

The commits on Dec 30, 2024 fixed that issue.

@Nicolas-Gong
Author

recheck if it is latest commit code from main branch i just used your env and it works

the Commits on Dec 30, 2024 has fixed that issue

`git log` shows:

commit 0af04e0f2f5af2c39cdd771b2ebb496d9ca47f28 (HEAD -> main, origin/main, origin/HEAD)
Merge: 63c5baaa d184eb64
Author: RiverRay <[email protected]>
Date:   Tue Dec 31 16:23:10 2024 +0800

    Merge pull request #5468 from DDMeaqua/feat-shortcutkey
    
    feat: #5422 快捷键清除上下文

commit d184eb64585562de7f75e1ff7d291eb242b2f076
Author: DDMeaqua <[email protected]>
Date:   Tue Dec 31 14:50:54 2024 +0800

    chore: cmd + shift+ backspace

commit c5d9b1131ec932e53cd0394c283e24549f6426cb
Author: DDMeaqua <[email protected]>
Date:   Tue Dec 31 14:38:58 2024 +0800

    fix: merge bug

commit e13408dd2480c1726d0333d8ede3a937187f7991
Merge: aba4baf3 63c5baaa
Author: DDMeaqua <[email protected]>
Date:   Tue Dec 31 14:30:09 2024 +0800

    Merge branch 'main' into feat-shortcutkey

commit aba4baf38403dd717ee04f5555ba81749d9ee6c8
Author: DDMeaqua <[email protected]>
Date:   Tue Dec 31 14:25:43 2024 +0800

    chore: update

commit 6d84f9d3ae62da0c5d1617645f961d9f9e1a1a27
Author: DDMeaqua <[email protected]>
Date:   Tue Dec 31 13:27:15 2024 +0800

    chore: update

Built with `yarn install` and `yarn build`.
Node.js v18.20.5
Yarn 1.22.22

@bestsanmao
Contributor

bestsanmao commented Jan 2, 2025

> Also, I set DEFAULT_MODEL to glm-4-flash, but the app never defaults to it on open; it defaults to glm-4-plus or to nothing.

From this it looks very likely that the configuration file is not taking effect.
Check your env file: is each of these options defined only once, or is there an empty entry with the same name higher up?
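One way to act on this advice is to scan the env file for keys that are assigned more than once, since a later empty assignment can shadow an earlier value. A small generic sketch (dotenv-style parsing; `findDuplicateKeys` is a hypothetical helper, not part of NextChat):

```typescript
// Find keys that appear more than once in dotenv-style text.
// Later assignments typically win, so an empty duplicate below a real
// value (or above it, depending on the loader) can blank out a setting.
function findDuplicateKeys(envText: string): string[] {
  const counts = new Map<string, number>();
  for (const line of envText.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) continue; // skip comments
    const eq = trimmed.indexOf("=");
    if (eq <= 0) continue; // not a KEY=VALUE line
    const key = trimmed.slice(0, eq).trim();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()].filter(([, n]) => n > 1).map(([k]) => k);
}
```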


@Nicolas-Gong
Author

> Also, I set DEFAULT_MODEL to glm-4-flash, but the app never defaults to it on open; it defaults to glm-4-plus or to nothing.

> From this it looks very likely that the configuration file is not taking effect. Check your env file: is each of these options defined only once, or is there an empty entry with the same name higher up?

# Your openai api key. (required)
OPENAI_API_KEY=

# Access password, separated by comma. (optional)
CODE='a#'

# You can start service behind a proxy. (optional)
PROXY_URL=

# (optional)
# Default: Empty
# Google Gemini Pro API key, set if you want to use Google Gemini Pro API.
GOOGLE_API_KEY=

# (optional)
# Default: https://generativelanguage.googleapis.com/
# Google Gemini Pro API url without pathname, set if you want to customize Google Gemini Pro API url.
GOOGLE_URL=

# Override openai api request base url. (optional)
# Default: https://api.openai.com
# Examples: http://your-openai-proxy.com
BASE_URL=

# Specify OpenAI organization ID.(optional)
# Default: Empty
OPENAI_ORG_ID=

# (optional)
# Default: Empty
# If you do not want users to use GPT-4, set this value to 1.
DISABLE_GPT4=

# (optional)
# Default: Empty
# If you do not want users to input their own API key, set this value to 1.
HIDE_USER_API_KEY=

# (optional)
# Default: Empty
# If you do want users to query balance, set this value to 1.
ENABLE_BALANCE_QUERY=

# (optional)
# Default: Empty
# If you want to disable parse settings from url, set this value to 1.
DISABLE_FAST_LINK=

# (optional)
# Default: Empty
# To control custom models, use + to add a custom model, use - to hide a model, use name=displayName to customize model name, separated by comma.
CUSTOM_MODELS=-all,+glm-4-plus@ChatGLM,+glm-4-air@ChatGLM,+glm-4-long@ChatGLM,+glm-4-flash@ChatGLM,+glm-4v-flash@ChatGLM,+codegeex-4@ChatGLM,+cogview-3-flash@ChatGLM,+cogvideox-flash@ChatGLM,+deepseek-chat@DeepSeek

# (optional)
# Default: Empty
# Change default model
DEFAULT_MODEL=glm-4-flash

# anthropic claude Api Key.(optional)
ANTHROPIC_API_KEY=

### anthropic claude Api version. (optional)
ANTHROPIC_API_VERSION=

### anthropic claude Api url (optional)
ANTHROPIC_URL=

### (optional)
WHITE_WEBDAV_ENDPOINTS=

# CHATGLM
CHATGLM_URL=https://open.bigmodel.cn
CHATGLM_API_KEY=dd5e

# DEEPSEEK
DEEPSEEK_URL=https://api.deepseek.com
DEEPSEEK_API_KEY=sk-fb6

VISION_MODELS=glm-4v-flash,cogview-3-flash,cogvideox-flash

That is the entire contents of my .env file.

@bestsanmao
Contributor

DEFAULT_MODEL=glm-4-flash
You could try changing it to
DEFAULT_MODEL=glm-4-flash@ChatGLM

As for the "not allowed" error, I simply cannot reproduce it.
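The `@ChatGLM` suffix matters because a bare model name may match entries from several providers, and then the first match wins. A sketch of how a default-model string with an optional provider pin could be resolved (illustrative only; not the project's real code):

```typescript
// Resolve a DEFAULT_MODEL value that may or may not carry "@provider".
// When several providers expose the same model name, a bare name picks
// the first match, which may not be the provider you intended.
interface ModelEntry {
  name: string;
  provider: string;
}

function resolveDefaultModel(
  spec: string,
  available: ModelEntry[],
): ModelEntry | undefined {
  const [name, provider] = spec.split("@");
  return available.find(
    (m) => m.name === name && (provider === undefined || m.provider === provider),
  );
}
```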


@Nicolas-Gong
Author

> DEFAULT_MODEL=glm-4-flash You could try changing it to DEFAULT_MODEL=glm-4-flash@ChatGLM
> As for the "not allowed" error, I simply cannot reproduce it.

I tried it: running `yarn dev` directly works fine, but starting it through the Baota Panel (宝塔面板) gives the same error.
[screenshot]
Also, what exactly should the env file be named? Your docs say .env.local, but my `yarn dev` run picked up .env and worked. Both files now exist at the same time, and running again still produces the same error.


@bestsanmao
Contributor

@Dogtiti I'm not sure what's going on with the Baota Panel in that last question either.


@Nicolas-Gong
Author

> @Dogtiti I'm not sure what's going on with the Baota Panel in that last question either.

root@cloudnium:/www/wwwroot/ChatGPT-Next-Web# yarn build
yarn run v1.22.22
$ yarn mask && cross-env BUILD_MODE=standalone next build
$ npx tsx app/masks/build.ts
[Next] build mode standalone
[Next] build with chunk:  true
  ▲ Next.js 14.2.22
  - Environments: .env.local, .env
  - Experiments (use with caution):
    · forceSwcTransforms

Both .env files are recognized at build time, but `yarn start` definitely has a problem.

Judging from the runtime log, it still looks like the DeepSeek environment variables are not being picked up:

[Proxy] v1/chat/completions
[Base Url] https://api.openai.com
fetchUrl https://api.openai.com/v1/chat/completions

With `yarn dev`, which works, the log looks like this:

[Auth] use system api key
[Proxy]  /chat/completions
[Base Url] https://api.deepseek.com

@bestsanmao
Contributor

`D:\ChatGPT-Next-Web>yarn start
yarn run v1.22.22
$ next start
▲ Next.js 14.1.1

[Next] build mode standalone
[Next] build with chunk: true`

I tried it with `yarn start` and it works.
My environment: the code root directory contains .env but no .env.local.


@Nicolas-Gong
Author

> I tried it with `yarn start` and it works. My environment: the code root directory contains .env but no .env.local.

I also found another problem: any ChatGLM model with "flash" in its name cannot be used and returns the following (the message translates to "The current model does not support the SSE calling method."):

{
  "error": {
    "code": "1212",
    "message": "当前模型不支持SSE调用方式。"
  }
}

@Nicolas-Gong
Author

I dug into this today. Direct port access without a reverse proxy works fine, but behind an nginx reverse proxy DeepSeek stops working while every other model is fine.
My guess is that something breaks for DeepSeek behind the reverse proxy and the request ends up going to the OpenAI endpoint, so I went to read your documentation on reverse proxying.
This really should be handled uniformly. As it stands DeepSeek needs special treatment, and anyone who doesn't know that will probably waste as much time on it as I did.

# No caching; support streaming output
proxy_cache off;  # disable caching
proxy_buffering off;  # disable proxy buffering
chunked_transfer_encoding on;  # enable chunked transfer encoding
tcp_nopush on;  # send response headers and body start together
tcp_nodelay on;  # disable Nagle's algorithm
keepalive_timeout 300;  # keep-alive timeout of 300 seconds

@bestsanmao
Copy link
Contributor

反代后就出问题了?
您的系统有没有啥防火墙 难道反代时过滤了什么参数?


@Nicolas-Gong
Author

> So it only breaks behind the reverse proxy? Does your system have any kind of firewall? Could the reverse proxy be filtering some parameter?

It really does seem to be a reverse-proxy problem. There is a firewall (ufw), but it can't be blocking 443 or I couldn't reach the site at all. My guess is that selecting DeepSeek takes a different code path from the ChatGLM models, since all of those work. You could try it locally: install Baota Panel and set up a reverse proxy with the default settings, or configure nginx yourself.

server
{
    listen 80;
    listen 443 ssl http2;
    server_name xx.xxx.xxx;
    index index.html index.htm default.htm default.html;
    #    root /www/wwwroot/ChatGPT-Next-Web;
    #CERT-APPLY-CHECK--START
    # File-verification config used when applying for the SSL certificate -- do not delete
    include /www/server/panel/vhost/nginx/well-known/ChatGPT_Next_Web.conf;
    #CERT-APPLY-CHECK--END


    #SSL-START SSL-related configuration
    #error_page 404/404.html;
    ssl_certificate    /www/server/panel/vhost/cert/ChatGPT_Next_Web/fullchain.pem;
    ssl_certificate_key    /www/server/panel/vhost/cert/ChatGPT_Next_Web/privkey.pem;
    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    add_header Strict-Transport-Security "max-age=31536000";
    error_page 497  https://$host$request_uri;
    #SSL-END

    #ERROR-PAGE-START  Error-page configuration
    #error_page 404 /404.html;
    #error_page 502 /502.html;
    #ERROR-PAGE-END

    #REWRITE-START  Rewrite-rule configuration
    include /www/server/panel/vhost/rewrite/node_ChatGPT_Next_Web.conf;
    #REWRITE-END

    #Files and directories that must not be accessed
    location ~ ^/(\.user.ini|\.htaccess|\.git|\.svn|\.project|LICENSE|README.md|package.json|package-lock.json|\.env) {
        return 404;
    }

    #Verification directory for one-click SSL certificate issuance
    location /.well-known/ {
        root  /www/wwwroot/ChatGPT-Next-Web;
    }

    #Forbid sensitive files in the certificate-verification directory
    if ( $uri ~ "^/\.well-known/.*\.(php|jsp|py|js|css|lua|ts|go|zip|tar\.gz|rar|7z|sql|bak)$" ) {
        return 403;
    }


    # HTTP reverse-proxy configuration begins >>>
    location ~ /purge(/.*) {
        proxy_cache_purge cache_one $host$request_uri$is_args$args;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header REMOTE-HOST $remote_addr;
        add_header X-Cache $upstream_cache_status;
        proxy_set_header X-Host $host:$server_port;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 30s;
        proxy_read_timeout 86400s;
        proxy_send_timeout 30s;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    # HTTP reverse-proxy configuration ends <<<
    
    access_log  /www/wwwlogs/ChatGPT_Next_Web.log;
    error_log  /www/wwwlogs/ChatGPT_Next_Web.error.log;
}

This is the default reverse-proxy configuration that has the problem; adding the directives posted above makes the DeepSeek model work normally.

@bestsanmao
Contributor

A reverse proxy generally just forwards requests as-is and should not corrupt the content.
Is only one copy of NextChat installed on your server?
Could multiple versions have gotten mixed up?


@Nicolas-Gong
Author

> A reverse proxy generally just forwards requests as-is and should not corrupt the content. Is only one copy of NextChat installed on your server? Could multiple versions have gotten mixed up?

There are no multiple versions. First, this VPS is fairly new; second, the last time I touched this project was back when ChatGPT first took off, when the project had just started.
I just tried removing the nginx configuration I posted above and it still worked, which is strange; now I can't even narrow it down by elimination. Earlier, with the app started via `yarn start`, direct port access worked fine, while access through the reverse proxy reported that the DeepSeek model could not be used. So I suspected the reverse proxy, and it did only start working after I added that configuration.


@bestsanmao
Contributor

> This is the default reverse-proxy configuration that has the problem; adding the directives posted above makes the DeepSeek model work normally.

Thank you for the detailed testing.
Exactly which directives did you add to make it work?
Which configuration items make the difference?


@Nicolas-Gong
Author

> Thank you for the detailed testing. Exactly which directives did you add to make it work? Which configuration items make the difference?

# No caching; support streaming output
proxy_cache off;  # disable caching
proxy_buffering off;  # disable proxy buffering
chunked_transfer_encoding on;  # enable chunked transfer encoding
tcp_nopush on;  # send response headers and body start together
tcp_nodelay on;  # disable Nagle's algorithm
keepalive_timeout 300;  # keep-alive timeout of 300 seconds

These directives alone make it work; I have no way to test now exactly which one is responsible.
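Of the directives just listed, `proxy_buffering off` is the one most directly relevant to streamed (SSE) responses: with buffering on, nginx can hold the upstream response until a buffer fills or the response ends, which breaks clients that parse events incrementally. A minimal streaming-safe proxy block might look like this (a sketch, assuming the app listens on 127.0.0.1:3000 as in the configuration above):

```nginx
# Minimal SSE-friendly reverse-proxy block (illustrative sketch)
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_buffering off;        # deliver server-sent events as they arrive
    proxy_cache off;            # never serve streamed responses from cache
    proxy_read_timeout 86400s;  # allow long-lived event streams
}
```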
