[Bug] glm-4-flash is set as the default model, but requests still go to the OpenAI API endpoint #6053
Labels
bug
Something isn't working
Comments
📦 Deployment Method
Official installation package
📌 Software Version
v2.15.8
💻 System Environment
Other Linux
📌 System Version
CentOS 7.9
🌐 Browser
Chrome
📌 Browser Version
131.0.6778.264
🐛 Bug Description
1. I deployed with Docker and want glm-4-flash to be the default model; the deployment command is as follows.
2. After opening the web page and entering text, the frontend shows a connection error.
3. The runtime logs show that requests are sent to the OpenAI API endpoint, not the glm-4-flash API endpoint.
4. The current session displays glm-4-flash as the model, but requests still fail.

I suspected a browser cache issue, but even after reopening the page in an incognito window, requests still go to the OpenAI API endpoint instead of the glm-4-flash endpoint.
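The actual deployment command from step 1 is not included in the report. For context, a minimal sketch of what such a Docker deployment might look like is shown below; the environment variable names (`DEFAULT_MODEL`, `CUSTOM_MODELS`, `CHATGLM_API_KEY`, `CHATGLM_URL`) and the image name are assumptions based on this project's typical configuration, not details confirmed by this report:

```shell
# Hypothetical deployment sketch -- variable names and image are assumed,
# not taken from the (omitted) command in the original report.
docker run -d -p 3000:3000 \
  -e CHATGLM_API_KEY=your-chatglm-key \
  -e CHATGLM_URL=https://open.bigmodel.cn/api/paas/v4 \
  -e DEFAULT_MODEL=glm-4-flash \
  -e CUSTOM_MODELS="-all,+glm-4-flash@ChatGLM" \
  yidadaa/chatgpt-next-web
```

If the deployment only sets the default model but leaves the OpenAI provider configured as the active endpoint, the symptom described above (session shows glm-4-flash, logs show OpenAI URLs) could plausibly occur.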
📷 Steps to Reproduce
None
🚦 Expected Result
With glm-4-flash set as the default model, a newly opened chat should use the glm-4-flash model rather than the OpenAI endpoint, and should work normally.
📝 Additional Information
None