
Question about setup configuration #140

Open
Cheng-Hz opened this issue Jul 23, 2024 · 4 comments

Comments

@Cheng-Hz

Hi, if I want to try setting this up on a Linux server, roughly what CPU and GPU configuration would be needed?

@HungryFour
Collaborator

Deploying this service does not require a high-end configuration; the service itself uses very few resources. Both the vectorization (embedding) model and the diagnosis model can be called via API.
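
As a rough illustration of the API-based setup (a minimal sketch only; the base URL, model names, and environment variables below are placeholders, not this project's actual configuration):

```python
# Minimal sketch: calling the embedding and diagnosis models over an
# OpenAI-compatible API instead of hosting them locally.
# NOTE: base_url, model names, and env var names are assumptions/placeholders.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["LLM_API_KEY"],         # hypothetical env var
    base_url=os.environ.get("LLM_API_BASE"),   # e.g. a hosted provider or vLLM endpoint
)

# Vectorization: embed a log snippet or alert description.
emb = client.embeddings.create(
    model="text-embedding-3-small",            # placeholder embedding model
    input="disk I/O latency spiked on node-3",
)
vector = emb.data[0].embedding

# Diagnosis: ask the chat model for an analysis.
resp = client.chat.completions.create(
    model="qwen-13b-chat",                     # placeholder diagnosis model
    messages=[{"role": "user", "content": "Analyze this anomaly: high I/O latency on node-3"}],
)
print(resp.choices[0].message.content)
```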

@Cheng-Hz
Author

Hi, and if the models need to be deployed locally? For all of the local models the platform currently supports, roughly what CPU and GPU configuration is required?

@HungryFour
Collaborator

It depends on which model you run. For the Qwen model we fine-tuned locally, anything at 13B or below only needs a single 3090 or 4090.
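
For context, a back-of-envelope VRAM estimate for a 13B model on a 24 GB card (weights only, ignoring KV cache and activation overhead; these are generic assumptions, not measurements from this project):

```python
# Rough VRAM needed just for the weights of a 13B-parameter model.
params = 13e9
fp16_gb = params * 2 / 1024**3    # ~24.2 GB: does not leave headroom on a 24 GB card
int8_gb = params * 1 / 1024**3    # ~12.1 GB: fits comfortably on a 3090/4090
int4_gb = params * 0.5 / 1024**3  # ~6.1 GB: ample room for KV cache and batching
print(f"fp16: {fp16_gb:.1f} GB, int8: {int8_gb:.1f} GB, int4: {int4_gb:.1f} GB")
```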

@Cheng-Hz
Author

Cheng-Hz commented Jul 26, 2024

For example, would a single 3090 or 4090 be enough for Llama2-13b?
Are there any CPU requirements?

