DeepSeek-R1-Distill-Qwen-1.5B inference answer is wrong #1949

Open
jhxiang opened this issue Feb 27, 2025 · 0 comments
jhxiang commented Feb 27, 2025

I encountered the following problem when using llama-cpp-python on Mac: the model's answer is completely unreasonable. The red box in the screenshot below contains the question and the answer.

[Screenshot: the question and the model's incoherent answer, highlighted in a red box]

The configuration is as follows:
model: DeepSeek-R1-Distill-Qwen-1.5B.gguf
quantization: Q4_K_M
llama_cpp_python: 0.3.7
API: create_chat_completion
model input: [{'role': 'system', 'content': 'you are a helpful assistant'}, {'role': 'user', 'content': 'There are 20 chickens and rabbits in a cage. It is known that these chickens and rabbits have 56 legs in total. Chickens have two legs and rabbits have four legs. How many chickens and rabbits are there in the cage?'}]
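For reference, the puzzle in the prompt has a unique correct answer that the model should be able to reach. A minimal sketch of the arithmetic (my own check, not part of the model's output): with c chickens and r rabbits, c + r = 20 and 2c + 4r = 56, so r = (56 - 2*20) / 2 = 8 and c = 12.

```python
def solve_heads_and_legs(heads: int, legs: int) -> tuple[int, int]:
    """Return (chickens, rabbits) for the classic heads-and-legs puzzle.

    Each animal contributes one head; chickens have 2 legs, rabbits 4.
    Subtracting 2 legs per head leaves 2 extra legs per rabbit.
    """
    rabbits = (legs - 2 * heads) // 2
    chickens = heads - rabbits
    return chickens, rabbits

print(solve_heads_and_legs(20, 56))  # (12, 8)
```

So a correct completion should conclude 12 chickens and 8 rabbits (12 × 2 + 8 × 4 = 56 legs); the model's answer in the screenshot does not.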
