Describe the bug
When running LLMs (e.g. Deepseek R1) inside the AI Playground, the output stops / gets truncated after approximately 25 seconds.
This is especially noticeable with Deepseek R1, which outputs a lot of inner monologue: the output suddenly stops mid-generation.
(Judging by the GPU utilization in Task Manager, the model appears to keep running in the background, but its output is no longer displayed.)
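To illustrate the suspected behavior (this is a minimal standalone sketch, not AI Playground's actual code): if the UI stops consuming the token stream after a fixed deadline, the displayed text gets cut off even though the producer keeps generating, which matches the ongoing GPU utilization seen in Task Manager. All names below (`token_stream`, `read_with_deadline`) are hypothetical.

```python
import time

def token_stream(n_tokens, delay):
    """Simulated backend that keeps producing tokens, the way the model
    seems to keep running in the background."""
    for i in range(n_tokens):
        time.sleep(delay)
        yield f"tok{i} "

def read_with_deadline(stream, deadline_s):
    """Simulated client that stops displaying output once a fixed deadline
    passes, even though the stream is not exhausted."""
    start = time.monotonic()
    shown = []
    for tok in stream:
        if time.monotonic() - start > deadline_s:
            break  # display truncated; the producer is not stopped
        shown.append(tok)
    return "".join(shown)

# 100 tokens at 0.002 s each need ~0.2 s in total, but the reader gives up
# after 0.05 s, so only part of the completion is ever displayed.
out = read_with_deadline(token_stream(100, 0.002), 0.05)
print(len(out.split()))  # fewer than 100 tokens shown
```

If something like this is happening, the fix would be to keep the display loop attached until the stream actually ends (or the user cancels), rather than enforcing a fixed wall-clock deadline.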
To Reproduce
Steps to reproduce the behavior:
Go to the Answer tab
Load Deepseek-R1-Distill-Qwen-7B
Ask it anything that makes it think for more than a few seconds
Wait and observe the output being cut off
Expected behavior
A complete output, however long generation takes
Environment (please complete the following information):
OS: Windows 11 24H2
GPU: Intel Arc A770 16GB
CPU: Ryzen 5800X
RAM: 32GB
Version: 2.0.0 alpha
Backend: IPEX-LLM