When using RisuAI 136.0.1 as the frontend with a Yi 34B-based model, text generation cuts off after around 400-500 tokens. The terminal shows the error message: "Token streaming was interrupted or aborted! [Errno 32] Broken pipe". Streaming is enabled in Risu, max response tokens is set to 4096, and the selected API is OpenAI-compatible.
Kobold.cpp is version 1.76 (although this also happened with the previous version), started with the command line "./koboldcpp-linux-x64-nocuda --usevulkan --gpulayers 24 --threads 7 --contextsize 8192".
System: up-to-date openSUSE Tumbleweed, i7-12700KF CPU with 32 GB DDR4 RAM, and an AMD RX 6800 card.
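In case it helps to reproduce this outside of Risu, here is a minimal sketch of a streaming client against the OpenAI-compatible endpoint. It assumes koboldcpp's default port 5001 and the standard OpenAI-style SSE chunk format; the endpoint URL and request fields are assumptions based on a typical setup, not taken from my actual config.

```python
import json

def parse_sse_chunk(line: str):
    """Extract the delta text from one SSE 'data:' line, or None for
    non-data lines and the final '[DONE]' sentinel."""
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)["choices"][0]["delta"].get("content", "")

def stream_completion(prompt: str,
                      url: str = "http://localhost:5001/v1/chat/completions"):
    """Stream a completion and report how many chunks arrived before the
    stream ended (or the connection dropped)."""
    import requests  # third-party: pip install requests
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4096,   # matches the max response tokens set in Risu
        "stream": True,
    }
    received = 0
    with requests.post(url, json=body, stream=True, timeout=600) as resp:
        for raw in resp.iter_lines(decode_unicode=True):
            text = parse_sse_chunk(raw or "")
            if text:
                received += 1
                print(text, end="", flush=True)
    print(f"\n[stream ended after {received} chunks]")
```

If this script also stops around 400-500 tokens, the problem is on the koboldcpp side; if it runs to completion, Risu is likely closing the connection early (which would explain the broken pipe on the server end).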