I'm trying to run the miniCPM3-4B:Q4_K_M model with ollama 0.5.1-ipex-llm-20250107 on an Intel MTL iGPU and an Arc A770.
Before loading the model into VRAM, the VRAM usage is 2.4GB.
When running the model, the compute usage is almost 0 and the VRAM usage is 8.2GB (this 4B model uses more VRAM than GLM4:9B).
Before loading the model, the VRAM usage is 1.1GB.
When running the model, the compute usage is around 43% and the VRAM usage is 9.5GB, running with the "ollama run gfunsai/minicpm3-4b:q4_k_m" command.
Running with the curl command (a sketch of the request is included below).
The output tokens-per-second performance drops significantly when doing inference on both GPUs.
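For reference, a minimal sketch of the kind of curl request used, assuming the default Ollama REST API endpoint on localhost:11434 and the same model tag as above; the prompt itself is just a placeholder:

```bash
# Hypothetical reproduction request; assumes Ollama is listening on the
# default port 11434 and the gfunsai/minicpm3-4b:q4_k_m model is already pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "gfunsai/minicpm3-4b:q4_k_m",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```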
Please help fix this issue :)
Hi @jianjungu, we have reproduced this issue and are working on fixing it :)