
[ipex-llm[cpp]][ollama] low performance and gpu usage when running minicpm3-4B model #12675

Open
jianjungu opened this issue Jan 8, 2025 · 1 comment

@jianjungu

I'm trying to run the miniCPM3-4B:Q4_K_M model with ollama 0.5.1-ipex-llm-20250107 on an Intel MTL iGPU and an Arc A770.

  • When running the model on the MTL iGPU, there is almost no GPU compute usage but very high VRAM usage.

Before loading the model into VRAM, VRAM usage is 2.4 GB.
[Task Manager screenshot]

When running the model, compute usage is almost 0% and VRAM usage is 8.2 GB (this 4B model uses more VRAM than GLM4:9B).
[screenshot]

  • Running the model on the Arc A770

Before loading the model, VRAM usage is 1.1 GB.

When running the model, compute usage is around 43% and VRAM usage is 9.5 GB.

Running with the `ollama run gfunsai/minicpm3-4b:q4_k_m` command:
[screenshot]

Running with a curl command:
[screenshot]
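
For reference, a minimal sketch of the kind of request used on the curl path (the exact prompt and options are not reproduced from the screenshot; the values below are illustrative):

```bash
# Illustrative only: a plain Ollama generate request against the local server.
# The prompt and options are placeholders, not the literal command from the screenshot.
curl http://localhost:11434/api/generate -d '{
  "model": "gfunsai/minicpm3-4b:q4_k_m",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```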

Output tokens-per-second performance is significantly degraded when running inference on both GPUs.
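
A minimal way to see those tokens-per-second numbers, assuming the standard `--verbose` option of `ollama run` (which prints prompt and eval rate statistics after each response):

```bash
# Illustrative: --verbose makes ollama print timing statistics
# (prompt eval rate and eval rate, in tokens/s) after each response.
ollama run gfunsai/minicpm3-4b:q4_k_m --verbose
```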

Please help fix this issue :)

@sgwhat
Contributor

sgwhat commented Jan 9, 2025

Hi @jianjungu, we have reproduced this issue and are working on fixing it :)
