Possibilities wrt. intel GPUs (et. al.); SYCL / OpenCL / Vulkan etc.? #527
ghchris2021 started this conversation in General · 1 comment
-
Intel support has been added in the …
-
I saw one commit adding some SYCL support, though I'm not sure whether it was partially or fully working at that time, or is now.
Personally, my LLM inference use case is enabling distributed inference of popular LLMs across a few LAN-connected desktops with some mix of CPU+RAM, NVIDIA GPUs, and Intel consumer dGPUs (ARC7 et seq.).
So far I haven't found any inference platform that handles such a case or definitively plans to any time soon. llama.cpp seems to be getting close, and from what I see of Aphrodite and Intel / AMD GPU support there may be some upcoming possibilities, so I thought I'd inquire.
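For concreteness, the kind of split I have in mind is roughly proportional layer partitioning across nodes by relative throughput. A toy sketch below (device names and throughput numbers are purely hypothetical, not measurements of any real setup):

```python
# Toy sketch: split a model's transformer layers across LAN nodes in
# proportion to a rough relative-throughput estimate per device.
# All device names and numbers here are hypothetical illustrations.

def partition_layers(n_layers, throughputs):
    """Assign each device a contiguous block of layers, sized
    proportionally to its share of total throughput (largest-remainder)."""
    total = sum(throughputs.values())
    # Ideal fractional share of layers per device.
    shares = {d: n_layers * t / total for d, t in throughputs.items()}
    counts = {d: int(s) for d, s in shares.items()}
    # Hand leftover layers to the devices with the largest remainders.
    leftover = n_layers - sum(counts.values())
    for d in sorted(shares, key=lambda d: shares[d] - counts[d],
                    reverse=True)[:leftover]:
        counts[d] += 1
    # Turn per-device counts into contiguous (start, end) layer ranges.
    ranges, start = {}, 0
    for d in throughputs:
        ranges[d] = (start, start + counts[d])
        start += counts[d]
    return ranges

# Example: 32 layers across a CPU-only box, an NVIDIA GPU, and an Intel dGPU.
plan = partition_layers(32, {"cpu_box": 1.0, "nvidia_gpu": 4.0, "intel_arc": 2.0})
print(plan)
```

A real backend would of course also have to account for per-device memory limits and link bandwidth between nodes, but this is the basic shape of the heterogeneous split I'm asking about.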
Thank you very much!