[GH-ISSUE #10560] GPU unused #53463

Closed
opened 2026-04-29 03:17:06 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @HendyTurtle on GitHub (May 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10560

GiteaMirror added the bug label 2026-04-29 03:17:06 -05:00

@rick-github commented on GitHub (May 4, 2025):

ollama currently supports a [subset](https://github.com/ollama/ollama/blob/main/docs/gpu.md) of the available hardware acceleration platforms. See [here](https://github.com/ollama/ollama/issues/5186) for the NPU support request and [here](https://github.com/ollama/ollama/issues/1590) for Arc.

<!-- gh-comment-id:2849190597 -->

@chnxq commented on GitHub (May 4, 2025):

Hi @rick-github, I have an attempt at adding Intel GPU support to the latest version of Ollama. There are still some issues with the Qwen3 MoE model that I am still debugging, but Gemma, DeepSeek, and Qwen2.5 are supported. I used SYCL to modify Ollama so it can detect the GPU's memory size. I hope this can be merged into Ollama: https://github.com/chnxq/ollama, on the branch chnxq/add-oneapi (see the .bat file for the environment variables).
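(Editor's note: the SYCL-based GPU memory detection described above could look roughly like the following sketch. This is an illustration of the standard SYCL device-query API, not the actual code from the chnxq/add-oneapi branch; it requires a SYCL implementation such as Intel oneAPI DPC++ to build.)

```cpp
// Sketch: enumerate SYCL GPU devices and report their total memory,
// the kind of query a scheduler would need to place model layers.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // get_devices(gpu) returns every GPU visible to the SYCL runtime
    // (e.g. an Intel Arc card under the oneAPI Level Zero backend).
    for (const auto &dev :
         sycl::device::get_devices(sycl::info::device_type::gpu)) {
        auto name = dev.get_info<sycl::info::device::name>();
        // global_mem_size is the device's total memory in bytes.
        auto mem = dev.get_info<sycl::info::device::global_mem_size>();
        std::cout << name << ": " << mem / (1024 * 1024) << " MiB\n";
    }
    return 0;
}
```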

<!-- gh-comment-id:2849297233 -->

@wbste commented on GitHub (May 5, 2025):

> Hi @rick-github, I have an attempt at adding Intel GPU support to the latest version of Ollama. There are still some issues with the Qwen3 MoE model that I am still debugging, but Gemma, DeepSeek, and Qwen2.5 are supported. I used SYCL to modify Ollama so it can detect the GPU's memory size. I hope this can be merged into Ollama: https://github.com/chnxq/ollama, on the branch chnxq/add-oneapi (see the .bat file for the environment variables).

This would be so awesome! Intel's IPEX-LLM build always lags behind. I'd love integration into ollama itself!

<!-- gh-comment-id:2849753379 -->

@rick-github commented on GitHub (May 16, 2025):

#10322

<!-- gh-comment-id:2886469501 -->

Reference: github-starred/ollama#53463