[GH-ISSUE #10244] Add Intel GPU support via oneAPI/SYCL #6721

Open
opened 2026-04-12 18:28:22 -05:00 by GiteaMirror · 1 comment

Originally created by @chnxq on GitHub (Apr 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10244

I made an example of adding SYCL/oneAPI support to ollama's main codebase. I don't know if a pull request can be initiated?
This example is on the chnxq/add-oneapi branch of https://github.com/ollama/ollama.

So far this has only been tested on Windows with an integrated graphics card, and performance is only around 70% of the zip-file executable released by Intel. Further optimization is required.

https://github.com/chnxq/ollama/tree/chnxq/add-oneapi

GiteaMirror added the feature request label 2026-04-12 18:28:22 -05:00

@chnxq commented on GitHub (Apr 12, 2025):

run gemma3:

set OLLAMA_NUM_GPU=55

.\ollama.exe run gemma3:27b --verbose
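For context, here is a lightly annotated version of the two commands above. The annotations are my reading of the flags, not something stated in the branch itself:

```shell
:: Windows cmd sketch (annotations are assumptions, not from the branch).
:: OLLAMA_NUM_GPU is assumed to set how many model layers are offloaded
:: to the Intel GPU via the SYCL backend; 55 offloads most of gemma3:27b.
set OLLAMA_NUM_GPU=55

:: --verbose prints timing stats (e.g. eval rate in tokens/s), which is
:: presumably how the ~70%-of-Intel's-prebuilt-binary figure was measured.
.\ollama.exe run gemma3:27b --verbose
```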


Reference: github-starred/ollama#6721