[GH-ISSUE #8281] Running ollama on Intel Ultra NPU or GPU #31058

Open
opened 2026-04-22 11:11:14 -05:00 by GiteaMirror · 16 comments

Originally created by @jackphj on GitHub (Jan 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8281

After I installed Ollama through OllamaSetup, I found that it cannot use my GPU or NPU. How can I solve this problem?

CPU: Intel Core Ultra 7 258V
System: Windows 11 24H2

GiteaMirror added the gpu, intel labels 2026-04-22 11:11:14 -05:00

@ddpasa commented on GitHub (Jan 1, 2025):

With Vulkan you can at least use the GPU. But unfortunately the Ollama team is ignoring the PR. You can use this branch if you are interested: https://github.com/ollama/ollama/pull/5059
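
For anyone who wants to try it before anything lands upstream, a rough sketch of checking out and building that PR locally (assuming a Go toolchain and the usual Ollama source build; the exact steps on the PR branch may differ):

```sh
# Fetch PR #5059 (Vulkan backend) into a local branch and build from source.
git clone https://github.com/ollama/ollama.git
cd ollama
git fetch origin pull/5059/head:vulkan
git checkout vulkan

# Standard source build; older branches may additionally need
# `go generate ./...` to build the bundled llama.cpp first.
go build .
./ollama serve
```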


@SiewwenL commented on GitHub (Jan 22, 2025):

Hi @jackphj

You can try to run Ollama with IPEX-LLM on Intel GPU by referring to this link
https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md

If you have further questions on how to use this, please let me know.

Thanks and have a nice day!
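
For anyone who just wants the gist, a condensed sketch of the Linux flow from that quickstart (commands paraphrased from the linked doc at the time of writing; package names and scripts may have changed since):

```sh
# Install IPEX-LLM with its llama.cpp/Ollama integration into a clean env
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade 'ipex-llm[cpp]'

# Create the Ollama launcher/symlinks in the current directory
init-ollama                   # on Windows: init-ollama.bat

# Offload all layers to the Intel GPU and start the server
export OLLAMA_NUM_GPU=999
export ZES_ENABLE_SYSMAN=1
export SYCL_CACHE_PERSISTENT=1
./ollama serve
```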


@ddpasa commented on GitHub (Jan 22, 2025):

> Hi @jackphj
>
> You can try to run Ollama with IPEX-LLM on Intel GPU by referring to this link https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md
>
> If you have further questions on how to use this, please let me know.
>
> Thanks and have a nice day!

With all due respect, this is almost unusable. If Intel is serious about this, they need to merge this into ollama main so that it becomes easy to use.


@SiewwenL commented on GitHub (Jan 23, 2025):

Hi @ddpasa

You can refer to the IPEX-LLM GitHub repository for more information. If you encounter any issues, feel free to create a new issue there.

Thanks :)


@NeoZhangJianyu commented on GitHub (Jan 23, 2025):

I will restore Intel GPU support via the llama.cpp SYCL backend after the current refactor is finished.
Here is the issue tracking that work: https://github.com/ollama/ollama/issues/8414


@iqbalsyamsu commented on GitHub (Jan 23, 2025):

I'm waiting for Ollama to be able to run on the NPU. It's useless to have an Intel Core Ultra series chip if it can't be used for neural processing.


@Tritonio commented on GitHub (Jan 24, 2025):

> I'm waiting for Ollama to be able to run on the NPU. It's useless to have an Intel Core Ultra series chip if it can't be used for neural processing.

I think the SYCL backend mentioned above can take advantage of Intel NPUs as well; @NeoZhangJianyu would be able to confirm or deny. In some LinusTT review of the Ultra CPUs, I think I heard that their NPU is not a faster way to run LLMs but a very power-efficient one, which is great, because it won't heat up my laptop or drain the battery as much. But the iGPU on the Ultra will be able to run inference much faster than either the CPU or the NPU, so I'll be happy even if Ollama supports just the iGPU on Intel Ultras, though having the NPU too would be stellar.


@SiewwenL commented on GitHub (Feb 19, 2025):

Hi @jackphj, maybe this will help you: https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_portablze_zip_quickstart.md#step-2-start-ollama-serve
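
The portable zip route is much lighter than a full toolkit install; roughly (script names taken from that doc and not re-verified here, so check the extracted folder):

```sh
# Extract the downloaded IPEX-LLM Ollama portable package, then from that folder:
./start-ollama.sh             # Windows: double-click start-ollama.bat

# In a second terminal in the same folder, pull and run any model tag:
./ollama run qwen2.5:7b       # model name here is just an example
```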


@tetsuo974 commented on GitHub (Apr 24, 2025):

Did you try `export OLLAMA_DEVICE=npu`?
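
As far as I can tell `OLLAMA_DEVICE` is not a documented upstream variable, so before relying on it, it is worth checking whether the model actually lands on the device you asked for; `ollama ps` reports that:

```sh
export OLLAMA_DEVICE=npu      # unverified variable; may be silently ignored
ollama run llama3.2 "hello"   # llama3.2 is just an example model tag

# In another terminal: the PROCESSOR column shows where the weights actually
# ended up, e.g. "100% CPU" or "100% GPU".
ollama ps
```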


@JoseMariaZ commented on GitHub (Jul 4, 2025):

You can try this:

https://github.com/JoseMariaZ/Intelexia


@JaMeyerUnica commented on GitHub (Jan 17, 2026):

Install Intel oneAPI:

- Download from the Intel oneAPI Base Toolkit
- Includes the SYCL runtime for the NPU

Set environment variables:

```sh
export ONEAPI_DEVICE_SELECTOR=sycl
export SYCL_DEVICE_FILTER=ext_oneapi_npu
```

Run Ollama with NPU:

```sh
ollama serve --device-type npu
```

This should work; at least it does on my laptop (HP Elite, Intel Evo).
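
One sanity check worth doing before pointing those variables at the NPU: confirm the SYCL runtime can see the device at all. oneAPI ships a `sycl-ls` utility for exactly this (output format varies by oneAPI version):

```sh
# List every backend:device pair visible to the SYCL runtime. With drivers
# and oneAPI set up correctly you should see entries such as opencl:cpu and
# level_zero:gpu; if no NPU entry appears, the selector above has nothing
# to select and inference will silently fall back to the CPU.
sycl-ls
```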

@JaMeyerUnica commented on GitHub (Jan 17, 2026):

Here is a manual from Intel: https://www.intel.com/content/www/us/en/content-details/826081/running-ollama-with-open-webui-on-intel-hardware-platform.html

@anime-shed commented on GitHub (Jan 19, 2026):

A good step, but installing a 15 GB bundle to make it work feels like overkill.

[Image: https://github.com/user-attachments/assets/8010c431-878b-4216-89b0-3d70376298a4]

Hopefully this can be minimized or become part of Ollama itself.


@JaMeyerUnica commented on GitHub (Jan 19, 2026):

Hi, indeed, but you should only need the deep learning extension, which is 1.5 GB. This is info from Intel!!! I have it fully installed and running; the biggest problem is that they ship stuff that programs never use, or, correction, Adobe-style sh…t 😂.
Disclaimer: I know nothing about Intel and third parties!!! Not for home use 😂😈


@JaMeyerUnica commented on GitHub (Jan 19, 2026):

PS: AMD is the same; the only exception is ARM. But this is preaching to the choir ✌️


@AlyShmahell commented on GitHub (Jan 19, 2026):

> Install Intel oneAPI:
>
> - Download from the Intel oneAPI Base Toolkit
> - Includes the SYCL runtime for the NPU
>
> Set environment variables:
>
> export ONEAPI_DEVICE_SELECTOR=sycl
> export SYCL_DEVICE_FILTER=ext_oneapi_npu
>
> Run Ollama with NPU:
>
> ollama serve --device-type npu
>
> This should work; at least it does on my laptop (HP Elite, Intel Evo).

Would this work as environment variables in docker compose?

Something like:

```yml
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-intel-gpu
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri
    group_add:
      - render
      - video
    environment:
      ONEAPI_DEVICE_SELECTOR: sycl # or level_zero:gpu
      SYCL_DEVICE_FILTER: ext_oneapi_npu
      SYCL_CACHE_PERSISTENT: "1"
      OMP_NUM_THREADS: "8"
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"
volumes:
  ollama:
```

I cannot test it at the moment; I'm still eyeing a Lunar Lake device and I'm really interested. I just put this together from what I gathered in this issue.
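
One caveat with that sketch: the stock `ollama/ollama` image ships no SYCL or Level Zero backend, so those variables would have nothing to act on inside the container. For the Intel GPU path in Docker, the IPEX-LLM docs point at their own image instead; a hedged variant (image name taken from the ipex-llm docker quickstart and not verified here):

```yml
services:
  ollama:
    # SYCL-enabled Ollama build from IPEX-LLM; tag may have changed upstream.
    image: intelanalytics/ipex-llm-inference-cpp-xpu:latest
    container_name: ollama-intel-gpu
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri          # expose the iGPU to the container
    environment:
      OLLAMA_NUM_GPU: "999"        # offload all layers to the Intel GPU
      ZES_ENABLE_SYSMAN: "1"
      SYCL_CACHE_PERSISTENT: "1"
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"
    # NOTE: how Ollama is launched inside this image differs from the stock one;
    # check the ipex-llm docker quickstart for the exact start command.
volumes:
  ollama:
```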


Reference: github-starred/ollama#31058