[GH-ISSUE #12220] Intel Core Ultra 5 235 GPU/NPU not found #54644

Closed
opened 2026-04-29 06:45:18 -05:00 by GiteaMirror · 2 comments

Originally created by @morpher412 on GitHub (Sep 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12220

What is the issue?

I am running Ollama in a privileged LXC on Proxmox VE, and the Intel Core Ultra 5 235 GPU/NPU is not being detected. I am unsure whether this is a bug or an unimplemented feature, but my understanding was that OpenVINO supports this integrated hardware.
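
Since the GPU sits on the Proxmox host, one thing worth ruling out first is whether the DRM render nodes are visible inside the container at all. A minimal check, assuming the stock Proxmox LXC config path and a single iGPU (device names and the 226 major number may differ on other hosts):

```shell
# Inside the LXC: the Intel iGPU should show up as DRM device nodes.
ls -l /dev/dri
# Expect entries like card0 and renderD128.

# On the Proxmox host: even a privileged LXC needs the devices passed
# through. Typical lines in /etc/pve/lxc/<CTID>.conf (226 is the DRM
# character-device major number):
#   lxc.cgroup2.devices.allow: c 226:* rwm
#   lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```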

Relevant log output

root@ai-server:~# journalctl -u ollama.service
Sep 08 22:39:34 ai-server systemd[1]: Started ollama.service - Ollama Service.
Sep 08 22:39:34 ai-server ollama[6592]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Sep 08 22:39:34 ai-server ollama[6592]: Your new public key is:
Sep 08 22:39:34 ai-server ollama[6592]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgVUFVj7f5PThQpxMWjObd7tvaPR9HXXQHlnxLxGLbM
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.934Z level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: H>
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.935Z level=INFO source=images.go:477 msg="total blobs: 0"
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.935Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.936Z level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.11.10)"
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.937Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.941Z level=INFO source=gpu.go:388 msg="no compatible GPUs were discovered"
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.941Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="16.0 G>
Sep 08 22:39:34 ai-server ollama[6592]: time=2025-09-08T22:39:34.941Z level=INFO source=routes.go:1425 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
Sep 08 22:41:44 ai-server systemd[1]: Stopping ollama.service - Ollama Service...
Sep 08 22:41:44 ai-server systemd[1]: ollama.service: Deactivated successfully.
Sep 08 22:41:44 ai-server systemd[1]: Stopped ollama.service - Ollama Service.
-- Boot 8f3ab5b18b0e4906be202629cb67d4ab --
Sep 08 22:41:48 ai-server systemd[1]: Started ollama.service - Ollama Service.
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.460Z level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HS>
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.461Z level=INFO source=images.go:477 msg="total blobs: 0"
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.461Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.462Z level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.11.10)"
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.462Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.465Z level=INFO source=gpu.go:388 msg="no compatible GPUs were discovered"
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.465Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="16.0 Gi>
Sep 08 22:41:48 ai-server ollama[272]: time=2025-09-08T22:41:48.465Z level=INFO source=routes.go:1425 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
Sep 08 22:42:39 ai-server systemd[1]: Stopping ollama.service - Ollama Service...
Sep 08 22:42:39 ai-server systemd[1]: ollama.service: Deactivated successfully.
Sep 08 22:42:39 ai-server systemd[1]: Stopped ollama.service - Ollama Service.
Sep 08 22:42:39 ai-server systemd[1]: Started ollama.service - Ollama Service.
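
The log above only records that no compatible GPU was found. Ollama's debug logging prints more detail about which libraries the discovery scan probes; a small sketch using the documented OLLAMA_DEBUG variable and the standard systemd setup from the Linux installer:

```shell
# Add a drop-in that enables debug logging for the service.
sudo systemctl edit ollama.service
# In the editor, add under [Service]:
#   Environment="OLLAMA_DEBUG=1"

# Restart and follow the logs to see the verbose discovery output.
sudo systemctl restart ollama.service
journalctl -u ollama.service -f
```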

OS

Linux

GPU

Intel

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 06:45:18 -05:00

@rick-github commented on GitHub (Sep 8, 2025):

Ollama doesn't use OpenVino.


@pdevine commented on GitHub (Sep 9, 2025):

Unfortunately it's not supported, but it is something we've been talking about supporting.
