[GH-ISSUE #11774] [BUG] Intel Iris Xe GPU (i5-13500 / Raptor Lake) not detected in Docker on Ubuntu 24.04 #7803

Open
opened 2026-04-12 19:58:36 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @azgh on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11774

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

System Details:

CPU: Intel Core i5-13500 with Iris Xe Graphics

OS: Ubuntu 24.04 LTS (Server, Headless Mode via SSH)

Kernel: (The user's kernel, likely 6.8+)

Docker Version: (The user's Docker version)

Intel Driver: Official Ubuntu package intel-opencl-icd version 23.43.27642.40-1ubuntu3

Ollama Version: ollama/ollama:latest (also tested 0.1.41, 0.1.32)

Problem Description:
Ollama running inside a Docker container consistently fails to detect the integrated Intel Iris Xe GPU. The log always shows "no compatible GPUs were discovered" and falls back to CPU mode, despite a fully functional host environment and comprehensive Docker configuration.

Relevant log output

Steps to Reproduce:

On a clean Ubuntu 24.04 server with an i5-13500 CPU, install the official Intel drivers: sudo apt install intel-opencl-icd intel-level-zero-gpu
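The host-side install can be sanity-checked before involving Docker at all; a hedged sketch, assuming the Ubuntu 24.04 default paths (adjust per `dpkg -L` if your layout differs):

```shell
# Sanity-check the host-side Intel driver stack. All paths are the
# Ubuntu 24.04 defaults; each check degrades to a message if missing.
ls -l /dev/dri/ 2>/dev/null || echo "no /dev/dri: i915 kernel driver not loaded?"
cat /etc/OpenCL/vendors/intel.icd 2>/dev/null || echo "no Intel ICD file found"
command -v clinfo >/dev/null && clinfo -l || echo "clinfo not installed"
```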

Use the following docker-compose.yaml:
(Note: Everything from this point through the end of "Troubleshooting Performed" was pasted into the issue template's "Relevant log output" field.)

```yaml
version: '3.8'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    devices:
      - /dev/dri:/dev/dri
    group_add:
      - "107" # GID of the 'render' group on the host
    volumes:
      - ollama:/root/.ollama
      - /etc/OpenCL/vendors:/etc/OpenCL/vendors
      - /etc/level-zero/vendors:/etc/level-zero/vendors
      - /usr/lib/x86_64-linux-gnu/intel-opencl:/usr/lib/x86_64-linux-gnu/intel-opencl
      - /usr/lib/x86_64-linux-gnu/libze_loader.so.1:/usr/lib/x86_64-linux-gnu/libze_loader.so.1
      - /usr/lib/x86_64-linux-gnu/libze_intel_gpu.so.1:/usr/lib/x86_64-linux-gnu/libze_intel_gpu.so.1
    restart: unless-stopped
volumes:
  ollama: {}
```
Run sudo docker-compose up -d.

Check logs with sudo docker logs ollama.
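The discovery result can be extracted from the log directly; a sketch assuming the container name `ollama` from the compose file (prefix with `sudo` if your Docker setup requires it):

```shell
# Pull the GPU discovery lines out of the Ollama log. The failing
# symptom described here is the line "no compatible GPUs were discovered".
docker logs ollama 2>&1 | grep -iE "gpu|discover|opencl|level.?zero" || true
```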

Troubleshooting Performed (Diagnostics):

clinfo on the host works perfectly and detects the Iris Xe GPU correctly, even in headless mode.

The render group and permissions for /dev/dri/renderD128 are correct on the host.
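This claim can be double-checked with a short sketch, assuming the default `render` group name and the `renderD128` node:

```shell
# The group_add value ("107") in the compose file must match the host's
# render group GID, and the render node must be group-accessible.
getent group render || echo "no 'render' group on this host"
stat -c '%U %G %a' /dev/dri/renderD128 2>/dev/null || echo "renderD128 not present"
```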

Problematic third-party PPAs (ppa:kobuk-team/intel-graphics) have been purged, and official Ubuntu drivers are now in use.

The exact paths to all required .so and .icd files have been verified with dpkg -L and cat.

Running clinfo inside the container (after apt install) shows Number of platforms: 0, even when driver libraries are volume-mounted. This points to a failure in the dynamic loader or an incompatibility within the container environment.

The issue persists across multiple Ollama versions and Docker Compose configurations.
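One way to narrow down the suspected dynamic-loader failure is to check, from inside the running container, whether the bind-mounted driver library resolves all of its shared-library dependencies; a hedged sketch (library path taken from the compose file above):

```shell
# "not found" entries from ldd would explain clinfo's "0 platforms":
# the .so exists on the host, but its dependencies are absent from the
# container image, so the OpenCL/Level Zero loader cannot dlopen it.
docker exec ollama ldd /usr/lib/x86_64-linux-gnu/libze_intel_gpu.so.1 \
  | grep -i "not found" \
  && echo "missing dependencies inside the container" \
  || echo "all dependencies resolved (or container not running)"
```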

OS

Linux

GPU

Intel

CPU

Intel

Ollama version

latest

GiteaMirror added the intel, bug labels 2026-04-12 19:58:37 -05:00
Author
Owner

@NeoZhangJianyu commented on GitHub (Aug 7, 2025):

@azgh
You need to install the GPU driver inside the Docker image.
Please refer to: https://github.com/intel/intel-extension-for-pytorch/blob/xpu-master/docker/Dockerfile.

If the installation succeeded, you can run `clinfo -l` inside the container to confirm the GPU is visible.

Author
Owner

@rick-github commented on GitHub (Sep 23, 2025):

https://github.com/ollama/ollama/issues/3113

Author
Owner

@desmondsow commented on GitHub (Sep 25, 2025):

@azgh
You could use the Ollama portable zip (https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_portable_zip_quickstart.md) or, if you prefer Docker, https://github.com/intel/ipex-llm/blob/main/docker/llm/README.md; either enables the Intel iGPU with Ollama.


Reference: github-starred/ollama#7803