[GH-ISSUE #8755] Please add an official Docker image for Intel iGPU? #52192

Closed
opened 2026-04-28 22:27:50 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @hotrungnhan on GitHub (Feb 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8755

Hello, it has been a while since Intel released the ipex library for running models on its GPUs.

Currently, I can run models using the iGPU with a setup like the one shown below. That is why I am filing this request for an official Docker image. Can anyone help?

services:
  ollama:
    build:
      context: .
      dockerfile_inline: |
        FROM intelanalytics/ipex-llm-inference-cpp-xpu:latest
        ENV ZES_ENABLE_SYSMAN=1
        ENV OLLAMA_HOST=0.0.0.0:11434
        # Set up the ollama binary under /llm/ollama using the image's init-ollama helper
        RUN mkdir -p /llm/ollama && \
            cd /llm/ollama && \
            init-ollama
        WORKDIR /llm/ollama
        ENTRYPOINT ["./ollama", "serve"]
    container_name: ollama
    restart: always
    environment:
      OLLAMA_INTEL_GPU: "true"
      DISPLAY: ${DISPLAY}
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ollama:/root/.ollama
  ollama-webui:
    image: ghcr.io/open-webui/open-webui
    container_name: ollama-webui
    ports:
      - 8080:8080
    volumes:
      - ollama-webui:/app/backend/data
    depends_on:
      - ollama
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    restart: unless-stopped

# Named volumes referenced above must be declared at the top level
volumes:
  ollama:
  ollama-webui:
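For reference, a typical way to bring this stack up and confirm that the container can actually see the iGPU (assuming the compose file above is saved as docker-compose.yml) would be:

```sh
# Build the inline Dockerfile and start both services in the background
docker compose up -d --build

# Confirm the Intel GPU device nodes are visible inside the ollama container
docker compose exec ollama ls /dev/dri

# Tail the logs to check that "ollama serve" started and picked up the GPU
docker compose logs -f ollama
```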

GiteaMirror added the feature request label 2026-04-28 22:27:50 -05:00
Author
Owner

@rick-github commented on GitHub (Jan 5, 2026):

Intel GPU support is enabled via [Vulkan](https://github.com/ollama/ollama/blob/d087e46bd193b1101cef13e28841185a465a077f/docs/gpu.mdx#vulkan-gpu-support).
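As a minimal sketch of what that looks like in practice: the Vulkan path only needs the host's DRM device nodes passed through to the container. Whether the published ollama/ollama image ships with the Vulkan backend enabled is exactly what this issue asks about, so treat the following as an illustration of the device passthrough, not as confirmation that it works today:

```sh
# Hypothetical run of the official image with the Intel iGPU's DRM device
# nodes exposed, which is the host-side requirement for Vulkan.
docker run -d --name ollama \
  --device /dev/dri:/dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
```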


Reference: github-starred/ollama#52192