[GH-ISSUE #3517] MACOS M2 Docker Compose Failing with GPU Selection Step #2169

Closed
opened 2026-04-12 12:25:02 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @akramIOT on GitHub (Apr 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3517

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

MACOS M2 Docker Compose Failing with GPU Selection Step

(LLAMA_CPP_ENV) akram_personal@AKRAMs-MacBook-Pro packet_raptor % docker-compose up
Attaching to packet_raptor, ollama-1, ollama-webui-1
Gracefully stopping... (press Ctrl+C again to force)
Error response from daemon: could not select device driver "nvidia" with capabilities: gpu
(LLAMA_CPP_ENV) akram_personal@AKRAMs-MacBook-Pro packet_raptor %

What did you expect to see?

No response

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

macOS

Architecture

arm64

Platform

Docker

Ollama version

0.1.30

GPU

Apple

GPU info

(base) akram_personal@AKRAMs-MacBook-Pro ~ % ioreg -l | grep num_cores
| | | | "GPUConfigurationVariable" = {"num_gps"=8,"gpu_gen"=14,"usc_gen"=2,"num_cores"=20,"num_mgpus"=2,"core_mask_list"=(1023,511),"num_frags"=20}
(base) akram_personal@AKRAMs-MacBook-Pro ~ %
(base) akram_personal@AKRAMs-MacBook-Pro ~ %
(base) akram_personal@AKRAMs-MacBook-Pro ~ % system_profiler SPDisplaysDataType
Graphics/Displays:

Apple M2 Pro:

  Chipset Model: Apple M2 Pro
  Type: GPU
  Bus: Built-In
  Total Number of Cores: 19
  Vendor: Apple (0x106b)
  Metal Support: Metal 3
  Displays:
    Color LCD:
      Display Type: Built-in Liquid Retina XDR Display
      Resolution: 3456 x 2234 Retina
      Main Display: Yes
      Mirror: Off
      Online: Yes
      Automatically Adjust Brightness: Yes
      Connection Type: Internal
    VX2757:
      Resolution: 1920 x 1080 (1080p FHD - Full High Definition)
      UI Looks like: 1920 x 1080 @ 75.00Hz
      Mirror: Off
      Online: Yes
      Rotation: Supported

(base) akram_personal@AKRAMs-MacBook-Pro ~ %

root:xnu-10002.81.5~7/RELEASE_ARM64_T6020 arm64

CPU

Apple

Other software

No response

GiteaMirror added the needs more info label 2026-04-12 12:25:02 -05:00
Author
Owner

@akramIOT commented on GitHub (Apr 7, 2024):

Docker Compose YAML File:

version: '3.6'

networks:
  ollama:

services:
  ollama:
    image: ollama/ollama
    networks:
      - ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      - ./data/ollama:/root/.ollama
    ports:
      - 11434:11434

  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    volumes:
      - ./data/ollama-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 3002:8080
    environment:
      - 'OLLAMA_API_BASE_URL=http://ollama:11434/api'
    extra_hosts:
      - host.docker.internal:host-gateway
    networks:
      - ollama

  packet_raptor:
    build:
      context: ./
      dockerfile: ./docker/Dockerfile  # Specify the path to your existing Dockerfile
    container_name: packet_raptor
    restart: always
    ports:
      - "8585:8585"
    volumes:
      - ./config.toml:/root/.streamlit/config.toml
    environment:
      - OLLAMA_URL=http://ollama:11434
    depends_on:
      - ollama
    networks:
      - ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  ollama: {}

Author
Owner

@dhiltgen commented on GitHub (Apr 12, 2024):

If I understand correctly, you're trying to use Docker Desktop on an ARM Mac system to run Ollama. This will only work in CPU mode. Apple systems do not have NVIDIA GPUs, they have Apple GPUs, and Docker Desktop does not expose the GPU to the container. If you remove the GPU settings so it runs CPU-only, then it should work, but you'll be getting ARM CPU-based execution. You'll see much better performance if you stick with running on the native Mac, where the macOS binary of Ollama can leverage the GPU.
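For reference, a minimal sketch of what the `ollama` service from the reporter's compose file might look like with the NVIDIA device reservation removed (the change suggested above), assuming the rest of the file stays unchanged; this is illustrative, not a confirmed fix from the maintainers:

```yaml
services:
  ollama:
    image: ollama/ollama
    networks:
      - ollama
    # deploy.resources.reservations.devices removed:
    # Docker Desktop on Apple Silicon has no "nvidia" driver,
    # so the container runs CPU-only.
    volumes:
      - ./data/ollama:/root/.ollama
    ports:
      - 11434:11434
```

The same `deploy` block would need to be removed from the `packet_raptor` service as well, since both services request the `nvidia` driver.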


Reference: github-starred/ollama#2169