[GH-ISSUE #13169] Ollama 0.13.0 docker fails with cuda on ARM #55220

Closed
opened 2026-04-29 08:32:17 -05:00 by GiteaMirror · 5 comments

Originally created by @audunmg on GitHub (Nov 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13169

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Ollama 0.13.0 fails with a 500 error when running on ARM64/Ampere.

Downgrading to 0.12.9 fixes the issue.
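
Until a fixed build is available, pinning the image tag keeps the container on the working release, e.g. in docker compose (a sketch; the service layout is illustrative):

services:
  ollama:
    image: docker.io/ollama/ollama:0.12.9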

nvidia-smi 
Thu Nov 20 13:49:55 2025       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.247.01             Driver Version: 535.247.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A5000               On  | 00000002:01:00.0 Off |                  Off |
|  0%   36C    P8              14W / 230W |      1MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA RTX A5000               On  | 00000003:01:00.0 Off |                  Off |
|  0%   37C    P8              16W / 230W |      1MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Relevant log output

ollama  | //ml/backend/ggml/ggml/src/ggml-cuda/template-instances/../mma.cuh:445: ERROR: CUDA kernel mma has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
... (repeated)
ollama  | //ml/backend/ggml/ggml/src/ggml-cuda/template-instances/../mma.cuh:445: ERROR: CUDA kernel mma has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
ollama  | ggml_cuda_compute_forward: ROPE failed
ollama  | CUDA error: unspecified launch failure
ollama  |   current device: 0, in function ggml_cuda_compute_forward at //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:2672
ollama  |   err
ollama  | //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:88: CUDA error
ollama  | /usr/lib/ollama/libggml-base.so(+0x23050)[0xffff48033050]
ollama  | /usr/lib/ollama/libggml-base.so(ggml_print_backtrace+0x268)[0xffff4803302c]
ollama  | /usr/lib/ollama/libggml-base.so(ggml_abort+0xe0)[0xffff48031fc0]
ollama  | /usr/lib/ollama/cuda_jetpack5/libggml-cuda.so(+0xd6cd4)[0xffff004d6cd4]
ollama  | /usr/lib/ollama/cuda_jetpack5/libggml-cuda.so(+0xe3b74)[0xffff004e3b74]
ollama  | /usr/lib/ollama/cuda_jetpack5/libggml-cuda.so(+0xe4df4)[0xffff004e4df4]
ollama  | /usr/bin/ollama(+0xf339ec)[0xaaaab49b39ec]
ollama  | /usr/bin/ollama(+0xec4730)[0xaaaab4944730]
ollama  | /usr/bin/ollama(+0x370b4c)[0xaaaab3df0b4c]
ollama  | SIGABRT: abort
ollama  | PC=0xffff917c7608 m=26 sigcode=18446744073709551610
ollama  | signal arrived during cgo execution
ollama  | 
ollama  | goroutine 965 gp=0x40004f7880 m=26 mp=0x4000204008 [syscall]:
ollama  | runtime.cgocall(0xaaaab4944708, 0x40000bda78)
ollama  | 	runtime/cgocall.go:167 +0x44 fp=0x40000bda30 sp=0x40000bd9f0 pc=0xaaaab3de5954
ollama  | github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0xffff18000ba0, 0xfff8f000d8d0)
ollama  | 	_cgo_gotypes.go:963 +0x34 fp=0x40000bda70 sp=0x40000bda30 pc=0xaaaab41baa74
ollama  | github.com/ollama/ollama/ml/backend/ggml.(*Context).ComputeWithNotify.func2(...)
ollama  | 	github.com/ollama/ollama/ml/backend/ggml/ggml.go:825
ollama  | github.com/ollama/ollama/ml/backend/ggml.(*Context).ComputeWithNotify(0x4000416080, 0x4000ca38f0?, {0x4000e03580, 0x1, 0x2?})
ollama  | 	github.com/ollama/ollama/ml/backend/ggml/ggml.go:825 +0x1a8 fp=0x40000bdb50 sp=0x40000bda70 pc=0xaaaab41c47b8
ollama  | github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0x400017c000, {0x0, {0xaaaab51268f0, 0x4000416080}, {0xaaaab5130e80, 0x40002e7c38}, {0x4000209800, 0xb, 0x10}, {{0xaaaab5130e80, ...}, ...}, ...})
ollama  | 	github.com/ollama/ollama/runner/ollamarunner/runner.go:723 +0x70c fp=0x40000bded0 sp=0x40000bdb50 pc=0xaaaab426b91c
ollama  | github.com/ollama/ollama/runner/ollamarunner.(*Server).run.gowrap1()
ollama  | 	github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x5c fp=0x40000bdfd0 sp=0x40000bded0 pc=0xaaaab4269c8c
ollama  | runtime.goexit({})
ollama  | 	runtime/asm_arm64.s:1223 +0x4 fp=0x40000bdfd0 sp=0x40000bdfd0 pc=0xaaaab3df0d54
ollama  | created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 66
ollama  | 	github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x22c
... 
ollama  | goroutine 952 gp=0x40004f76c0 m=nil [chan receive]:
ollama  | runtime.gopark(0x736501?, 0x795465726977a0c4?, 0xf8?, 0xfa?, 0xaaaab41436a0?)
ollama  | 	runtime/proc.go:435 +0xc8 fp=0x40004dfaa0 sp=0x40004dfa80 pc=0xaaaab3de8e68
ollama  | runtime.chanrecv(0x4000132150, 0x0, 0x1)
ollama  | 	runtime/chan.go:664 +0x42c fp=0x40004dfb20 sp=0x40004dfaa0 pc=0xaaaab3d84bec
ollama  | runtime.chanrecv1(0xaaaab4c4e1fc?, 0x2c?)
ollama  | 	runtime/chan.go:506 +0x14 fp=0x40004dfb50 sp=0x40004dfb20 pc=0xaaaab3d84784
ollama  | github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0x400017c000, {0x1, {0xaaaab51268f0, 0x4000466040}, {0xaaaab5130e80, 0x40003fd0f8}, {0x4001858008, 0x1, 0x1}, {{0xaaaab5130e80, ...}, ...}, ...})
ollama  | 	github.com/ollama/ollama/runner/ollamarunner/runner.go:651 +0x130 fp=0x40004dfed0 sp=0x40004dfb50 pc=0xaaaab426b340
ollama  | github.com/ollama/ollama/runner/ollamarunner.(*Server).run.gowrap1()
ollama  | 	github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x5c fp=0x40004dffd0 sp=0x40004dfed0 pc=0xaaaab4269c8c
ollama  | runtime.goexit({})
ollama  | 	runtime/asm_arm64.s:1223 +0x4 fp=0x40004dffd0 sp=0x40004dffd0 pc=0xaaaab3df0d54
ollama  | created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 66
ollama  | 	github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x22c
ollama  | 
ollama  | r0      0x0
ollama  | r1      0x13a
ollama  | r2      0x6
ollama  | r3      0xfff9217af140
ollama  | r4      0xffff91d7eb50
ollama  | r5      0x1
ollama  | r6      0x20
ollama  | r7      0xfff9217ad8e0
ollama  | r8      0x83
ollama  | r9      0x0
ollama  | r10     0xa
ollama  | r11     0x101010101010101
ollama  | r12     0xfff9217ad970
ollama  | r13     0x0
ollama  | r14     0x0
ollama  | r15     0x2cd
ollama  | r16     0x0
ollama  | r17     0x0
ollama  | r18     0x2c8
ollama  | r19     0x13a
ollama  | r20     0xfff9217af140
ollama  | r21     0x6
ollama  | r22     0xa70
ollama  | r23     0xffff40605268
ollama  | r24     0xffff1857d4e8
ollama  | r25     0xffff0ffec000
ollama  | r26     0xaaaac5bed210
ollama  | r27     0xffff0ffeca58
ollama  | r28     0xfff9217ae530
ollama  | r29     0xfff9217ad870
ollama  | lr      0xffff917c75f4
ollama  | sp      0xfff9217ad860
ollama  | pc      0xffff917c7608
ollama  | fault   0x0
ollama  | time=2025-11-20T04:40:34.280Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:33547/completion\": EOF"
ollama  | [GIN] 2025/11/20 - 04:40:34 | 500 |         7m47s |      172.21.0.1 | POST     "/api/chat"
ollama  | [GIN] 2025/11/20 - 04:40:34 | 500 |         12m0s |      172.21.0.1 | POST     "/api/chat"
ollama  | time=2025-11-20T04:40:34.375Z level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 2"
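
For context: CUDA arch 720 corresponds to compute capability 7.2 (the Jetson Xavier class), while the RTX A5000 is a discrete Ampere GPU at compute capability 8.6, and the backtrace shows the cuda_jetpack5 build of libggml-cuda.so being loaded. A quick way to confirm a GPU's compute capability is nvidia-smi's compute_cap query (assuming a driver recent enough to support it, which the 535 series is):

# run inside the container (or on the host)
nvidia-smi --query-gpu=name,compute_cap --format=csv
# expected output for this system:
# name, compute_cap
# NVIDIA RTX A5000, 8.6
# NVIDIA RTX A5000, 8.6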

OS

Docker

GPU

Nvidia

CPU

Other

Ollama version

0.13.0

GiteaMirror added the docker, bug, nvidia labels 2026-04-29 08:32:18 -05:00

@audunmg commented on GitHub (Nov 21, 2025):

Ollama fails with the same error on 0.12.10, so this might be related to (or a duplicate of) #13015.


@dhiltgen commented on GitHub (Nov 21, 2025):

Can you share your Docker config, in particular the environment variables you are setting? It appears to be trying to use Jetpack 5, but it sounds like you are on an SBSA ARM system with a discrete GPU, not a Jetson Jetpack ARM system. If you are setting any JETSON_JETPACK variables, remove those and try again; it should then correctly use the cuda_v12 libraries. If you aren't setting those, please run the server with OLLAMA_DEBUG=2 set and share the startup logs up to the point it reports "inference compute", so we can see why it seems to be picking the wrong CUDA library.
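
For a container setup, that variable can go in the compose environment block, e.g. (a sketch; everything except the OLLAMA_DEBUG line is placeholder):

services:
  ollama:
    environment:
     - "OLLAMA_DEBUG=2"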


@audunmg commented on GitHub (Nov 22, 2025):

Hi, thank you!
This is the docker compose.yaml:

services:
  ollama:
    container_name: ollama
    image: docker.io/ollama/ollama:latest
    #image: docker.io/ollama/ollama:0.12.9
    volumes:
     - ./data:/root/.ollama
    restart: always
    environment:
     - "OLLAMA_KEEP_ALIVE=30m"
    ports:
     - "11434:11434"
    healthcheck:
      test: ["CMD", "nvidia-smi"]
      interval: 30s
      timeout: 10s
      retries: 1
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]

I'm not setting many environment variables, since it mostly just works.
The health check is there just to restart the container if it didn't get a GPU, which only happens on reboot.

To be sure, these are the environment variables inside the container:

# docker exec -it ollama  bash
root@525c2969c0c8:/# env
NVIDIA_VISIBLE_DEVICES=all
HOSTNAME=525c2969c0c8
PWD=/
NVIDIA_DRIVER_CAPABILITIES=compute,utility
HOME=/root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=00:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.avif=01;35:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:*~=00;90:*#=00;90:*.bak=00;90:*.crdownload=00;90:*.dpkg-dist=00;90:*.dpkg-new=00;90:*.dpkg-old=00;90:*.dpkg-tmp=00;90:*.old=00;90:*.orig=00;90:*.part=00;90:*.rej=00;90:*.rpmnew=00;90:*.rpmorig=00;90:*.rpmsave=00;90:*.swp=00;90:*.tmp=00;90:*.ucf-dist=00;90:*.ucf-new=00;90:*.ucf-old=00;90:
OLLAMA_HOST=0.0.0.0:11434
TERM=xterm
SHLVL=1
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OLLAMA_KEEP_ALIVE=30m
_=/usr/bin/env
root@525c2969c0c8:/# ollama --version
ollama version is 0.13.0
root@525c2969c0c8:/#

The debug log is quite long, so I attached it as a file:

logs.txt: https://github.com/user-attachments/files/23686329/logs.txt

To me it looks like it's picking the jetpack library because that library successfully detected the discrete GPUs, but I'm not sure.


@dhiltgen commented on GitHub (Dec 1, 2025):

Thanks for sharing the logs. It looks like the jetpack libraries are enumerating the devices even though they won't work, causing us to pick the wrong version. Until we get this bug fixed, you should be able to force the correct version by setting OLLAMA_LLM_LIBRARY=cuda_v12 as a temporary workaround.
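
Applied to the compose file shared above, the workaround is one extra line in the environment block (a sketch based on the reporter's config; OLLAMA_LLM_LIBRARY=cuda_v12 is the value suggested above):

services:
  ollama:
    image: docker.io/ollama/ollama:latest
    environment:
     - "OLLAMA_KEEP_ALIVE=30m"
     - "OLLAMA_LLM_LIBRARY=cuda_v12"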


@audunmg commented on GitHub (Dec 3, 2025):

Thank you for the workaround and for your time. This works great on 0.13.0 now, and CUDA is correctly selected.


Reference: github-starred/ollama#55220