[GH-ISSUE #1920] ollama + docker fails in GPU mode due to CUDA error #1103

Closed
opened 2026-04-12 10:51:04 -05:00 by GiteaMirror · 9 comments

Originally created by @giansegato on GitHub (Jan 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1920

Originally assigned to: @dhiltgen on GitHub.

nvidia-smi:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-SXM4-40GB          On  | 00000000:07:00.0 Off |                    0 |
| N/A   41C    P0              73W / 400W |      4MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

but if I run the example in the docker docs:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run phi

it spins for a while and then hard crashes without ever returning.
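
(For reference, the same crash output can be pulled from the detached container itself, without switching to Compose:)

# Follow the server log of the container named "ollama" from the docker run example above
docker logs -f ollama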

If I do it in docker-compose, I get to see more logs:

version: '3.8'
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - gpus=all
    ports:
      - "11434:11434"
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

request:

curl http://127.0.0.1:11434/api/generate -d '{
  "model": "phi",
  "prompt":"Why is the sky blue?"
}'

What I get is this:

ollama_1  | 2024/01/11 08:24:48 images.go:808: total blobs: 6
ollama_1  | 2024/01/11 08:24:48 images.go:815: total unused blobs removed: 0
ollama_1  | 2024/01/11 08:24:48 routes.go:930: Listening on [::]:11434 (version 0.1.19)
ollama_1  | 2024/01/11 08:24:49 shim_ext_server.go:142: Dynamic LLM variants [cuda]
ollama_1  | 2024/01/11 08:24:49 gpu.go:35: Detecting GPU type
ollama_1  | 2024/01/11 08:24:49 gpu.go:54: Nvidia GPU detected

(...)

/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/ollama1061409751/cuda
ollama_1  | 2024/01/11 08:26:00 shim_ext_server.go:92: Loading Dynamic Shim llm server: /tmp/ollama1061409751/cuda/libext_server.so
ollama_1  | 2024/01/11 08:26:00 ext_server_common.go:136: Initializing internal llama server8.0

(...)

ollama_1  | llm_load_tensors: offloading 32 repeating layers to GPU
ollama_1  | llm_load_tensors: offloading non-repeating layers to GPU
ollama_1  | llm_load_tensors: offloaded 33/33 layers to GPU
ollama_1  | llm_load_tensors: VRAM used: 0.00 MiB
ollama_1  | ...........................................................................................
ollama_1  | llama_new_context_with_model: n_ctx      = 2048
ollama_1  | llama_new_context_with_model: freq_base  = 10000.0
ollama_1  | llama_new_context_with_model: freq_scale = 1


ollama_1  | CUDA error 3 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: initialization error
ollama_1  | current device: 1882806432
ollama_1  | GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: !"CUDA error"
ollama_1  | Lazy loading /tmp/ollama3369185958/cuda/libext_server.so library
ollama_1  | SIGABRT: abort
ollama_1  | PC=0x7f3bd30369fc m=8 sigcode=18446744073709551610
ollama_1  | signal arrived during cgo execution
ollama_1  |
ollama_1  | goroutine 710 [syscall]:
ollama_1  | runtime.cgocall(0x9c0510, 0xc0003223d0)
ollama_1  |  /usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0003223a8 sp=0xc000322370 pc=0x42666b
ollama_1  | github.com/jmorganca/ollama/llm._Cfunc_dynamic_shim_llama_server_init({0x7f3b70001fe0, 0x7f3adbd4bb30, 0x7f3adbd3ed70, 0x7f3adbd41150, 0x7f3adbd58910, 0x7f3adbd49020, 0x7f3adbd40ff0, 0x7f3adbd3ee10, 0x7f3adbd58a40, 0x7f3adbd58de0, ...}, ...)
ollama_1  |  _cgo_gotypes.go:291 +0x45 fp=0xc0003223d0 sp=0xc0003223a8 pc=0x7ccc45
ollama_1  | github.com/jmorganca/ollama/llm.(*shimExtServer).llama_server_init.func1(0x456bdb?, 0x80?, 0x80?)
ollama_1  |  /go/src/github.com/jmorganca/ollama/llm/shim_ext_server.go:40 +0xec fp=0xc0003224c0 sp=0xc0003223d0 pc=0x7d200c

(...)

ollama_1  | net.(*netFD).Read(0xc00048e080, {0xc0004aa461?, 0x0?, 0x0?})
ollama_1  |     /usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc000521700 sp=0xc0005216b8 pc=0x586885
ollama_1  | net.(*conn).Read(0xc00007e090, {0xc0004aa461?, 0x0?, 0x0?})
ollama_1  |     /usr/local/go/src/net/net.go:179 +0x45 fp=0xc000521748 sp=0xc000521700 pc=0x594b25
ollama_1  | net.(*TCPConn).Read(0x0?, {0xc0004aa461?, 0x0?, 0x0?})
ollama_1  |     <autogenerated>:1 +0x25 fp=0xc000521778 sp=0xc000521748 pc=0x5a6a25
ollama_1  | net/http.(*connReader).backgroundRead(0xc0004aa450)
ollama_1  |     /usr/local/go/src/net/http/server.go:683 +0x37 fp=0xc0005217c8 sp=0xc000521778 pc=0x6e1617
ollama_1  | net/http.(*connReader).startBackgroundRead.func2()
ollama_1  |     /usr/local/go/src/net/http/server.go:679 +0x25 fp=0xc0005217e0 sp=0xc0005217c8 pc=0x6e1545
ollama_1  | runtime.goexit()
ollama_1  |     /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005217e8 sp=0xc0005217e0 pc=0x48ae21
ollama_1  | created by net/http.(*connReader).startBackgroundRead in goroutine 82
ollama_1  |     /usr/local/go/src/net/http/server.go:679 +0xba
ollama_1  | 
ollama_1  | rax    0x0
ollama_1  | rbx    0x7fa883fff640
ollama_1  | rcx    0x7fa95ddf99fc
ollama_1  | rdx    0x6
ollama_1  | rdi    0x1
ollama_1  | rsi    0x27
ollama_1  | rbp    0x27
ollama_1  | rsp    0x7fa883ffcec0
ollama_1  | r8     0x7fa883ffcf90
ollama_1  | r9     0x7fa883ffcf20
ollama_1  | r10    0x8
ollama_1  | r11    0x246
ollama_1  | r12    0x6
ollama_1  | r13    0x16
ollama_1  | r14    0x7fa883ffd0ec
ollama_1  | r15    0x0
ollama_1  | rip    0x7fa95ddf99fc
ollama_1  | rflags 0x246
ollama_1  | cs     0x33
ollama_1  | fs     0x0
ollama_1  | gs     0x0
ollama_ollama_1 exited with code 2
GiteaMirror added the bug label 2026-04-12 10:51:04 -05:00

@retrokit-max commented on GitHub (Jan 19, 2024):

Encountered this exact error output when using Ollama on a laptop with an RTX 3070. Ollama was run via Docker Compose with the codellama model when I encountered this error. The same error occurred when attempting to use the llama2 model.


@dhiltgen commented on GitHub (Jan 27, 2024):

@giansegato we've fixed a number of CUDA-related bugs since version 0.1.19. I'm not sure if that will fix the problem you're facing, but please give the latest release a try. (Make sure to re-pull or specify tag 0.1.22.)
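
(For the Docker setup above, picking up the newer build means re-pulling the image and recreating the container, since the existing container keeps running the cached image; roughly, with the names from the issue's docker run example:)

# Pull the updated image (or pin the tag explicitly: ollama/ollama:0.1.22)
docker pull ollama/ollama
# Recreate the container; the named volume keeps the downloaded models
docker rm -f ollama
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama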


@retrokit-max commented on GitHub (Jan 27, 2024):

I actually solved this issue on my laptop with a simple driver update. Ollama is now running as expected with no other changes made to the config/setup.
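
(For anyone wanting to try the same fix, a quick way to see what driver the host is currently running, plus an Ubuntu-flavoured upgrade sketch; the upgrade command is distro-specific and only illustrative:)

# Current driver version on the host
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# Ubuntu example: install the recommended driver, then reboot so the new module loads
sudo ubuntu-drivers autoinstall
sudo reboot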


@dhiltgen commented on GitHub (Jan 27, 2024):

That's great to hear @retrokit-max!

@giansegato can you give that approach a shot as well as upgrading to 0.1.22 and see if your problem is resolved?


@dhiltgen commented on GitHub (Feb 19, 2024):

@giansegato please let us know if you're still having problems.


@giansegato commented on GitHub (Feb 22, 2024):

Thanks y'all. For the record, I tried again and couldn't reproduce anymore! 🥳


@Yaffa16 commented on GitHub (Mar 27, 2024):

I'm having the same error:
ollama_api:
  image: ollama/ollama:latest
  ports:
    - 11434:11434
  volumes:
    - ollama_data:/root/.ollama
  restart: always
  networks:
    traefik:
  labels:
    com.centurylinklabs.watchtower.enable: 'true'
    com.centurylinklabs.watchtower.scope: hertz-lab
    traefik.enable: 'true'
    traefik.http.routers.json2-flatware.rule: Host(`ollamadocker.flatware.hertz-lab.zkm.de`)
    traefik.http.routers.json2-flatware.entryPoints: websecure
    traefik.http.routers.json2-flatware.tls: 'true'
    traefik.http.routers.json2-flatware.tls.certresolver: letsencrypt
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ['0']
            capabilities: [gpu]

The error logs:

time=2024-03-27T16:23:25.129Z level=INFO source=images.go:806 msg="total blobs: 16"
time=2024-03-27T16:23:25.129Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-27T16:23:25.130Z level=INFO source=routes.go:1110 msg="Listening on [::]:11434 (version 0.1.29)"
time=2024-03-27T16:23:25.130Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2800678677/runners ..."
time=2024-03-27T16:23:30.109Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx2 cpu cuda_v11 rocm_v60000 cpu_avx]"
time=2024-03-27T16:23:30.109Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-27T16:23:30.109Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-27T16:23:30.109Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.15]"
time=2024-03-27T16:23:30.118Z level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
time=2024-03-27T16:23:30.118Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-27T16:23:30.125Z level=INFO source=gpu.go:109 msg="error looking up CUDA GPU memory: device memory info lookup failure 0: 4"
time=2024-03-27T16:23:30.125Z level=INFO source=routes.go:1133 msg="no GPU detected"


@dhiltgen commented on GitHub (Mar 27, 2024):

@Yaffa16 error looking up CUDA GPU memory: device memory info lookup failure 0: 4 -- error code 4 from CUDA relates to drivers being unloaded. I'd suggest trying to get nvidia-smi to work inside a container to confirm you have your container runtime set up correctly, and if that works and ollama is still unable to discover the GPU with the latest version, please open a new issue with your server logs so we can investigate.
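
(A concrete form of that check, assuming the NVIDIA Container Toolkit is installed on the host; the CUDA image tag is only an example:)

# Should print the same GPU table as running nvidia-smi on the host;
# if this fails, the container runtime setup is the problem, not Ollama
docker run --rm --gpus=all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi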


@Yaffa16 commented on GitHub (Apr 15, 2024):

> @Yaffa16 error looking up CUDA GPU memory: device memory info lookup failure 0: 4 -- error code 4 from CUDA relates to drivers being unloaded. I'd suggest trying to get nvidia-smi to work inside a container to confirm you have your container runtime set up correctly, and if that works and ollama is still unable to discover the GPU with the latest version, please open a new issue with your server logs so we can investigate.

Hi, I have opened an issue here: https://github.com/ollama/ollama/issues/3647


Reference: github-starred/ollama#1103