[GH-ISSUE #4604] Ollama Docker - Failing to use GPU after idle time #64929

Closed
opened 2026-05-03 19:18:00 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @chakri-corp on GitHub (May 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4604

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

~$ nvidia-smi
Fri May 24 09:41:47 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.171.04             Driver Version: 535.171.04   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0 Off |                  Off |
|  0%   42C    P8              18W / 450W |    569MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|=======================================================================================|
|    0   N/A  N/A      1640      C   /usr/local/bin/python3                       562MiB |
+---------------------------------------------------------------------------------------+

Docker execution command:
~$ sudo docker run -d --gpus=all -v /home/devteam/ollama/:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
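
(Since the container is detached (-d), the log output shown below can be retrieved with, e.g.:)

~$ sudo docker logs ollama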

Ollama runs on the GPU for some time.

After 6+ hours of idle time, Ollama fails to run on the GPU:

time=2024-05-24T03:58:55.594Z level=INFO source=gpu.go:197 msg="error looking up nvidia GPU memory" error="nvcuda failed to get primary device context 800"
time=2024-05-24T03:58:56.270Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="15.1 GiB" memory.required.full="4.6 GiB" memory.required.partial="4.6 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-24T03:58:56.270Z level=INFO source=server.go:320 msg="starting llama server" cmd="/tmp/ollama2862667729/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 2048 --batch-size 512 --embedding --log-disable --parallel 1 --port 36055"
time=2024-05-24T03:58:56.270Z level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-05-24T03:58:56.270Z level=INFO source=server.go:504 msg="waiting for llama runner to start responding"
time=2024-05-24T03:58:56.271Z level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="128305974663040" timestamp=1716523136
INFO [main] system info | n_threads=3 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="128305974663040" timestamp=1716523136 total_threads=6
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="5" port="36055" tid="128305974663040" timestamp=1716523136
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-05-24T03:58:56.522Z level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_tensors: ggml ctx size = 0.15 MiB
llm_load_tensors: CPU buffer size = 4437.80 MiB
.......................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.50 MiB
llama_new_context_with_model: CPU compute buffer size = 258.50 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="128305974663040" timestamp=1716523147
time=2024-05-24T03:59:07.329Z level=INFO source=server.go:545 msg="llama runner started in 11.06 seconds"
[GIN] 2024/05/24 - 03:59:07 | 200 | 11.995928028s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/24 - 03:59:42 | 200 | 4.595640842s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/24 - 04:11:24 | 200 | 45.693µs | 127.0.0.1 | GET "/api/version"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.38

GiteaMirror added the docker, bug, nvidia labels 2026-05-03 19:18:01 -05:00

@10067 commented on GitHub (May 24, 2024):

Met the same issue.

@Flywolfs commented on GitHub (May 24, 2024):

Me too. How do we solve this?

@rayankb92 commented on GitHub (May 24, 2024):

Same issue.

@sammcj commented on GitHub (May 24, 2024):

I had this problem as well; for me the fix was to edit /etc/docker/daemon.json and add "exec-opts": ["native.cgroupdriver=cgroupfs"] (then restart everything).

cat /etc/docker/daemon.json
{
    "builder": {
        "gc": {
            "defaultKeepStorage": "50GB",
            "enabled": true
        }
    },
    "experimental": true,
    "features": {
        "buildkit": true
    },
    "max-concurrent-downloads": 14,
    "max-concurrent-uploads": 8,
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    },
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "data-root": "/opt/docker-data/overlay2",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "1m",
        "max-file": "3"
    },
    "live-restore": true
}
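
(Concretely, "restart everything" here means restarting the Docker daemon and then the container, e.g. on a systemd host:)

sudo systemctl restart docker
sudo docker restart ollama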

@aaronnewsome commented on GitHub (May 24, 2024):

I see this same behavior. When the container starts up, Ollama serves up answers from the GPU nicely. I leave it running 24/7, so at some point I'll notice that Ollama has become very slow. When I check the processes on the GPU, I find that Ollama is not running on the GPU. It is still working, although slowly, using the CPU. If I exec into the container and run nvidia-smi, I see this:

Failed to initialize NVML: Unknown Error
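
(The check from the host, for reference, looks like:)

docker exec -it ollama nvidia-smi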

Restarting the Docker container works as expected. After a restart the GPU is used again, until it's not.

I'm currently using v0.1.38, but I see the same with earlier versions. If I recall correctly, some number of versions ago I didn't see this issue.

Does anyone have a clue what's going on here?

@sammcj commented on GitHub (May 24, 2024):

@aaronnewsome have you tried the change to the daemon.json I mentioned above?

@aaronnewsome commented on GitHub (May 25, 2024):

> @aaronnewsome have you tried the change to the daemon.json I mentioned above?

Hi Sam. Your post wasn't there when I posted mine, but I've changed the daemon.json. When I'm able to restart docker, I'll see how it works. Thanks for the tip!

@aaronnewsome commented on GitHub (May 25, 2024):

This seems to have solved the issue. After idling overnight, Ollama is still using the GPU properly.

@sammcj commented on GitHub (May 26, 2024):

Excellent, I'm submitting a PR now to update the docs with this tip.

Edit: PR: https://github.com/ollama/ollama/pull/4642

@yousecjoe commented on GitHub (Jul 3, 2024):

> I had this problem as well; for me the fix was to edit /etc/docker/daemon.json and add "exec-opts": ["native.cgroupdriver=cgroupfs"] (then restart everything).

I've been running into the "error looking up nvidia GPU memory" error, and I am testing this suggested fix.

This bug has been super annoying: most of the time I cannot use Ollama without first restarting the container.

I added "exec-opts": ["native.cgroupdriver=cgroupfs"] to my daemon.json, and it's been working without issue for many hours. Thanks!

I used Ollama and asked dolphin-llama3:8b what this line does:

Prompt

What does this line do in my docker daemon.json
"exec-opts": ["native.cgroupdriver=cgroupfs"]

dolphin-llama3:8b

The Docker daemon's JSON configuration file, usually located at '/etc/docker/daemon.json' contains various settings that control how the Docker daemon operates.

In your case, the line "exec-opts": ["native.cgroupdriver=cgroupfs"] refers to a specific setting in this configuration file. This line enables the 'cgroupfs' driver as native cgroups in Docker.

Cgroups are Linux kernel features that allow you to set limits on a process's resource usage (like CPU, memory, disk IO). By specifying native.cgroupdriver=cgroupfs, you're telling Docker to use the Cgroups cgroupfs subsystem as its default mechanism for managing and setting these limits.

This change has been made to ensure better control over container's CPU shares when multiple containers are running on a host. The native.cgroupfs driver allows Docker to directly interact with the Linux kernel’s CGroup controls, giving it more precise control over resource allocation among your containers.
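
(If you want to confirm which cgroup driver the daemon actually ended up with after the restart, docker info reports it:)

docker info | grep -i "cgroup driver"
 Cgroup Driver: cgroupfs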

@aurelienizl commented on GitHub (Sep 8, 2024):

Same problem here on an RTX 3080 with an i9-9900KF, running Ubuntu 24.04.1 LTS.
I will add "exec-opts": ["native.cgroupdriver=cgroupfs"] as suggested.
Will this be fixed in a new release?

@frostronic commented on GitHub (Sep 12, 2024):

I had this same issue running Ollama:latest on RHEL 9.4 with an RTX 5000. My docker daemon.json looked slightly different than yours, but I went ahead and made the recommended changes. Will report back if the issue returns.

@dhiltgen commented on GitHub (Oct 23, 2024):

Have folks hitting this problem found the proposed solution above to work consistently? If so, we should get this into our docs.

@aurelienizl commented on GitHub (Oct 24, 2024):

> Have folks hitting this problem found the proposed solution above to work consistently? If so, we should get this into our docs.

Adding the line in the daemon file fixed the issue for me.

@TheStigh commented on GitHub (Jan 6, 2025):

Hi @sammcj,

I still experience the same issue, even after adding the line to the JSON.
After some minutes, llama unloads from the GPU, but it does reload if I send it a question, with a delay.

My JSON was quite a bit smaller; should I add something more?

{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
}

@pschakravarthi commented on GitHub (Jan 7, 2025):

> Hi @sammcj,
>
> I still experience the same issue, even after adding the line to the JSON. After some minutes, llama unloads from the GPU, but it does reload if I send it a question, with a delay.
>
> My JSON was quite a bit smaller; should I add something more?
>
> {
>     "runtimes": {
>         "nvidia": {
>             "args": [],
>             "path": "nvidia-container-runtime"
>         }
>     }
>     "exec-opts": ["native.cgroupdriver=cgroupfs"]
> }

Maybe you are missing a comma:

{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    },
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
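
(A quick way to catch a missing comma like this is to run the file through a JSON validator, for example:)

python3 -m json.tool /etc/docker/daemon.json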

@TheStigh commented on GitHub (Jan 8, 2025):

> > Hi @sammcj,
> > I still experience the same issue, even after adding the line to the JSON. After some minutes, llama unloads from the GPU, but it does reload if I send it a question, with a delay.
> > My JSON was quite a bit smaller; should I add something more?
> >
> > {
> >     "runtimes": {
> >         "nvidia": {
> >             "args": [],
> >             "path": "nvidia-container-runtime"
> >         }
> >     }
> >     "exec-opts": ["native.cgroupdriver=cgroupfs"]
> > }
>
> Maybe you are missing a comma: { "runtimes": { "nvidia": { "args": [], "path": "nvidia-container-runtime" } }, "exec-opts": ["native.cgroupdriver=cgroupfs"] }

Where should the comma be? I cannot see where it is missing.

@pschakravarthi commented on GitHub (Jan 8, 2025):

> > > Hi @sammcj,
> > > I still experience the same issue, even after adding the line to the JSON. After some minutes, llama unloads from the GPU, but it does reload if I send it a question, with a delay.
> > > My JSON was quite a bit smaller; should I add something more?
> > >
> > > {
> > >     "runtimes": {
> > >         "nvidia": {
> > >             "args": [],
> > >             "path": "nvidia-container-runtime"
> > >         }
> > >     }
> > >     "exec-opts": ["native.cgroupdriver=cgroupfs"]
> > > }
> >
> > Maybe you are missing a comma: { "runtimes": { "nvidia": { "args": [], "path": "nvidia-container-runtime" } }, "exec-opts": ["native.cgroupdriver=cgroupfs"] }
>
> Where should the comma be? I cannot see where it is missing.

Before "exec-opts". Please check the reference I provided.

@TheStigh commented on GitHub (Jan 8, 2025):

> > > > Hi @sammcj,
> > > > I still experience the same issue, even after adding the line to the JSON. After some minutes, llama unloads from the GPU, but it does reload if I send it a question, with a delay.
> > > > My JSON was quite a bit smaller; should I add something more?
> > > >
> > > > {
> > > >     "runtimes": {
> > > >         "nvidia": {
> > > >             "args": [],
> > > >             "path": "nvidia-container-runtime"
> > > >         }
> > > >     }
> > > >     "exec-opts": ["native.cgroupdriver=cgroupfs"]
> > > > }
> > >
> > > Maybe you are missing a comma: { "runtimes": { "nvidia": { "args": [], "path": "nvidia-container-runtime" } }, "exec-opts": ["native.cgroupdriver=cgroupfs"] }
> >
> > Where should the comma be? I cannot see where it is missing.
>
> Before "exec-opts". Please check the reference I provided.

Duuh, I missed it! Thanks! :)

@themw123 commented on GitHub (Jan 24, 2025):

After some days of idle time I am still seeing that the GPU is not used anymore, and inside the Ollama container, running nvidia-smi gives:
Failed to initialize NVML: Unknown Error

On the host, nvidia-smi shows:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX 4000 SFF Ada ...    Off |   00000000:01:00.0 Off |                  Off |
| 30%   48C    P8             14W /   70W |    1860MiB /  20475MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A   2274692      C   /usr/bin/python3.12                             0MiB |
+-----------------------------------------------------------------------------------------+

ollama version:
ollama version is 0.5.7-0-ga420a45-dirty

/etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}

Do I still need the workaround with "exec-opts": ["native.cgroupdriver=cgroupfs"]? Is there no fix in the newest version?

@MrWormsy commented on GitHub (Feb 19, 2025):

Hi,

I still get the issue, and "exec-opts": ["native.cgroupdriver=cgroupfs"] still seems to be the only workaround :'(

@0xmzk commented on GitHub (Apr 3, 2025):

> Have folks hitting this problem found the proposed solution above to work consistently? If so, we should get this into our docs.

Editing /etc/docker/daemon.json seems to be a good workaround; however, it doesn't answer the question of why this is an issue in the first place. nvidia-smi runs fine for me across other Ubuntu Docker images, but not on any of the Ollama images, so some more digging is required IMO.
