[GH-ISSUE #10300] Ollama reverts to CPU after several hours #53276

Closed
opened 2026-04-29 02:26:43 -05:00 by GiteaMirror · 8 comments

Originally created by @Mugane on GitHub (Apr 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10300

What is the issue?

Even though the GPU was enabled and working fine, after several hours new requests stop using the GPU and run on CPU only. This is not a memory issue; there is abundant VRAM available for all the models used. Note that it worked fine about 7 hours ago. Restarting fixes it, but restarting every 6 hours is not an acceptable workaround.
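For anyone reproducing this, a quick way to confirm the fallback (assuming a container named ollama, which is an assumption about this particular setup):

# The PROCESSOR column of `ollama ps` reads e.g. "100% GPU" while healthy
# and "100% CPU" once the fallback has occurred.
docker exec -it ollama ollama ps

# Cross-check from the host that no ollama runner process still holds GPU memory.
nvidia-smi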

Relevant log output

2025/04/15 03:00:13 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-04-15T03:00:13.354Z level=INFO source=images.go:458 msg="total blobs: 76"
time=2025-04-15T03:00:13.356Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-15T03:00:13.356Z level=INFO source=routes.go:1298 msg="Listening on [::]:11434 (version 0.6.5)"
time=2025-04-15T03:00:13.356Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-15T03:00:13.531Z level=INFO source=types.go:130 msg="inference compute" id=GPU-7330fd38-ea59-1617-e285-fe61a2e676b4 library=cuda variant=v12 compute=7.5 driver=12.4 name="Quadro RTX 5000 with Max-Q Design" total="15.7 GiB" available="15.5 GiB"
[GIN] 2025/04/15 - 04:57:19 | 200 |      204.19µs |      172.18.0.3 | GET      "/api/version"
[GIN] 2025/04/16 - 12:26:20 | 200 |   23.451235ms |      172.18.0.3 | GET      "/api/tags"
[GIN] 2025/04/16 - 12:26:20 | 200 |     431.049µs |      172.18.0.3 | GET      "/api/version"
cuda driver library failed to get device context 800time=2025-04-16T12:26:37.528Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-04-16T12:26:37.559Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-16T12:26:37.559Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-16T12:26:37.559Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-16T12:26:37.560Z level=INFO source=sched.go:716 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-7330fd38-ea59-1617-e285-fe61a2e676b4 parallel=4 available=16599941120 required="10.8 GiB"
cuda driver library failed to get device context 800time=2025-04-16T12:26:37.562Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-04-16T12:26:37.562Z level=INFO source=server.go:105 msg="system memory" total="125.4 GiB" free="52.7 GiB" free_swap="629.5 MiB"
time=2025-04-16T12:26:37.562Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-04-16T12:26:37.562Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-04-16T12:26:37.562Z level=WARN source=ggml.go:152 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-04-16T12:26:37.562Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.0 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-04-16T12:26:38.043Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 6 --parallel 4 --port 45373"
time=2025-04-16T12:26:38.044Z level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-16T12:26:38.044Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-16T12:26:38.045Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-16T12:26:38.072Z level=INFO source=runner.go:853 msg="starting go runner"
ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-04-16T12:26:38.262Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-04-16T12:26:38.263Z level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:45373"
time=2025-04-16T12:26:38.298Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 14B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 48
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 13824
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 14B
print_info: model params     = 14.77 B
print_info: general.name     = DeepSeek R1 Distill Qwen 14B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors:   CPU_Mapped model buffer size =  8566.04 MiB
llama_init_from_model: n_seq_max     = 4
llama_init_from_model: n_ctx         = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 1000000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init:        CPU KV buffer size =  1536.00 MiB
llama_init_from_model: KV self size  = 1536.00 MiB, K (f16):  768.00 MiB, V (f16):  768.00 MiB
llama_init_from_model:        CPU  output buffer size =     2.40 MiB
llama_init_from_model:        CPU compute buffer size =   696.01 MiB
llama_init_from_model: graph nodes  = 1686
llama_init_from_model: graph splits = 1
time=2025-04-16T12:26:43.566Z level=INFO source=server.go:619 msg="llama runner started in 5.52 seconds"
[GIN] 2025/04/16 - 12:26:55 | 200 | 18.247520412s |      172.18.0.3 | POST     "/api/chat"
cuda driver library failed to get device context 800time=2025-04-16T12:31:55.740Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:55.992Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:56.243Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:56.496Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:56.744Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:56.995Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:57.248Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:57.496Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:57.743Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:57.993Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:58.244Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:58.494Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:58.745Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:58.993Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:59.242Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:59.494Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:59.743Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:31:59.995Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:32:00.246Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
cuda driver library failed to get device context 800time=2025-04-16T12:32:00.492Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-04-16T12:32:00.741Z level=WARN source=sched.go:648 msg="gpu VRAM usage didn't recover within timeout" seconds=5.003486981 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
cuda driver library failed to get device context 800time=2025-04-16T12:32:00.743Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-04-16T12:32:00.990Z level=WARN source=sched.go:648 msg="gpu VRAM usage didn't recover within timeout" seconds=5.253058704 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
cuda driver library failed to get device context 800time=2025-04-16T12:32:00.997Z level=WARN source=gpu.go:434 msg="error looking up nvidia GPU memory"
time=2025-04-16T12:32:01.241Z level=WARN source=sched.go:648 msg="gpu VRAM usage didn't recover within timeout" seconds=5.50372244 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.6.5

GiteaMirror added the bug label 2026-04-29 02:26:43 -05:00
@rick-github commented on GitHub (Apr 16, 2025):

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#linux-docker

@Mugane commented on GitHub (Apr 16, 2025):

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#linux-docker

Thanks; however, disabling cgroup management, or any other Docker resource or security parameter, globally across all running containers is not an option.


@rick-github commented on GitHub (Apr 16, 2025):

The wording is confusing. It disables systemd management of cgroups by replacing it with cgroupfs management. It doesn't disable cgroup management.

The cgroup documentation recommends using systemd for cgroup management when the rest of the system uses systemd, so the troubleshooting hint goes against that recommendation. However, there have been no reports of adverse effects from this change.
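For reference, the fix described in that doc amounts to switching Docker's cgroup driver from systemd to cgroupfs. A minimal sketch, assuming /etc/docker/daemon.json is otherwise empty (merge the key in if the file already has content):

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}

Then restart the Docker daemon so the change takes effect, e.g. sudo systemctl restart docker.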


@Mugane commented on GitHub (Apr 16, 2025):

@rick-github ok thanks, I'll research/test in that direction. I traced the issue to the following parameters in /proc/driver/nvidia/params

DynamicPowerManagement: 2
DynamicPowerManagementVideoMemoryThreshold: 200
EnableS0ixPowerManagement: 0
S0ixPowerManagementVideoMemoryThreshold: 256

Apparently these settings indirectly control this behavior by managing power states and memory thresholds: when a threshold is reached, the GPU may enter a lower power state or terminate connections. With luck, adjusting a system power-management setting will fix this. I'll report back.
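If the power-management hypothesis holds, one way to test it is to turn off the driver's runtime power management at the module level. A sketch, on the assumption that NVreg_DynamicPowerManagement is the module parameter behind the DynamicPowerManagement value above (0x00 disables it):

# /etc/modprobe.d/nvidia-pm.conf  (hypothetical filename)
# Disable NVIDIA runtime power management entirely while testing.
options nvidia NVreg_DynamicPowerManagement=0x00

A reboot (and on some distros an initramfs rebuild, e.g. sudo update-initramfs -u) is needed for the option to take effect.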

I wish there were a permission that could be assigned to the container in docker-compose.yml letting it run nvidia-smi --gpu-reset or something similar to reconnect a disconnected GPU, instead of making a global change that other containers may (intentionally or not) end up abusing...


@Mugane commented on GitHub (Apr 17, 2025):

I should add that, with all due respect, this resource-priority profile is vaguely megalomaniacal. Whatever the solution to this issue ends up being, Ollama's behavior seems awkward: falling back to CPU after what is unambiguously a basic permissions timeout is a major interruption to countless other processes that never anticipated this kind of resource competition. Once a GPU has been detected, taking over system hardware "for performance reasons" is at best pointless: CPU inference will never come close, yet it will interrupt all sorts of other things while it tries ad nauseam. It raises an interesting question of what role a piece of software should assume it has in the universe of instances where it runs. At the very least, I suggest adding an option to turn this behavior off, e.g. "Do not CPU after GPU".


@rick-github commented on GitHub (Apr 17, 2025):

ExecStart=bash -c 'exec prlimit --data=$[500 * 1024 * 1024] /usr/local/bin/ollama serve'

@Mugane commented on GitHub (Apr 30, 2025):

ExecStart=bash -c 'exec prlimit --data=$[500 * 1024 * 1024] /usr/local/bin/ollama serve'

What is the purpose of this, and where is it intended to be run? Thanks


@rick-github commented on GitHub (Apr 30, 2025):

It sets resource limits on the ollama process, in the systemd service configuration file.
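My reading of the suggestion (an interpretation, not rick-github's words): prlimit --data caps the process's data segment at roughly 500 MiB, so a model load that would otherwise fall back to system RAM fails outright instead of silently running on CPU. On a systemd-managed install, the line would go in a drop-in override, roughly:

# created via: sudo systemctl edit ollama.service
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
# Clear the packaged ExecStart before replacing it.
ExecStart=
ExecStart=bash -c 'exec prlimit --data=$[500 * 1024 * 1024] /usr/local/bin/ollama serve'

Followed by sudo systemctl daemon-reload && sudo systemctl restart ollama.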


Reference: github-starred/ollama#53276