[GH-ISSUE #10967] Model fallback to new CPU instance despite existing GPU instance. #32986

Closed
opened 2026-04-22 15:01:53 -05:00 by GiteaMirror · 26 comments

Originally created by @TheNha on GitHub (Jun 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10967

What is the issue?

I'm serving a model on Ollama using the GPU at 100% capacity, with keep-alive set to -1. Initially, some requests are correctly handled by the GPU instance.

However, after a few successful requests, subsequent requests are served by a new instance of the model running on the CPU, while the original GPU-based instance is still active and continues to occupy GPU memory.

This behavior leads to inefficient resource usage and degraded inference performance due to CPU fallback, even though the GPU instance is still running and using memory.

Expected Behavior:
Requests should continue to be served by the existing GPU instance as long as it is active and available, especially with keep-alive: -1.
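For context, keep-alive can be pinned either server-wide (OLLAMA_KEEP_ALIVE=-1) or per request; a minimal sketch of the per-request form against the default endpoint (the model name here is only an example):

curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:14b",
  "prompt": "hello",
  "keep_alive": -1
}'
# keep_alive: -1 asks the server to keep this model loaded indefinitely after the request completes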

Image: https://github.com/user-attachments/assets/1633fb00-4a94-46ec-a418-ac4f2d58bc7f

Relevant log output

ERROR 2025-06-04T06:40:29.625833Z [resource.labels.containerName: app]
time=2025-06-04T06:40:29.650Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="83.5 GiB" before.free="79.0 GiB" before.free_swap="0 B" now.total="83.5 GiB" now.free="79.0 GiB" now.free_swap="0 B"
initializing /usr/local/nvidia/lib64/libcuda.so.535.183.06
dlsym: cuInit - 0x7f0fba655520
dlsym: cuDriverGetVersion - 0x7f0fba655540
dlsym: cuDeviceGetCount - 0x7f0fba655580
dlsym: cuDeviceGet - 0x7f0fba655560
dlsym: cuDeviceGetAttribute - 0x7f0fba655660
dlsym: cuDeviceGetUuid - 0x7f0fba6555c0
dlsym: cuDeviceGetName - 0x7f0fba6555a0
dlsym: cuCtxCreate_v3 - 0x7f0fba65d220
dlsym: cuMemGetInfo_v2 - 0x7f0fba6686f0
dlsym: cuCtxDestroy - 0x7f0fba6b76f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2025-06-04T06:40:29.898Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-fe6b3872-6b48-3c13-7ef4-8174e69fd244 name="NVIDIA A100-SXM4-40GB" overhead="0 B" before.total="39.4 GiB" before.free="37.7 GiB" now.total="39.4 GiB" now.free="37.7 GiB" now.used="1.7 GiB"
releasing cuda driver library
time=2025-06-04T06:40:29.940Z level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-b576e13fe5c36652c9277c2e649a4950bfb8bac04eb63417a512cc902dc032d5
time=2025-06-04T06:40:29.940Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[37.7 GiB]"
time=2025-06-04T06:40:29.940Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-06-04T06:40:29.941Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-06-04T06:40:29.941Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-06-04T06:40:29.941Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[37.7 GiB]"
time=2025-06-04T06:40:29.941Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-06-04T06:40:29.942Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-06-04T06:40:29.942Z level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-06-04T06:40:29.942Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-b576e13fe5c36652c9277c2e649a4950bfb8bac04eb63417a512cc902dc032d5 gpu=GPU-fe6b3872-6b48-3c13-7ef4-8174e69fd244 parallel=1 available=40450719744 required="18.0 GiB"
time=2025-06-04T06:40:29.942Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="83.5 GiB" before.free="79.0 GiB" before.free_swap="0 B" now.total="83.5 GiB" now.free="79.0 GiB" now.free_swap="0 B"
initializing /usr/local/nvidia/lib64/libcuda.so.535.183.06
dlsym: cuInit - 0x7f0fba655520
dlsym: cuDriverGetVersion - 0x7f0fba655540
dlsym: cuDeviceGetCount - 0x7f0fba655580
dlsym: cuDeviceGet - 0x7f0fba655560
dlsym: cuDeviceGetAttribute - 0x7f0fba655660
dlsym: cuDeviceGetUuid - 0x7f0fba6555c0
dlsym: cuDeviceGetName - 0x7f0fba6555a0
dlsym: cuCtxCreate_v3 - 0x7f0fba65d220
dlsym: cuMemGetInfo_v2 - 0x7f0fba6686f0
dlsym: cuCtxDestroy - 0x7f0fba6b76f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2025-06-04T06:40:30.158Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-fe6b3872-6b48-3c13-7ef4-8174e69fd244 name="NVIDIA A100-SXM4-40GB" overhead="0 B" before.total="39.4 GiB" before.free="37.7 GiB" now.total="39.4 GiB" now.free="37.7 GiB" now.used="1.7 GiB"
releasing cuda driver library
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/Qwen2.5-Coder-14B-Instruc...
llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 336
llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128
llama_model_loader: - type f32: 241 tensors
llama_model_loader: - type q6_K: 338 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q6_K
print_info: file size = 11.29 GiB (6.56 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 151660 '<|fim_middle|>' is not marked as EOG
load: control token: 151659 '<|fim_prefix|>' is not marked as EOG
load: control token: 151653 '<|vision_end|>' is not marked as EOG
load: control token: 151648 '<|box_start|>' is not marked as EOG
load: control token: 151646 '<|object_ref_start|>' is not marked as EOG
load: control token: 151649 '<|box_end|>' is not marked as EOG
load: control token: 151655 '<|image_pad|>' is not marked as EOG
load: control token: 151651 '<|quad_end|>' is not marked as EOG
load: control token: 151647 '<|object_ref_end|>' is not marked as EOG
load: control token: 151652 '<|vision_start|>' is not marked as EOG
load: control token: 151654 '<|vision_pad|>' is not marked as EOG
load: control token: 151656 '<|video_pad|>' is not marked as EOG
load: control token: 151644 '<|im_start|>' is not marked as EOG
load: control token: 151661 '<|fim_suffix|>' is not marked as EOG
load: control token: 151650 '<|quad_start|>' is not marked as EOG
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 14.77 B
print_info: general.name = Qwen2.5 Coder 14B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-04T06:40:30.482Z level=DEBUG source=server.go:335 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
time=2025-06-04T06:40:30.482Z level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
time=2025-06-04T06:40:30.482Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-b576e13fe5c36652c9277c2e649a4950bfb8bac04eb63417a512cc902dc032d5 --ctx-size 25000 --batch-size 512 --n-gpu-layers 49 --verbose --threads 6 --parallel 1 --port 34613"
time=2025-06-04T06:40:30.483Z level=DEBUG source=server.go:423 msg=subprocess environment="[LD_LIBRARY_PATH=/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama/cuda_v12:/usr/lib/ollama PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin CUDA_VISIBLE_DEVICES=GPU-fe6b3872-6b48-3c13-7ef4-8174e69fd244]"
time=2025-06-04T06:40:30.483Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-06-04T06:40:30.483Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-06-04T06:40:30.484Z level=WARN source=server.go:587 msg="client connection closed before server finished loading, aborting load"
time=2025-06-04T06:40:30.484Z level=ERROR source=sched.go:456 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
time=2025-06-04T06:40:30.484Z level=DEBUG source=sched.go:459 msg="triggering expiration for failed load" model=/root/.ollama/models/blobs/sha256-b576e13fe5c36652c9277c2e649a4950bfb8bac04eb63417a512cc902dc032d5
time=2025-06-04T06:40:30.484Z level=DEBUG source=sched.go:361 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-b576e13fe5c36652c9277c2e649a4950bfb8bac04eb63417a512cc902dc032d5
time=2025-06-04T06:40:30.484Z level=DEBUG source=sched.go:376 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-b576e13fe5c36652c9277c2e649a4950bfb8bac04eb63417a512cc902dc032d5
time=2025-06-04T06:40:30.484Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="83.5 GiB" before.free="79.0 GiB" before.free_swap="0 B" now.total="83.5 GiB" now.free="78.8 GiB" now.free_swap="0 B"
initializing /usr/local/nvidia/lib64/libcuda.so.535.183.06

OS

No response

GPU

No response

CPU

No response

Ollama version

0.6.2

GiteaMirror added the bug label 2026-04-22 15:01:53 -05:00

@sherpya commented on GitHub (Jun 4, 2025):

Same problem: the GPU memory gets freed. Version 0.7.0 does not have the bug; I believe the first affected version is 0.7.1.

@sherpya commented on GitHub (Jun 4, 2025):

> Same problem: the GPU memory gets freed. Version 0.7.0 does not have the bug; I believe the first affected version is 0.7.1.

Hmm, no, I got the same problem on 0.7.0.

@rick-github commented on GitHub (Jun 4, 2025):

@TheNha Sounds like #10433. Try upgrading.
@sherpya Since you are on 0.7.*, it's likely not #10433 and may not be the same problem. Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

@sherpya commented on GitHub (Jun 4, 2025):

> @TheNha Sounds like #10433. Try upgrading. @sherpya Since you are on 0.7.*, it's likely not #10433 and may not be the same problem. Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

I have the same problem on version 0.9.0; it looks related to the num_context param sent by Open WebUI.

@rick-github commented on GitHub (Jun 4, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

@sherpya commented on GitHub (Jun 4, 2025):

> Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

log.txt: https://github.com/user-attachments/files/20593712/log.txt

When it says:
Jun 04 13:08:54 propheta ollama[22305]: time=2025-06-04T13:08:54.184+02:00 level=WARN source=server.go:199 msg="flash attention enabled but not supported by gpu"
it's because it has switched to the CPU.
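As a side note, one quick way to confirm from the server side where each load ended up, assuming Ollama runs as the systemd service named "ollama" as in this setup, is to filter the journal for the scheduler's offload decisions:

# library=cpu in the offload line means the runner was placed on the CPU
journalctl -u ollama --since "1 hour ago" | grep msg=offload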

@rick-github commented on GitHub (Jun 4, 2025):

Jun 04 13:08:54 propheta ollama[22305]: time=2025-06-04T13:08:54.184+02:00 level=INFO source=server.go:168 msg=offload
 library=cpu layers.requested=0 layers.model=63 layers.offload=0 layers.split="" memory.available="[60.3 GiB]"
 memory.gpu_overhead="0 B" memory.required.full="29.9 GiB" memory.required.partial="0 B" memory.required.kv="1.3 GiB"
 memory.required.allocations="[3.2 GiB]" memory.weights.total="26.7 GiB" memory.weights.repeating="25.3 GiB"
 memory.weights.nonrepeating="1.4 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB"
 projector.weights="795.9 MiB" projector.graph="1.0 GiB"

The model is not being loaded into GPU because num_gpu is set to 0.
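For anyone trying to reproduce this, the same CPU-only placement can be forced deliberately by sending num_gpu: 0 in the request options; a rough sketch using the model discussed later in this thread (any model name works):

curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:27b-it-q8_0",
  "prompt": "hello",
  "options": { "num_gpu": 0 }
}'
# num_gpu is the number of layers offloaded to the GPU; 0 keeps everything on the CPU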

@sherpya commented on GitHub (Jun 4, 2025):

> Jun 04 13:08:54 propheta ollama[22305]: time=2025-06-04T13:08:54.184+02:00 level=INFO source=server.go:168 msg=offload
> library=cpu layers.requested=0 layers.model=63 layers.offload=0 layers.split="" memory.available="[60.3 GiB]"
> memory.gpu_overhead="0 B" memory.required.full="29.9 GiB" memory.required.partial="0 B" memory.required.kv="1.3 GiB"
> memory.required.allocations="[3.2 GiB]" memory.weights.total="26.7 GiB" memory.weights.repeating="25.3 GiB"
> memory.weights.nonrepeating="1.4 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB"
> projector.weights="795.9 MiB" projector.graph="1.0 GiB"
>
> The model is not being loaded into GPU because num_gpu is set to 0.

Only after it fails. The first time, it is loaded on the GPU:

Jun 04 12:33:01 propheta ollama[22305]: time=2025-06-04T12:33:01.776+02:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="8.6 GiB"
Jun 04 12:33:01 propheta ollama[22305]: time=2025-06-04T12:33:01.776+02:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA1 size="8.6 GiB"
Jun 04 12:33:01 propheta ollama[22305]: time=2025-06-04T12:33:01.776+02:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA2 size="10.4 GiB"
Jun 04 12:33:01 propheta ollama[22305]: time=2025-06-04T12:33:01.776+02:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="1.4 GiB"
@rick-github commented on GitHub (Jun 4, 2025):

The title of this issue is "Model fallback to new CPU instance despite existing GPU instance". If you load it the first time on the GPU, then load it a second time with num_gpu: 0 and it loads on the CPU, that's working as intended. If that's not what you are expecting, please provide more details.

@TheNha commented on GitHub (Jun 4, 2025):

My requests are still the same. They come from different clients, for example Continue and Tabby in VS Code, and IntelliJ. But I don't understand why the next request is served by a new CPU instance.

@rick-github commented on GitHub (Jun 4, 2025):

> My requests are still the same. They come from different clients, for example Continue and Tabby in VS Code, and IntelliJ. But I don't understand why the next request is served by a new CPU instance.

Did you upgrade ollama?

@sherpya commented on GitHub (Jun 4, 2025):

> The title of this issue is "Model fallback to new CPU instance despite existing GPU instance". If you load it the first time on the GPU, then load it a second time with num_gpu: 0 and it loads on the CPU, that's working as intended. If that's not what you are expecting, please provide more details.

So the second instance is loaded on the CPU (for no apparent reason) and the first one on the GPU is stopped. Why?

@rick-github commented on GitHub (Jun 4, 2025):

> So the second instance is loaded on the CPU (for no apparent reason) and the first one on the GPU is stopped. Why?

From the logs, num_gpu: 0 is why the second instance is loaded on the CPU. If you want to make more progress, provide details on how you run these models and any configuration settings you have set.

@TheNha commented on GitHub (Jun 4, 2025):

Which version fixed my problem? Please point me to the documentation.

@sherpya commented on GitHub (Jun 4, 2025):

> So the second instance is loaded on the CPU (for no apparent reason) and the first one on the GPU is stopped. Why?
>
> From the logs, num_gpu: 0 is why the second instance is loaded on the CPU. If you want to make more progress, provide details on how you run these models and any configuration settings you have set.

This is not supposed to happen:

Jun 04 12:33:01 propheta ollama[22305]: time=2025-06-04T12:33:01.503+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
Jun 04 13:08:53 propheta ollama[22305]: time=2025-06-04T13:08:53.297+02:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.232842952 runner.size="40.5 GiB" runner.vram="40.5 GiB" runner.parallel=2 runner.pid=22452 runner.model=/var/lib/ollama/models/blobs/sha256-1b046227289ed378bde9b80154dc3da99071e34eac3e08e5770d1966b19be2a8
Jun 04 13:08:53 propheta ollama[22305]: time=2025-06-04T13:08:53.547+02:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.483225117 runner.size="40.5 GiB" runner.vram="40.5 GiB" runner.parallel=2 runner.pid=22452 runner.model=/var/lib/ollama/models/blobs/sha256-1b046227289ed378bde9b80154dc3da99071e34eac3e08e5770d1966b19be2a8
Jun 04 13:08:53 propheta ollama[22305]: time=2025-06-04T13:08:53.797+02:00 level=WARN source=sched.go:676 msg="gpu VRAM usage didn't recover within timeout" seconds=5.732895527 runner.size="40.5 GiB" runner.vram="40.5 GiB" runner.parallel=2 runner.pid=22452 runner.model=/var/lib/ollama/models/blobs/sha256-1b046227289ed378bde9b80154dc3da99071e34eac3e08e5770d1966b19be2a8

Running with:

OLLAMA_HOST=0.0.0.0 OLLAMA_MODELS=/var/lib/ollama/models OLLAMA_FLASH_ATTENTION=1 OLLAMA_ORIGINS=* OLLAMA_KEEP_ALIVE=-1 ollama serve

Now it runs normally (without the offending Open WebUI client).

Distributor ID: Ubuntu
Description:    Ubuntu 24.04.2 LTS
Release:        24.04
Codename:       noble

Image: https://github.com/user-attachments/assets/94844a6e-2910-47ae-bd46-9609314ee8e7

Linux propheta 6.8.0-60-generic #63-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 19:04:15 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

ii  libnvidia-cfg1-535-server:amd64       535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA binary OpenGL/GLX configuration library
ii  libnvidia-common-535-server           535.247.01-0ubuntu0.24.04.1              all          Shared files used by the NVIDIA libraries
ii  libnvidia-compute-535-server:amd64    535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA libcompute package
ii  libnvidia-decode-535-server:amd64     535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA Video Decoding runtime libraries
ii  libnvidia-egl-wayland1:amd64          1:1.1.13-1build1                         amd64        Wayland EGL External Platform library -- shared library
ii  libnvidia-encode-535-server:amd64     535.247.01-0ubuntu0.24.04.1              amd64        NVENC Video Encoding runtime library
ii  libnvidia-extra-535-server:amd64      535.247.01-0ubuntu0.24.04.1              amd64        Extra libraries for the NVIDIA Server Driver
ii  libnvidia-fbc1-535-server:amd64       535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA OpenGL-based Framebuffer Capture runtime library
ii  libnvidia-gl-535-server:amd64         535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
ii  nvidia-compute-utils-535-server       535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA compute utilities
ii  nvidia-dkms-535-server                535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA DKMS package
ii  nvidia-driver-535-server              535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA Server Driver metapackage
ii  nvidia-firmware-535-server-535.247.01 535.247.01-0ubuntu0.24.04.1              amd64        Firmware files used by the kernel module
ii  nvidia-kernel-common-535-server       535.247.01-0ubuntu0.24.04.1              amd64        Shared files used with the kernel module
ii  nvidia-kernel-source-535-server       535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA kernel source package
ii  nvidia-utils-535-server               535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA Server Driver support binaries
ii  xserver-xorg-video-nvidia-535-server  535.247.01-0ubuntu0.24.04.1              amd64        NVIDIA binary Xorg driver

Do you need any other info?

@rick-github commented on GitHub (Jun 4, 2025):

> Which version fixed my problem? Please point me to the documentation.

https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-upgrade-ollama

@rick-github commented on GitHub (Jun 4, 2025):

> Now it runs normally (without the offending Open WebUI client).

So this only happens with Open WebUI? Sounds like you have configured num_gpu:0 in the client.
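A quick way to check where the currently loaded instance actually sits, independent of any client settings, is the running-model listing (assuming a recent Ollama build):

# the PROCESSOR column shows the split, e.g. "100% GPU" or "100% CPU"
ollama ps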

@sherpya commented on GitHub (Jun 4, 2025):

> Now it runs normally (without the offending Open WebUI client).
>
> So this only happens with Open WebUI? Sounds like you have configured num_gpu:0 in the client.

We have 3 clients; the problematic one had num_context changed, nothing else.

@TheNha commented on GitHub (Jun 4, 2025):

I mean any documentation that says they fixed my problem.
I use Docker and I know how to upgrade.

@rick-github commented on GitHub (Jun 4, 2025):

> I mean any documentation that says they fixed my problem.

The bug I linked to in my first response points to a PR that fixed this sort of problem. Whether it fixes your problem is unknown until you upgrade and test.

@rick-github commented on GitHub (Jun 4, 2025):

> We have 3 clients; the problematic one had num_context changed, nothing else.

The logs indicate that num_ctx:0 is causing the model to be loaded into CPU. Either the client or the model has configured that. What does the Modelfile for the model in question look like?
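For reference, the Modelfile that Ollama has stored (including any PARAMETER lines such as num_ctx or num_gpu baked into the model) can be dumped directly, for example:

ollama show --modelfile gemma3:27b-it-q8_0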

@TheNha commented on GitHub (Jun 4, 2025):

OK. I will try to upgrade to the latest version and check, then I will update the issue here.
Thank you very much.

@sherpya commented on GitHub (Jun 4, 2025):

> We have 3 clients; the problematic one had num_context changed, nothing else.
>
> The logs indicate that num_ctx:0 is causing the model to be loaded into CPU. Either the client or the model has configured that. What does the Modelfile for the model in question look like?

gemma3 from ollama registry:

# To build a new Modelfile based on this, replace FROM with:
# FROM gemma3:27b-it-q8_0

FROM /var/lib/ollama/models/blobs/sha256-1b046227289ed378bde9b80154dc3da99071e34eac3e08e5770d1966b19be2a8
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if or (eq .Role "user") (eq .Role "system") }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{ .Content }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- end }}
{{- end }}"""
PARAMETER top_k 64
PARAMETER top_p 0.95
PARAMETER stop <end_of_turn>
PARAMETER temperature 1
@rick-github commented on GitHub (Jun 4, 2025):

Then it seems the client is setting num_gpu:0.

@sherpya commented on GitHub (Jun 4, 2025):

> Then it seems the client is setting num_gpu:0.

I'll try playing with the client; the only change was the max context. Maybe something went bad on the Open WebUI side.

@duck-5 commented on GitHub (Jun 10, 2025):

Can you try using Wireshark to capture the JSON request?
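If Wireshark is awkward on a headless server, a rough alternative sketch is to capture the request bodies with tcpdump (this assumes the default port 11434 and a client connecting over the loopback interface; adjust -i for remote clients):

# print packet payloads in ASCII so the JSON options sent by the client are visible
sudo tcpdump -i lo -A tcp port 11434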

Reference: github-starred/ollama#32986