[GH-ISSUE #10114] Ollama not freeing and eventually running out of memory [all models] #68692

Closed
opened 2026-05-04 14:52:10 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @michal0000000 on GitHub (Apr 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10114

What is the issue?

I wrote a script to stress test the server. What I'm seeing is that all models across the board eventually run out of memory, as if memory were not being freed after generation. I am aware of #10040, but this problem is not exclusive to gemma3. It also persists in version 0.6.4, where the gemma issue is supposedly fixed.

Tested models: gemma3:12b, mistral:7b, phi4:14b

GPU: NVIDIA GeForce RTX 5090

Ollama setup:

docker run -d \
        --gpus=all \
        -v /usr/share/ollama/.ollama/:/root/.ollama/ \
        -p 11434:11434 \
        -e https_proxy=http://192.168.230.254:8888/ \
        -e HTTPS_PROXY=http://192.168.230.254:8888 \
        -e HTTP_PROXY=http://192.168.230.254:8888 \
        -e no_proxy=127.0.0.1,localhost,0.0.0.0 \
        -e OLLAMA_KEEP_ALIVE=-1 \
        -e OLLAMA_DEBUG=1 \
        -e OLLAMA_CONTEXT_LENGTH=32768 \
        -e OLLAMA_MAX_LOADED_MODELS=2 \
        -e OLLAMA_NUM_PARALLEL=1 \
        -e OLLAMA_MAX_QUEUE=10 \
        --name ollama ollama/ollama:0.6.4  # tested 0.6.3, 0.6.1, 0.6.0
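While the stress test runs, VRAM growth can also be observed from the outside via Ollama's /api/ps endpoint, which reports per-model memory usage. This monitoring sketch is not part of the original report; the host and polling interval are assumptions.

```typescript
// Poll Ollama's /api/ps endpoint (lists loaded models and their VRAM use)
// to watch for memory that is never released. Requires Node 18+ for the
// global fetch. Host and interval are assumptions, not from the report.

// Pure helper: render a byte count as GiB for readable output.
function toGiB(bytes: number): string {
  return (bytes / (1024 ** 3)).toFixed(2) + " GiB";
}

async function pollLoadedModels(host = "http://localhost:11434"): Promise<void> {
  const res = await fetch(`${host}/api/ps`);
  const { models } = await res.json();
  for (const m of models ?? []) {
    console.log(`${m.name}: ${toGiB(m.size_vram)} VRAM, expires ${m.expires_at}`);
  }
}

// Example: log usage every 30 seconds while the stress test runs.
// setInterval(() => pollLoadedModels(), 30_000);
```

With OLLAMA_KEEP_ALIVE=-1 (logged above as 2562047h47m...), models are expected to stay resident, so the number to watch is whether size_vram keeps climbing across generations rather than whether the models unload.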

Relevant part of the stress testing script:

async function start(index: number) {
    let sessionId: string | null = null
    let cnt = 0
    while (true) {
        sessionId = await onMessage(sessionId, index)
        cnt += 1
        if (cnt > 10) {
            // start a fresh session every 10 messages
            cnt = 0
            sessionId = null
        }
    }
}

// three concurrent workers
for (let index = 0; index < 3; index++) {
    start(index)
}
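The onMessage helper the loop relies on was not included in the report. A minimal sketch of what it presumably does, assuming Ollama's /api/chat endpoint with stream: false and an in-memory per-session history (model name and message content here are placeholders):

```typescript
// Hypothetical reconstruction of onMessage: each call appends a user turn
// to the session's history, sends the full history to /api/chat, stores the
// assistant reply, and returns the session id. Context therefore grows for
// up to 10 turns before the outer loop resets the session.

type ChatMessage = { role: string; content: string };

const sessions = new Map<string, ChatMessage[]>();

// Pure helper: mint a fresh session id for a worker.
function newSessionId(index: number): string {
  return `worker-${index}-${Date.now()}`;
}

async function onMessage(sessionId: string | null, index: number): Promise<string> {
  const id = sessionId ?? newSessionId(index);
  const history = sessions.get(id) ?? [];
  history.push({ role: "user", content: `stress message ${history.length + 1}` });

  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "gemma3:12b", messages: history, stream: false }),
  });
  const data = await res.json();
  history.push(data.message); // keep the assistant reply so context grows each turn
  sessions.set(id, history);
  return id;
}
```

Under this access pattern each session's prompt grows toward the 32768-token context limit before being discarded, which is what makes the test effective at exposing allocations that are not returned after generation.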

Relevant log output

2025/04/03 22:44:24 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:http://redacted:8888 HTTP_PROXY:http://redacted:8888 NO_PROXY: OLLAMA_CONTEXT_LENGTH:32768 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:10 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy:http://redacted:8888/ no_proxy:127.0.0.1,localhost,0.0.0.0]"
time=2025-04-03T22:44:24.768Z level=INFO source=images.go:458 msg="total blobs: 20"
time=2025-04-03T22:44:24.769Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-03T22:44:24.770Z level=INFO source=routes.go:1298 msg="Listening on [::]:11434 (version 0.6.4)"
time=2025-04-03T22:44:24.770Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.07
# ...
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
[GPU-06c90607-eae5-26dd-6fc1-d08896bd788e] CUDA totalMem 32111 mb
[GPU-06c90607-eae5-26dd-6fc1-d08896bd788e] CUDA freeMem 31607 mb
[GPU-06c90607-eae5-26dd-6fc1-d08896bd788e] Compute Capability 12.0
time=2025-04-03T22:44:25.078Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-04-03T22:44:25.078Z level=INFO source=amd_linux.go:402 msg="no compatible amdgpu devices detected"
releasing cuda driver library
time=2025-04-03T22:44:25.078Z level=INFO source=types.go:130 msg="inference compute" id=GPU-06c90607-eae5-26dd-6fc1-d08896bd788e library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="30.9 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.07
dlsym: cuInit - 0x7f0bebd0fe70
dlsym: cuDriverGetVersion - 0x7f0bebd0fe90
dlsym: cuDeviceGetCount - 0x7f0bebd0fed0
dlsym: cuDeviceGet - 0x7f0bebd0feb0
dlsym: cuDeviceGetAttribute - 0x7f0bebd0ffb0
dlsym: cuDeviceGetUuid - 0x7f0bebd0ff10
dlsym: cuDeviceGetName - 0x7f0bebd0fef0
dlsym: cuCtxCreate_v3 - 0x7f0bebd10190
dlsym: cuMemGetInfo_v2 - 0x7f0bebd10910
dlsym: cuCtxDestroy - 0x7f0bebd6eab0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
releasing cuda driver library
time=2025-04-03T22:45:06.175Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-04-03T22:45:06.175Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-04-03T22:45:06.175Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-04-03T22:45:06.175Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-04-03T22:45:06.175Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-04-03T22:45:06.175Z level=INFO source=sched.go:716 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613 gpu=GPU-06c90607-eae5-26dd-6fc1-d08896bd788e parallel=1 available=33142734848 required="2.2 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.07
dlsym: cuInit - 0x7f0bebd0fe70
dlsym: cuDriverGetVersion - 0x7f0bebd0fe90
dlsym: cuDeviceGetCount - 0x7f0bebd0fed0
dlsym: cuDeviceGet - 0x7f0bebd0feb0
dlsym: cuDeviceGetAttribute - 0x7f0bebd0ffb0
dlsym: cuDeviceGetUuid - 0x7f0bebd0ff10
dlsym: cuDeviceGetName - 0x7f0bebd0fef0
dlsym: cuCtxCreate_v3 - 0x7f0bebd10190
dlsym: cuMemGetInfo_v2 - 0x7f0bebd10910
dlsym: cuCtxDestroy - 0x7f0bebd6eab0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
releasing cuda driver library
time=2025-04-03T22:45:06.301Z level=INFO source=server.go:105 msg="system memory" total="60.7 GiB" free="54.0 GiB" free_swap="6.7 GiB"
time=2025-04-03T22:45:06.301Z level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-04-03T22:45:06.301Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-04-03T22:45:06.301Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-04-03T22:45:06.301Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-04-03T22:45:06.301Z level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-04-03T22:45:06.301Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.2 GiB" memory.required.partial="2.2 GiB" memory.required.kv="192.0 MiB" memory.required.allocations="[2.2 GiB]" memory.weights.total="1.0 GiB" memory.weights.repeating="577.2 MiB" memory.weights.nonrepeating="488.3 MiB" memory.graph.full="512.0 MiB" memory.graph.partial="512.0 MiB"
llama_model_loader: loaded meta data with 34 key-value pairs and 389 tensors from /root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 567M
llama_model_loader: - kv   3:                            general.license str              = apache-2.0
llama_model_loader: - kv   4:                               general.tags arr[str,8]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   5:                          general.languages arr[str,74]      = ["af", "ar", "az", "be", "bg", "bn", ...
llama_model_loader: - kv   6:                           bert.block_count u32              = 24
llama_model_loader: - kv   7:                        bert.context_length u32              = 8192
llama_model_loader: - kv   8:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   9:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv  10:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv  11:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                      bert.attention.causal bool             = false
llama_model_loader: - kv  14:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.scores arr[f32,250002]  = [-10000.000000, -10000.000000, -10000...
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  22:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  26:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  28:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - kv  30:        tokenizer.ggml.precompiled_charsmap arr[str,316720]  = ["A", "L", "Q", "C", "A", "A", "C", "...
llama_model_loader: - kv  31:    tokenizer.ggml.remove_extra_whitespaces bool             = true
llama_model_loader: - kv  32:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  244 tensors
llama_model_loader: - type  f16:  145 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 1.07 GiB (16.25 BPW) 
init_tokenizer: initializing tokenizer for type 4
load: control token:      3 '<unk>' is not marked as EOG
load: control token: 250001 '<mask>' is not marked as EOG
load: control token:      2 '</s>' is not marked as EOG
load: control token:      1 '<pad>' is not marked as EOG
load: control token:      0 '<s>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 5
load: token to piece cache size = 2.1668 MB
print_info: arch             = bert
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 566.70 M
print_info: general.name     = n/a
print_info: vocab type       = UGM
print_info: n_vocab          = 250002
print_info: n_merges         = 0
print_info: BOS token        = 0 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 3 '<unk>'
print_info: SEP token        = 2 '</s>'
print_info: PAD token        = 1 '<pad>'
print_info: MASK token       = 250001 '<mask>'
print_info: LF token         = 6 '▁'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
time=2025-04-03T22:45:06.632Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613 --ctx-size 32768 --batch-size 512 --n-gpu-layers 25 --verbose --threads 16 --parallel 1 --port 38465"
time=2025-04-03T22:45:06.633Z level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-03T22:45:06.633Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-03T22:45:06.634Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-03T22:45:06.641Z level=INFO source=runner.go:858 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 1463
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 183
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 55
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-04-03T22:45:06.990Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-04-03T22:45:06.991Z level=INFO source=runner.go:918 msg="Server listening on 127.0.0.1:38465"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 31607 MiB free
llama_model_loader: loaded meta data with 34 key-value pairs and 389 tensors from /root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 567M
llama_model_loader: - kv   3:                            general.license str              = apache-2.0
llama_model_loader: - kv   4:                               general.tags arr[str,8]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   5:                          general.languages arr[str,74]      = ["af", "ar", "az", "be", "bg", "bn", ...
llama_model_loader: - kv   6:                           bert.block_count u32              = 24
llama_model_loader: - kv   7:                        bert.context_length u32              = 8192
llama_model_loader: - kv   8:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   9:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv  10:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv  11:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                      bert.attention.causal bool             = false
llama_model_loader: - kv  14:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2025-04-03T22:45:07.136Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  19:                      tokenizer.ggml.scores arr[f32,250002]  = [-10000.000000, -10000.000000, -10000...
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  22:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  26:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  28:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - kv  30:        tokenizer.ggml.precompiled_charsmap arr[str,316720]  = ["A", "L", "Q", "C", "A", "A", "C", "...
llama_model_loader: - kv  31:    tokenizer.ggml.remove_extra_whitespaces bool             = true
llama_model_loader: - kv  32:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  244 tensors
llama_model_loader: - type  f16:  145 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 1.07 GiB (16.25 BPW) 
init_tokenizer: initializing tokenizer for type 4
load: control token:      3 '<unk>' is not marked as EOG
load: control token: 250001 '<mask>' is not marked as EOG
load: control token:      2 '</s>' is not marked as EOG
load: control token:      1 '<pad>' is not marked as EOG
load: control token:      0 '<s>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 5
load: token to piece cache size = 2.1668 MB
print_info: arch             = bert
print_info: vocab_only       = 0
print_info: n_ctx_train      = 8192
print_info: n_embd           = 1024
print_info: n_layer          = 24
print_info: n_head           = 16
print_info: n_head_kv        = 16
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 1.0e-05
print_info: f_norm_rms_eps   = 0.0e+00
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 4096
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 0
print_info: pooling type     = 2
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 8192
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 335M
print_info: model params     = 566.70 M
print_info: general.name     = n/a
print_info: vocab type       = UGM
print_info: n_vocab          = 250002
print_info: n_merges         = 0
print_info: BOS token        = 0 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 3 '<unk>'
print_info: SEP token        = 2 '</s>'
print_info: PAD token        = 1 '<pad>'
print_info: MASK token       = 250001 '<mask>'
print_info: LF token         = 6 '▁'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device CUDA0
load_tensors: layer   1 assigned to device CUDA0
load_tensors: layer   2 assigned to device CUDA0
load_tensors: layer   3 assigned to device CUDA0
load_tensors: layer   4 assigned to device CUDA0
load_tensors: layer   5 assigned to device CUDA0
load_tensors: layer   6 assigned to device CUDA0
load_tensors: layer   7 assigned to device CUDA0
load_tensors: layer   8 assigned to device CUDA0
load_tensors: layer   9 assigned to device CUDA0
load_tensors: layer  10 assigned to device CUDA0
load_tensors: layer  11 assigned to device CUDA0
load_tensors: layer  12 assigned to device CUDA0
load_tensors: layer  13 assigned to device CUDA0
load_tensors: layer  14 assigned to device CUDA0
load_tensors: layer  15 assigned to device CUDA0
load_tensors: layer  16 assigned to device CUDA0
load_tensors: layer  17 assigned to device CUDA0
load_tensors: layer  18 assigned to device CUDA0
load_tensors: layer  19 assigned to device CUDA0
load_tensors: layer  20 assigned to device CUDA0
load_tensors: layer  21 assigned to device CUDA0
load_tensors: layer  22 assigned to device CUDA0
load_tensors: layer  23 assigned to device CUDA0
load_tensors: layer  24 assigned to device CUDA0
load_tensors: tensor 'token_embd.weight' (f16) (and 4 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
load_tensors: offloading 24 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 25/25 layers to GPU
load_tensors:        CUDA0 model buffer size =   577.22 MiB
load_tensors:   CPU_Mapped model buffer size =   520.30 MiB
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 32768
llama_init_from_model: n_ctx_per_seq = 32768
llama_init_from_model: n_batch       = 512
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 10000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_pre_seq (32768) > n_ctx_train (8192) -- possible training context overflow
llama_kv_cache_init: kv_size = 32768, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init:      CUDA0 KV buffer size =  3072.00 MiB
llama_init_from_model: KV self size  = 3072.00 MiB, K (f16): 1536.00 MiB, V (f16): 1536.00 MiB
llama_init_from_model:  CUDA_Host  output buffer size =     0.00 MiB
llama_init_from_model:      CUDA0 compute buffer size =    25.01 MiB
llama_init_from_model:  CUDA_Host compute buffer size =     5.01 MiB
llama_init_from_model: graph nodes  = 849
llama_init_from_model: graph splits = 4 (with bs=512), 2 (with bs=1)
time=2025-04-03T22:45:10.145Z level=INFO source=server.go:619 msg="llama runner started in 3.51 seconds"
[GIN] 2025/04/03 - 22:45:10 | 200 |  4.412820418s |  10.141.106.183 | POST     "/api/embed"
[GIN] 2025/04/03 - 22:45:10 | 200 |    4.4219119s |  10.141.106.183 | POST     "/api/embed"
[GIN] 2025/04/03 - 22:45:10 | 200 |   4.42790042s |  10.141.106.183 | POST     "/api/embed"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.07
dlsym: cuInit - 0x7f0bebd0fe70
dlsym: cuDriverGetVersion - 0x7f0bebd0fe90
dlsym: cuDeviceGetCount - 0x7f0bebd0fed0
dlsym: cuDeviceGet - 0x7f0bebd0feb0
dlsym: cuDeviceGetAttribute - 0x7f0bebd0ffb0
dlsym: cuDeviceGetUuid - 0x7f0bebd0ff10
dlsym: cuDeviceGetName - 0x7f0bebd0fef0
dlsym: cuCtxCreate_v3 - 0x7f0bebd10190
dlsym: cuMemGetInfo_v2 - 0x7f0bebd10910
dlsym: cuCtxDestroy - 0x7f0bebd6eab0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
releasing cuda driver library
time=2025-04-03T22:45:10.618Z level=INFO source=sched.go:509 msg="updated VRAM based on existing loaded models" gpu=GPU-06c90607-eae5-26dd-6fc1-d08896bd788e library=cuda total="31.4 GiB" available="26.8 GiB"
time=2025-04-03T22:45:10.620Z level=INFO source=sched.go:716 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de gpu=GPU-06c90607-eae5-26dd-6fc1-d08896bd788e parallel=1 available=28745728000 required="12.7 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.133.07
dlsym: cuInit - 0x7f0bebd0fe70
dlsym: cuDriverGetVersion - 0x7f0bebd0fe90
dlsym: cuDeviceGetCount - 0x7f0bebd0fed0
dlsym: cuDeviceGet - 0x7f0bebd0feb0
dlsym: cuDeviceGetAttribute - 0x7f0bebd0ffb0
dlsym: cuDeviceGetUuid - 0x7f0bebd0ff10
dlsym: cuDeviceGetName - 0x7f0bebd0fef0
dlsym: cuCtxCreate_v3 - 0x7f0bebd10190
dlsym: cuMemGetInfo_v2 - 0x7f0bebd10910
dlsym: cuCtxDestroy - 0x7f0bebd6eab0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-04-03T22:45:10.745Z level=INFO source=server.go:105 msg="system memory" total="60.7 GiB" free="53.3 GiB" free_swap="6.7 GiB"
time=2025-04-03T22:45:10.746Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[26.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="12.7 GiB" memory.required.partial="12.7 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[12.7 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.4 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-03T22:45:10.814Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-03T22:45:10.820Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-03T22:45:10.820Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-03T22:45:10.820Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-03T22:45:10.820Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-03T22:45:10.820Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
[/usr/lib/ollama/cuda_v12]
time=2025-04-03T22:45:10.821Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 32768 --batch-size 512 --n-gpu-layers 49 --verbose --threads 16 --parallel 1 --port 42325"
time=2025-04-03T22:45:10.821Z level=INFO source=sched.go:451 msg="loaded runners" count=2
time=2025-04-03T22:45:10.821Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-03T22:45:10.821Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-03T22:45:10.829Z level=INFO source=runner.go:821 msg="starting ollama engine"
time=2025-04-03T22:45:10.829Z level=INFO source=runner.go:884 msg="Server listening on 127.0.0.1:42325"
time=2025-04-03T22:45:10.896Z level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-04-03T22:45:10.896Z level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-04-03T22:45:10.896Z level=INFO source=ggml.go:66 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 1463
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 183
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 55
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-04-03T22:45:10.943Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
# ... redacted some lines
time=2025-04-03T22:45:25.997Z level=INFO source=ggml.go:380 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
time=2025-04-03T22:45:25.997Z level=INFO source=ggml.go:380 msg="compute graph" backend=CPU buffer_type=CUDA_Host
time=2025-04-03T22:45:25.998Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-03T22:45:26.002Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-03T22:45:26.002Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-03T22:45:26.002Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-03T22:45:26.003Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-03T22:45:26.003Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-03T22:45:26.127Z level=INFO source=server.go:619 msg="llama runner started in 15.31 seconds"
[GIN] 2025/04/03 - 22:45:27 | 200 | 17.435845916s |  10.141.106.183 | POST     "/api/chat"
[GIN] 2025/04/03 - 22:45:27 | 200 |   62.319642ms |  10.141.106.183 | POST     "/api/embed"
[GIN] 2025/04/03 - 22:45:29 | 200 | 18.873696523s |  10.141.106.183 | POST     "/api/chat"
[GIN] 2025/04/03 - 22:45:29 | 200 |   62.090574ms |  10.141.106.183 | POST     "/api/embed"
[GIN] 2025/04/03 - 22:45:30 | 200 | 20.272128969s |  10.141.106.183 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.6.4
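The growth reported above can be checked outside of Ollama's own logs by sampling `nvidia-smi` around batches of requests. A minimal sketch, assuming `nvidia-smi` is available on the host; the sample CSV lines and the `parse_used` helper are illustrative, not part of the original report:

```shell
# Compare GPU memory.used before and after a batch of requests.
# parse_used extracts the "memory.used" MiB column from one line of:
#   nvidia-smi --query-gpu=timestamp,memory.used,memory.total \
#              --format=csv,noheader,nounits
parse_used() {
  echo "$1" | awk -F', ' '{print $2}'
}

before=$(parse_used "2025/04/03 22:45:10, 12884, 32607")  # sample line
after=$(parse_used "2025/04/03 22:50:10, 18340, 32607")   # sample line
if [ "$after" -gt "$before" ]; then
  echo "VRAM grew by $((after - before)) MiB"
fi
```

Note that with `OLLAMA_KEEP_ALIVE=-1` the models are resident by design, so the signal to look for is steady growth across repeated samples, not merely a high reading.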

GiteaMirror added the bug label 2026-05-04 14:52:10 -05:00

@somera commented on GitHub (Apr 3, 2025):

I see this with Ollama v0.6.3 too:

![Image](https://github.com/user-attachments/assets/19280afa-5a1d-4531-8efb-2a15505c4a57)

![Image](https://github.com/user-attachments/assets/a6ffa518-2ab2-4e6a-837d-b151a8f19591)

And I saw high VRAM usage here: #10086

After the LLM is removed from VRAM it should look like this:

![Image](https://github.com/user-attachments/assets/5fc7c2b0-fa15-40f6-9091-061446557d1c)

This was a test after an Ollama restart.

@sieveLau commented on GitHub (Apr 4, 2025):

I have also encountered this problem: `ollama ps` shows nothing, but `nvidia-smi` shows an ollama process consuming 4 GB (the amount the previously running model used).

My ollama is built from source on the main branch, and the problem first happened two days ago. It happens randomly; there is no specific step to reproduce it. A `systemctl restart ollama` frees the VRAM.
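The `ollama ps` vs. `nvidia-smi` check described above can be automated. Below is a hedged watchdog sketch, assuming a systemd-managed ollama service and `nvidia-smi` on the PATH; the restart behavior is illustrative, not an official workaround.

```shell
#!/usr/bin/env bash
set -u

# True (exit 0) when `ollama ps` output lists no loaded models
# (i.e. nothing after the header line).
ps_is_empty() {
  [ "$(printf '%s\n' "$1" | tail -n +2 | grep -c .)" -eq 0 ]
}

# Sum VRAM (MiB) held by ollama processes, from nvidia-smi CSV on stdin.
leaked_mib() {
  awk -F', ' '/ollama/ {s += $2} END {print s + 0}'
}

if command -v ollama > /dev/null && command -v nvidia-smi > /dev/null; then
  ps_out=$(ollama ps)
  vram=$(nvidia-smi --query-compute-apps=process_name,used_memory \
           --format=csv,noheader,nounits | leaked_mib)
  if ps_is_empty "$ps_out" && [ "$vram" -gt 0 ]; then
    echo "ollama ps is empty but ${vram} MiB of VRAM is still allocated; restarting"
    sudo systemctl restart ollama
  fi
fi
```

Run from cron or a systemd timer, this would only restart the service when no model is supposed to be loaded, so it should not interrupt in-flight requests under normal conditions.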

@somera commented on GitHub (Apr 7, 2025):

> I see this with Ollama v0.6.3 too:

I saw the same issue today with v0.6.4.

I have now updated to v0.6.5. An update will follow.

@somera commented on GitHub (Apr 9, 2025):

I see the same issue with Ollama v0.6.5.

@victorcasignia commented on GitHub (Apr 9, 2025):

Having the same issue. After the end token, memory and GPU usage stay high, and I had to kill ollama to free the resources. Using gemma 27b_qat at 64k context with v0.6.5 on a 3090 machine with 32 GB of system RAM.

@somera commented on GitHub (Apr 10, 2025):

Is there a solution to this problem? I have to restart Ollama 1-3 times a day to free up the used VRAM.

It's annoying because you don't notice the problem right away.

@michal0000000 commented on GitHub (Apr 14, 2025):

It seems the issue was finally identified in #10040, but the fix is expected to take some time.

@reywang18 commented on GitHub (Sep 9, 2025):

I have 64 GB of system RAM and an AMD iGPU (780M).
Somehow my Ollama setup on Ubuntu prefers GTT over VRAM (16 GB).
My GTT size is about 24 GB. Is the preference for GTT over VRAM an AMD/ROCm issue?
Does anything similar happen with Nvidia GPUs?
Reference: github-starred/ollama#68692