[GH-ISSUE #9149] Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed #52464

Closed
opened 2026-04-28 23:24:43 -05:00 by GiteaMirror · 21 comments

Originally created by @MyColorfulDays on GitHub (Feb 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9149

What is the issue?

I have already updated Ollama and the NVIDIA driver to the latest versions, but the error Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed still occurs.

Device: Legion R9000K 2021H
CPU: AMD Ryzen 9 5900HX
Memory: 64G
GPU: NVIDIA GeForce RTX 3080 Laptop GPU 16GB
OS: Windows 11 23H2

C:\Users\Admin>ollama --version
ollama version is 0.5.11

C:\Users\Admin>nvidia-smi
Sun Feb 16 20:50:24 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 572.16                 Driver Version: 572.16         CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080 ...  WDDM  |   00000000:01:00.0 Off |                  N/A |
| N/A   51C    P8             19W /  115W |       0MiB /  16384MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

C:\Users\Admin>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Fri_Jun_14_16:44:19_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.20
Build cuda_12.6.r12.6/compiler.34431801_0

C:\Users\Admin>ollama run qwen2:0.5b
Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed

Relevant log output

[GIN] 2025/02/16 - 21:00:49 | 200 |       526.8µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/16 - 21:00:49 | 200 |     19.4738ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-16T21:00:50.022+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=E:\ollama\models\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8 gpu=GPU-a1bc1654-c20b-0d6a-ecd9-1469fca534f3 parallel=4 available=15993929728 required="1.2 GiB"
time=2025-02-16T21:00:50.043+08:00 level=INFO source=server.go:100 msg="system memory" total="59.9 GiB" free="49.3 GiB" free_swap="111.4 GiB"
time=2025-02-16T21:00:50.044+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[14.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.2 GiB" memory.required.partial="1.2 GiB" memory.required.kv="96.0 MiB" memory.required.allocations="[1.2 GiB]" memory.weights.total="288.2 MiB" memory.weights.repeating="150.3 MiB" memory.weights.nonrepeating="137.9 MiB" memory.graph.full="298.5 MiB" memory.graph.partial="405.0 MiB"
time=2025-02-16T21:00:50.066+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\\Users\\Admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model E:\\ollama\\models\\blobs\\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8 --ctx-size 8192 --batch-size 512 --n-gpu-layers 25 --threads 8 --no-mmap --parallel 4 --port 54629"
time=2025-02-16T21:00:50.126+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-16T21:00:50.126+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-02-16T21:00:50.126+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-16T21:00:50.156+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-16T21:00:50.177+08:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(clang)" threads=8
time=2025-02-16T21:00:50.179+08:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:54629"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080 Laptop GPU, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\Admin\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\Admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3080 Laptop GPU) - 15253 MiB free
time=2025-02-16T21:00:50.378+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 21 key-value pairs and 290 tensors from E:\ollama\models\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen2-0.5B-Instruct
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 24
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% for message in messages %}{% if lo...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q4_0:  168 tensors
llama_model_loader: - type q8_0:    1 tensors
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 896
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_head           = 14
llm_load_print_meta: n_head_kv        = 2
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 128
llm_load_print_meta: n_embd_v_gqa     = 128
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 4864
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 494.03 M
llm_load_print_meta: model size       = 330.17 MiB (5.61 BPW)
llm_load_print_meta: general.name     = Qwen2-0.5B-Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors:    CUDA_Host model buffer size =   137.94 MiB
llm_load_tensors:        CUDA0 model buffer size =   330.19 MiB
llama_new_context_with_model: n_seq_max     = 4
llama_new_context_with_model: n_ctx         = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =    96.00 MiB
llama_new_context_with_model: KV self size  =   96.00 MiB, K (f16):   48.00 MiB, V (f16):   48.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.33 MiB
D:\a\llama.cpp\llama.cpp\ggml\src\ggml.c:1726: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
time=2025-02-16T21:00:50.881+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-02-16T21:00:50.900+08:00 level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 0xc0000409"
time=2025-02-16T21:00:51.131+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed"
[GIN] 2025/02/16 - 21:00:51 | 500 |    1.1672003s |       127.0.0.1 | POST     "/api/generate"
time=2025-02-16T21:00:56.175+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0443355 model=E:\ollama\models\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8
time=2025-02-16T21:00:56.425+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2941912 model=E:\ollama\models\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8
time=2025-02-16T21:00:56.675+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5445203 model=E:\ollama\models\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.5.11

GiteaMirror added the bug label 2026-04-28 23:24:43 -05:00

@MyColorfulDays commented on GitHub (Feb 16, 2025):

I solved it.
The llama.cpp I had installed on my Path was too old.

  1. Remove llama.cpp from the Path, or reinstall the latest version from https://github.com/ggml-org/llama.cpp/releases (see the quick check below).
  2. Restart Ollama.
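For anyone who wants to verify whether a stale llama.cpp on the Path is the culprit, here is a quick check. This is only a sketch: the DLL names below are examples of what llama.cpp release builds typically ship and may differ on your install.

```cmd
REM Ask Windows which copies of the llama.cpp/ggml DLLs it would resolve from PATH.
REM If anything outside Ollama's install directory shows up first, that copy can
REM shadow the DLLs bundled with Ollama.
where ggml-base.dll
where llama.dll

REM List every PATH entry that mentions llama.cpp (via a PowerShell one-liner).
powershell -Command "$env:Path -split ';' | Select-String -Pattern 'llama'"
```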

@teocci commented on GitHub (Apr 1, 2025):

I have the same problem. I removed the llama.cpp bin directory from the Path and it started to work. But this is not a real solution, right?


@MyColorfulDays commented on GitHub (Apr 1, 2025):

Hi @teocci,
The best solution would be for the system llama.cpp to be compatible with Ollama, but that does not seem to be implemented yet.
I don't need llama.cpp on my system Path all the time, so removing it is enough for me.
You can try to find a compatible llama.cpp version to install, or solve the underlying problem of Ollama picking up the system llama.cpp when running a model.


@Suyashd999 commented on GitHub (Apr 4, 2025):

I am still getting the error even though I never set llama.cpp in my Path.

These are all of my system's environment variables:

User Paths: https://github.com/user-attachments/assets/a71ca9fb-a031-4707-886b-b39b49ef0fee

System Paths: https://github.com/user-attachments/assets/dc5b43fd-8baf-47bc-a039-ea4876450dd0

Note: I was previously able to use Ollama on this system with no problems.

(apologies for the messy paths)

Any help is appreciated

OS
Windows

GPU
Nvidia

CPU
AMD

Ollama version
0.6.2.0

P.S. @MyColorfulDays, can you reopen the issue please? I do not have permission to do so.


@MyColorfulDays commented on GitHub (Apr 8, 2025):

Hi @Suyashd999,
Did you fix it?
I noticed there is a whisper.cpp entry in your system Path; you could try removing it to test.


@Acters commented on GitHub (Apr 11, 2025):

Thank you, removing llama.cpp from the Path fixed the error.


@mattjrutter commented on GitHub (Apr 28, 2025):

I don't have llama.cpp in my Path. This just happened to me today. This is running on an RTX 5080, so its newness has already caused me problems with the whole CUDA 12.8 stuff. Ollama 0.6.6. I tried uninstalling/reinstalling; I didn't want to install llama.cpp, but maybe I have to.


@Xurple commented on GitHub (Apr 29, 2025):

Seems to be back with CUDA 12 / Ollama 0.6.6.


@nix18 commented on GitHub (Apr 29, 2025):

> I don't have llama.cpp in my path. This just happened to me today. This is running on an RTX 5080, so its newness already caused me problems with the whole CUDA 12.8 stuff. Ollama 0.6.6. I tried uninstalling/reinstalling and I didn't want to install llama.cpp, but maybe I have to or something.

Same here. My GPU: 4070 Ti.


@hyqzz commented on GitHub (Apr 29, 2025):

If Docker is upgraded to 4.41.0, it will also cause this issue.


@grempire2 commented on GitHub (Apr 29, 2025):

Yup, it's the Docker update. It was running fine right before updating Docker. Quitting the Docker app does not help; reverting from Docker 4.41.0 to an older version resolves the error.


@TxXCOZMOXxT commented on GitHub (Apr 29, 2025):

Yes, I can confirm it's the Docker update... I knew I shouldn't have updated...

Edit: Uninstalling Docker does solve the issue...


@prototype5885 commented on GitHub (Apr 29, 2025):

It started happening for me too after I updated Docker, even though I don't run Ollama in Docker.


@zkrvf commented on GitHub (Apr 29, 2025):

I confirm I was using ollama in the console; after updating Docker it started failing with the error:
Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed


@toddp0 commented on GitHub (Apr 29, 2025):

Can confirm that downgrading Docker Desktop from 4.41 to 4.40 corrected the Ollama error in the CLI:
ollama run llama3.2:latest (or any other model)
Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed

I downloaded Docker 4.40.0 here: https://docs.docker.com/desktop/release-notes/#4400

I am running Windows 11, CUDA 12.8.93, Ollama 0.6.6, Nvidia 5090

Edit: Ollama is installed directly. Docker is installed to run open webui.


@Xurple commented on GitHub (Apr 29, 2025):

I was hoping changing Docker versions would work as it has for others. I have uninstalled everything and reinstalled Ollama 0.6.6 with Docker 4.40; sadly I still get this error. I'm not entirely sure when this began, as I haven't been using Ollama in the last week, but it worked last week. I installed the new Qwen model and discovered none of my models will load; they all fail with this error.

Edit: got it working. I had a custom install location for Ollama, and 0.6.6 was installing in the default location. Moving it to its custom location and uninstalling Docker got me up and running again.


@mattjrutter commented on GitHub (Apr 29, 2025):

I see.
Info for those troubleshooting:
I also have Docker 4.41.0. My Ollama 0.6.6 is installed directly, no Docker, running models on an RTX 5080.
I use Docker to run a local instance of Open WebUI.
I get this error when I try to run any model with Ollama. For example, ollama run gemma3:latest is met with the mentioned error, and Open WebUI gives a generic 500 server error.

Renaming DLLs may also fix it: https://github.com/ollama/ollama/issues/10469


@redsun1988 commented on GitHub (Apr 30, 2025):

The issue is still here.
Any help is greatly appreciated.

My ollama version is:

ollama -v
ollama version is 0.6.6

I run it on Windows 11 Pro.
Here is the error message:

Error: POST predict: Post "http://127.0.0.1:51470/completion": read tcp 127.0.0.1:51473->127.0.0.1:51470: wsarecv: An existing connection was forcibly closed by the remote host.

Downloading models works fine, but I get this error practically everywhere: working with the tiny deepseek-r1:1.5b or requesting embeddings from paraphrase-multilingual. Yesterday morning it worked just fine, then it suddenly started throwing this error.

I tried reinstalling Ollama and re-downloading the models.

Here is part of my server.txt

time=2025-04-30T11:10:00.205+03:00 level=INFO source=server.go:619 msg="llama runner started in 0.75 seconds"
[GIN] 2025/04/30 - 11:10:00 | 200 |    1.5762114s |       127.0.0.1 | POST     "/api/generate"
time=2025-04-30T11:10:05.275+03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
D:\a\desktop-inference-engine-llama.cpp\desktop-inference-engine-llama.cpp\native\vendor\llama.cpp\ggml\src\ggml.c:1729: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
[GIN] 2025/04/30 - 11:10:05 | 200 |     62.4184ms |       127.0.0.1 | POST     "/api/chat"
time=2025-04-30T11:10:05.386+03:00 level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/04/30 - 11:16:55 | 200 |      6.2555ms |       127.0.0.1 | GET      "/api/version"

I do not see any references to llama.cpp in my Path variables.

The system's Path environment variable: https://github.com/user-attachments/assets/21bd969d-68e8-471a-8731-fba01826fad8

The user's Path environment variable: https://github.com/user-attachments/assets/30448fd2-207d-4b51-a0a5-fb1165a9b99d

Do I need to change something else?

Here is my Docker version:

docker -v
Docker version 28.1.1, build 4eba377

@toddp0 commented on GitHub (Apr 30, 2025):

@redsun1988 In #9509 a conflict with ggml-base.dll was discovered: the Docker 4.41.0 update installs ggml-base.dll in C:\Program Files\Docker\Docker\resources\bin, and renaming this file temporarily fixes the issue for some.

I don't have llama.cpp installed, but it might be a related conflict between DLL files.
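A rough sketch of that workaround, for anyone who wants to try it. The path assumes a default Docker Desktop install; run from an elevated Command Prompt, and note that updating to Docker Desktop 4.41.1 (mentioned below), which relocates the DLLs, is the cleaner fix.

```cmd
REM 1. See which ggml-base.dll Windows resolves first from PATH.
where ggml-base.dll

REM 2. Temporarily rename the copy shipped by Docker Desktop 4.41.0 so Ollama
REM    loads its own bundled DLL instead (rename it back to undo).
ren "C:\Program Files\Docker\Docker\resources\bin\ggml-base.dll" ggml-base.dll.bak
```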


@mrchris2000 commented on GitHub (Apr 30, 2025):

The latest Docker Desktop 4.41.1 (https://docs.docker.com/desktop/release-notes/#4411) fixed this issue for me.


@marty0678 commented on GitHub (Apr 30, 2025):

The Docker release notes say: "Fixed potential conflict with 3rd-party tools by relocating llama.cpp DLLs."

I can confirm upgrading to 4.41.1 also fixed it for me.

Reference: github-starred/ollama#52464