[GH-ISSUE #9535] llama runner process has terminated: GGML_ASSERT(ctx->kv[key_id].get_type() != GGUF_TYPE_STRING) failed #6221

Closed
opened 2026-04-12 17:37:40 -05:00 by GiteaMirror · 2 comments

Originally created by @juangon on GitHub (Mar 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9535

What is the issue?

After upgrading an endpoint to the latest Ollama (0.5.13) using the official Docker image, running snowflake-embed-2 on a Tesla T4 fails with:

llama runner process has terminated: GGML_ASSERT(ctx->kv[key_id].get_type() != GGUF_TYPE_STRING) failed

I tried pulling the model again with the latest Ollama, but it still fails.
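The assertion fires in llama.cpp's GGUF reader when a metadata key is accessed as a non-string type but is stored as a string (or string array) in the file. To check which keys in the blob carry which types, here is a minimal stdlib-only sketch of a GGUF v3 metadata dumper (my own illustration of the file layout, not Ollama or llama.cpp code; `gguf_kv_types` is a hypothetical helper name):

```python
import struct

# GGUF value-type ids per the GGUF spec; 8 = string, 9 = array
SCALAR_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}
TYPE_NAMES = {0: "u8", 1: "i8", 2: "u16", 3: "i16", 4: "u32", 5: "i32",
              6: "f32", 7: "bool", 8: "str", 9: "arr", 10: "u64", 11: "i64", 12: "f64"}

def _read_string(buf, off):
    # GGUF string: u64 length followed by UTF-8 bytes
    (n,) = struct.unpack_from("<Q", buf, off)
    off += 8
    return buf[off:off + n].decode("utf-8", errors="replace"), off + n

def _skip_value(buf, off, vtype):
    # Advance past one value of the given type without decoding it
    if vtype == 8:  # string
        (n,) = struct.unpack_from("<Q", buf, off)
        return off + 8 + n
    if vtype == 9:  # array: element type (u32), count (u64), then elements
        etype, count = struct.unpack_from("<IQ", buf, off)
        off += 12
        for _ in range(count):
            off = _skip_value(buf, off, etype)
        return off
    return off + SCALAR_SIZES[vtype]

def gguf_kv_types(buf):
    """Return {metadata key: stored type name} for a GGUF byte buffer."""
    assert buf[:4] == b"GGUF", "not a GGUF file"
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", buf, 4)
    off = 4 + 4 + 8 + 8
    out = {}
    for _ in range(n_kv):
        key, off = _read_string(buf, off)
        (vtype,) = struct.unpack_from("<I", buf, off)
        off += 4
        if vtype == 9:
            (etype,) = struct.unpack_from("<I", buf, off)
            out[key] = f"arr[{TYPE_NAMES.get(etype, str(etype))}]"
        else:
            out[key] = TYPE_NAMES.get(vtype, str(vtype))
        off = _skip_value(buf, off, vtype)
    return out
```

Pointing this at the blob file from the log (the `sha256-…` path under `/root/.ollama/models/blobs/`) should show any key whose stored type disagrees with what the loader expects. This is a diagnostic aid only; the actual fix has to land in the model conversion or the loader.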

Relevant log output

2025/03/06 06:23:35 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:59m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-06T06:23:35.382Z level=INFO source=images.go:432 msg="total blobs: 3"
time=2025-03-06T06:23:35.382Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-06T06:23:35.383Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
time=2025-03-06T06:23:35.383Z level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-06T06:23:35.383Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-06T06:23:35.401Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-06T06:23:35.401Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-06T06:23:35.401Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-06T06:23:35.402Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.550.127.08]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.08
dlsym: cuInit - 0x7f649966ebc0
dlsym: cuDriverGetVersion - 0x7f649966ebe0
dlsym: cuDeviceGetCount - 0x7f649966ec20
dlsym: cuDeviceGet - 0x7f649966ec00
dlsym: cuDeviceGetAttribute - 0x7f649966ed00
dlsym: cuDeviceGetUuid - 0x7f649966ec60
dlsym: cuDeviceGetName - 0x7f649966ec40
dlsym: cuCtxCreate_v3 - 0x7f649966eee0
dlsym: cuMemGetInfo_v2 - 0x7f6499678e20
dlsym: cuCtxDestroy - 0x7f64996d3850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-03-06T06:23:35.406Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.127.08
[GPU-115740de-cf67-2e73-a461-d054d597eb22] CUDA totalMem 16112 mb
[GPU-115740de-cf67-2e73-a461-d054d597eb22] CUDA freeMem 15100 mb
[GPU-115740de-cf67-2e73-a461-d054d597eb22] Compute Capability 7.5
time=2025-03-06T06:23:35.549Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-03-06T06:23:35.549Z level=INFO source=types.go:130 msg="inference compute" id=GPU-115740de-cf67-2e73-a461-d054d597eb22 library=cuda variant=v12 compute=7.5 driver=12.4 name="Tesla T4" total="15.7 GiB" available="14.7 GiB"
time=2025-03-06T06:24:21.131Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="108.1 GiB" before.free="104.6 GiB" before.free_swap="0 B" now.total="108.1 GiB" now.free="104.6 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.08
dlsym: cuInit - 0x7f649966ebc0
dlsym: cuDriverGetVersion - 0x7f649966ebe0
dlsym: cuDeviceGetCount - 0x7f649966ec20
dlsym: cuDeviceGet - 0x7f649966ec00
dlsym: cuDeviceGetAttribute - 0x7f649966ed00
dlsym: cuDeviceGetUuid - 0x7f649966ec60
dlsym: cuDeviceGetName - 0x7f649966ec40
dlsym: cuCtxCreate_v3 - 0x7f649966eee0
dlsym: cuMemGetInfo_v2 - 0x7f6499678e20
dlsym: cuCtxDestroy - 0x7f64996d3850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-03-06T06:24:21.280Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-115740de-cf67-2e73-a461-d054d597eb22 name="Tesla T4" overhead="0 B" before.total="15.7 GiB" before.free="14.7 GiB" now.total="15.7 GiB" now.free="14.7 GiB" now.used="1012.9 MiB"
releasing cuda driver library
time=2025-03-06T06:24:21.378Z level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613
time=2025-03-06T06:24:21.378Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[14.7 GiB]"
time=2025-03-06T06:24:21.379Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-06T06:24:21.379Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-06T06:24:21.379Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-06T06:24:21.379Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-06T06:24:21.379Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613 gpu=GPU-115740de-cf67-2e73-a461-d054d597eb22 parallel=1 available=15833497600 required="1.6 GiB"
time=2025-03-06T06:24:21.379Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="108.1 GiB" before.free="104.6 GiB" before.free_swap="0 B" now.total="108.1 GiB" now.free="104.6 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.08
dlsym: cuInit - 0x7f649966ebc0
dlsym: cuDriverGetVersion - 0x7f649966ebe0
dlsym: cuDeviceGetCount - 0x7f649966ec20
dlsym: cuDeviceGet - 0x7f649966ec00
dlsym: cuDeviceGetAttribute - 0x7f649966ed00
dlsym: cuDeviceGetUuid - 0x7f649966ec60
dlsym: cuDeviceGetName - 0x7f649966ec40
dlsym: cuCtxCreate_v3 - 0x7f649966eee0
dlsym: cuMemGetInfo_v2 - 0x7f6499678e20
dlsym: cuCtxDestroy - 0x7f64996d3850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-03-06T06:24:21.521Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-115740de-cf67-2e73-a461-d054d597eb22 name="Tesla T4" overhead="0 B" before.total="15.7 GiB" before.free="14.7 GiB" now.total="15.7 GiB" now.free="14.7 GiB" now.used="1012.9 MiB"
releasing cuda driver library
time=2025-03-06T06:24:21.521Z level=INFO source=server.go:97 msg="system memory" total="108.1 GiB" free="104.6 GiB" free_swap="0 B"
time=2025-03-06T06:24:21.521Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[14.7 GiB]"
time=2025-03-06T06:24:21.522Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-06T06:24:21.522Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-06T06:24:21.522Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-06T06:24:21.522Z level=WARN source=ggml.go:136 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-06T06:24:21.522Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[14.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.6 GiB" memory.required.partial="1.6 GiB" memory.required.kv="12.0 MiB" memory.required.allocations="[1.6 GiB]" memory.weights.total="589.2 MiB" memory.weights.repeating="100.9 MiB" memory.weights.nonrepeating="488.3 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
time=2025-03-06T06:24:21.522Z level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
time=2025-03-06T06:24:21.522Z level=DEBUG source=server.go:302 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
time=2025-03-06T06:24:21.522Z level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
time=2025-03-06T06:24:21.522Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613 --ctx-size 2048 --batch-size 512 --n-gpu-layers 25 --verbose --threads 16 --parallel 1 --port 44397"
time=2025-03-06T06:24:21.523Z level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama/cuda_v12:/usr/lib/ollama CUDA_VISIBLE_DEVICES=GPU-115740de-cf67-2e73-a461-d054d597eb22]"
time=2025-03-06T06:24:21.523Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-06T06:24:21.523Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-06T06:24:21.523Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-06T06:24:21.556Z level=INFO source=runner.go:931 msg="starting go runner"
time=2025-03-06T06:24:21.556Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla T4, compute capability 7.5, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-03-06T06:24:21.593Z level=DEBUG source=ggml.go:78 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib
time=2025-03-06T06:24:21.593Z level=DEBUG source=ggml.go:78 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib64
time=2025-03-06T06:24:21.593Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/lib/ollama
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 55
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-03-06T06:24:21.616Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=16
time=2025-03-06T06:24:21.616Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:44397"
llama_model_load_from_file_impl: using device CUDA0 (Tesla T4) - 15100 MiB free
llama_model_loader: loaded meta data with 34 key-value pairs and 389 tensors from /root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 567M
llama_model_loader: - kv   3:                            general.license str              = apache-2.0
llama_model_loader: - kv   4:                               general.tags arr[str,8]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   5:                          general.languages arr[str,74]      = ["af", "ar", "az", "be", "bg", "bn", ...
llama_model_loader: - kv   6:                           bert.block_count u32              = 24
llama_model_loader: - kv   7:                        bert.context_length u32              = 8192
llama_model_loader: - kv   8:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   9:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv  10:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv  11:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                      bert.attention.causal bool             = false
llama_model_loader: - kv  14:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
time=2025-03-06T06:24:21.775Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.scores arr[f32,250002]  = [-10000.000000, -10000.000000, -10000...
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  22:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  26:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  28:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - kv  30:        tokenizer.ggml.precompiled_charsmap arr[str,316720]  = ["A", "L", "Q", "C", "A", "A", "C", "...
llama_model_loader: - kv  31:    tokenizer.ggml.remove_extra_whitespaces bool             = true
llama_model_loader: - kv  32:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  244 tensors
llama_model_loader: - type  f16:  145 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 1.07 GiB (16.25 BPW) 
gguf.cpp:780: GGML_ASSERT(ctx->kv[key_id].get_type() != GGUF_TYPE_STRING) failed
/usr/bin/ollama(+0x10cc808)[0x55d3489db808]
/usr/bin/ollama(+0x10ccb86)[0x55d3489dbb86]
/usr/bin/ollama(+0x10e4c9e)[0x55d3489f3c9e]
/usr/bin/ollama(+0x1058716)[0x55d348967716]
/usr/bin/ollama(+0x1019100)[0x55d348928100]
/usr/bin/ollama(+0x1065b31)[0x55d348974b31]
/usr/bin/ollama(+0x106619b)[0x55d34897519b]
/usr/bin/ollama(+0xf70c72)[0x55d34887fc72]
/usr/bin/ollama(+0x328c21)[0x55d347c37c21]
SIGABRT: abort
PC=0x7f444e22700b m=9 sigcode=18446744073709551610
signal arrived during cgo execution

goroutine 14 gp=0xc000103340 m=9 mp=0xc000100808 [syscall]:
runtime.cgocall(0x55d34887fc30, 0xc000093c00)
	runtime/cgocall.go:167 +0x4b fp=0xc000093bd8 sp=0xc000093ba0 pc=0x55d347c2d58b
github.com/ollama/ollama/llama._Cfunc_llama_model_load_from_file(0x7f43e4000b60, {0x0, 0x19, 0x1, 0x0, 0x0, 0x55d34887f510, 0xc000714128, 0x0, 0x0, ...})
	_cgo_gotypes.go:754 +0x4b fp=0xc000093c00 sp=0xc000093bd8 pc=0x55d347fb3f6b
github.com/ollama/ollama/llama.LoadModelFromFile.func1(...)
	github.com/ollama/ollama/llama/llama.go:265
github.com/ollama/ollama/llama.LoadModelFromFile({0x7fffa77f3b61, 0x62}, {0x19, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000464930, ...})
	github.com/ollama/ollama/llama/llama.go:265 +0x36b fp=0xc000093dc8 sp=0xc000093c00 pc=0x55d347fb6aeb
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004b62d0, {0x19, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000464930, 0x0}, ...)
	github.com/ollama/ollama/runner/llamarunner/runner.go:849 +0x9b fp=0xc000093f10 sp=0xc000093dc8 pc=0x55d347fd1ddb
github.com/ollama/ollama/runner/llamarunner.Execute.gowrap1()
	github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xda fp=0xc000093fe0 sp=0xc000093f10 pc=0x55d347fd369a
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000093fe8 sp=0xc000093fe0 pc=0x55d347c37fa1
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
	github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7

goroutine 1 gp=0xc000002380 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc00012d5b8 sp=0xc00012d598 pc=0x55d347c3086e
runtime.netpollblock(0xc00012d608?, 0x47bca1a6?, 0xd3?)
	runtime/netpoll.go:575 +0xf7 fp=0xc00012d5f0 sp=0xc00012d5b8 pc=0x55d347bf5677
internal/poll.runtime_pollWait(0x7f43fef15eb0, 0x72)
	runtime/netpoll.go:351 +0x85 fp=0xc00012d610 sp=0xc00012d5f0 pc=0x55d347c2fa85
internal/poll.(*pollDesc).wait(0xc000480080?, 0x900000036?, 0x0)
	internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00012d638 sp=0xc00012d610 pc=0x55d347cb6f07
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000480080)
	internal/poll/fd_unix.go:620 +0x295 fp=0xc00012d6e0 sp=0xc00012d638 pc=0x55d347cbc2d5
net.(*netFD).accept(0xc000480080)
	net/fd_unix.go:172 +0x29 fp=0xc00012d798 sp=0xc00012d6e0 pc=0x55d347d2e749
net.(*TCPListener).accept(0xc0005aec00)
	net/tcpsock_posix.go:159 +0x1b fp=0xc00012d7e8 sp=0xc00012d798 pc=0x55d347d440fb
net.(*TCPListener).Accept(0xc0005aec00)
	net/tcpsock.go:380 +0x30 fp=0xc00012d818 sp=0xc00012d7e8 pc=0x55d347d42fb0
net/http.(*onceCloseListener).Accept(0xc0004b63f0?)
	<autogenerated>:1 +0x24 fp=0xc00012d830 sp=0xc00012d818 pc=0x55d347f59e64
net/http.(*Server).Serve(0xc000201600, {0x55d348ede528, 0xc0005aec00})
	net/http/server.go:3424 +0x30c fp=0xc00012d960 sp=0xc00012d830 pc=0x55d347f3172c
github.com/ollama/ollama/runner/llamarunner.Execute({0xc000034140, 0xf, 0x10})
	github.com/ollama/ollama/runner/llamarunner/runner.go:993 +0x116a fp=0xc00012dd08 sp=0xc00012d960 pc=0x55d347fd32ca
github.com/ollama/ollama/runner.Execute({0xc000034130?, 0x0?, 0x0?})
	github.com/ollama/ollama/runner/runner.go:22 +0xd4 fp=0xc00012dd30 sp=0xc00012dd08 pc=0x55d3481fd9b4
github.com/ollama/ollama/cmd.NewCLI.func2(0xc000201400?, {0x55d348a5d055?, 0x4?, 0x55d348a5d059?})
	github.com/ollama/ollama/cmd/cmd.go:1281 +0x45 fp=0xc00012dd58 sp=0xc00012dd30 pc=0x55d348812e45
github.com/spf13/cobra.(*Command).execute(0xc00079b508, {0xc00014ea50, 0xf, 0xf})
	github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc00012de78 sp=0xc00012dd58 pc=0x55d347da79dc
github.com/spf13/cobra.(*Command).ExecuteC(0xc00015ec08)
	github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc00012df30 sp=0xc00012de78 pc=0x55d347da8225
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
	github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
	github.com/ollama/ollama/main.go:12 +0x4d fp=0xc00012df50 sp=0xc00012df30 pc=0x55d3488131ad
runtime.main()
	runtime/proc.go:283 +0x29d fp=0xc00012dfe0 sp=0xc00012df50 pc=0x55d347bfcc7d
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00012dfe8 sp=0xc00012dfe0 pc=0x55d347c37fa1

goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000084fa8 sp=0xc000084f88 pc=0x55d347c3086e
runtime.goparkunlock(...)
	runtime/proc.go:441
runtime.forcegchelper()
	runtime/proc.go:348 +0xb8 fp=0xc000084fe0 sp=0xc000084fa8 pc=0x55d347bfcfb8
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000084fe8 sp=0xc000084fe0 pc=0x55d347c37fa1
created by runtime.init.7 in goroutine 1
	runtime/proc.go:336 +0x1a

goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000085780 sp=0xc000085760 pc=0x55d347c3086e
runtime.goparkunlock(...)
	runtime/proc.go:441
runtime.bgsweep(0xc00003a080)
	runtime/mgcsweep.go:316 +0xdf fp=0xc0000857c8 sp=0xc000085780 pc=0x55d347be77df
runtime.gcenable.gowrap1()
	runtime/mgc.go:204 +0x25 fp=0xc0000857e0 sp=0xc0000857c8 pc=0x55d347bdbbc5
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000857e8 sp=0xc0000857e0 pc=0x55d347c37fa1
created by runtime.gcenable in goroutine 1
	runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x55d348c0f9d8?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000085f78 sp=0xc000085f58 pc=0x55d347c3086e
runtime.goparkunlock(...)
	runtime/proc.go:441
runtime.(*scavengerState).park(0x55d349730980)
	runtime/mgcscavenge.go:425 +0x49 fp=0xc000085fa8 sp=0xc000085f78 pc=0x55d347be5229
runtime.bgscavenge(0xc00003a080)
	runtime/mgcscavenge.go:658 +0x59 fp=0xc000085fc8 sp=0xc000085fa8 pc=0x55d347be57b9
runtime.gcenable.gowrap2()
	runtime/mgc.go:205 +0x25 fp=0xc000085fe0 sp=0xc000085fc8 pc=0x55d347bdbb65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000085fe8 sp=0xc000085fe0 pc=0x55d347c37fa1
created by runtime.gcenable in goroutine 1
	runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
runtime.gopark(0x1b8?, 0xc000002380?, 0x1?, 0x23?, 0xc000084688?)
	runtime/proc.go:435 +0xce fp=0xc000084630 sp=0xc000084610 pc=0x55d347c3086e
runtime.runfinq()
	runtime/mfinal.go:196 +0x107 fp=0xc0000847e0 sp=0xc000084630 pc=0x55d347bdab87
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000847e8 sp=0xc0000847e0 pc=0x55d347c37fa1
created by runtime.createfing in goroutine 1
	runtime/mfinal.go:166 +0x3d

goroutine 6 gp=0xc0001de8c0 m=nil [chan receive]:
runtime.gopark(0xc0000ffae0?, 0xc000508018?, 0x60?, 0x67?, 0x55d347d15488?)
	runtime/proc.go:435 +0xce fp=0xc000086718 sp=0xc0000866f8 pc=0x55d347c3086e
runtime.chanrecv(0xc0000b8380, 0x0, 0x1)
	runtime/chan.go:664 +0x445 fp=0xc000086790 sp=0xc000086718 pc=0x55d347bccd85
runtime.chanrecv1(0x0?, 0x0?)
	runtime/chan.go:506 +0x12 fp=0xc0000867b8 sp=0xc000086790 pc=0x55d347bcc912
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
	runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
	runtime/mgc.go:1799 +0x2f fp=0xc0000867e0 sp=0xc0000867b8 pc=0x55d347bded6f
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000867e8 sp=0xc0000867e0 pc=0x55d347c37fa1
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
	runtime/mgc.go:1794 +0x85

goroutine 7 gp=0xc0001defc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000086f38 sp=0xc000086f18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc000086fc8 sp=0xc000086f38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc000086fe0 sp=0xc000086fc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000086fe8 sp=0xc000086fe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000080738 sp=0xc000080718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc0000807c8 sp=0xc000080738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc0000807e0 sp=0xc0000807c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000807e8 sp=0xc0000807e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 8 gp=0xc0001df180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000087738 sp=0xc000087718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc0000877c8 sp=0xc000087738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc0000877e0 sp=0xc0000877c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000877e8 sp=0xc0000877e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000080f38 sp=0xc000080f18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc000080fc8 sp=0xc000080f38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc000080fe0 sp=0xc000080fc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000080fe8 sp=0xc000080fe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 9 gp=0xc0001df340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000087f38 sp=0xc000087f18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc000087fc8 sp=0xc000087f38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc000087fe0 sp=0xc000087fc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000087fe8 sp=0xc000087fe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 20 gp=0xc000504380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000081738 sp=0xc000081718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc0000817c8 sp=0xc000081738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc0000817e0 sp=0xc0000817c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000817e8 sp=0xc0000817e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 10 gp=0xc0001df500 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000116738 sp=0xc000116718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc0001167c8 sp=0xc000116738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc0001167e0 sp=0xc0001167c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0001167e8 sp=0xc0001167e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 11 gp=0xc0001df6c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000116f38 sp=0xc000116f18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc000116fc8 sp=0xc000116f38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc000116fe0 sp=0xc000116fc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000116fe8 sp=0xc000116fe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 12 gp=0xc0001df880 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000117738 sp=0xc000117718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc0001177c8 sp=0xc000117738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc0001177e0 sp=0xc0001177c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0001177e8 sp=0xc0001177e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 36 gp=0xc000102700 m=nil [GC worker (idle)]:
runtime.gopark(0xfffe6e96771?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc00011b738 sp=0xc00011b718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc00011b7c8 sp=0xc00011b738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc00011b7e0 sp=0xc00011b7c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00011b7e8 sp=0xc00011b7e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 21 gp=0xc000504540 m=nil [GC worker (idle)]:
runtime.gopark(0xfffe6e69187?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000081f38 sp=0xc000081f18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc000081fc8 sp=0xc000081f38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc000081fe0 sp=0xc000081fc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000081fe8 sp=0xc000081fe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 13 gp=0xc0001dfa40 m=nil [GC worker (idle)]:
runtime.gopark(0x55d3497df100?, 0x1?, 0xe4?, 0x6f?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000117f38 sp=0xc000117f18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc000117fc8 sp=0xc000117f38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc000117fe0 sp=0xc000117fc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc000117fe8 sp=0xc000117fe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 37 gp=0xc0001028c0 m=nil [GC worker (idle)]:
runtime.gopark(0xfffe6e97538?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc00011bf38 sp=0xc00011bf18 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc00011bfc8 sp=0xc00011bf38 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc00011bfe0 sp=0xc00011bfc8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 22 gp=0xc000504700 m=nil [GC worker (idle)]:
runtime.gopark(0xfffe6e95f06?, 0x0?, 0x0?, 0x0?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000082738 sp=0xc000082718 pc=0x55d347c3086e
runtime.gcBgMarkWorker(0xc0000b97a0)
	runtime/mgc.go:1423 +0xe9 fp=0xc0000827c8 sp=0xc000082738 pc=0x55d347bde089
runtime.gcBgMarkStartWorkers.gowrap1()
	runtime/mgc.go:1339 +0x25 fp=0xc0000827e0 sp=0xc0000827c8 pc=0x55d347bddf65
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0000827e8 sp=0xc0000827e0 pc=0x55d347c37fa1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	runtime/mgc.go:1339 +0x105

goroutine 15 gp=0xc000103500 m=nil [sync.WaitGroup.Wait]:
runtime.gopark(0x0?, 0x0?, 0x60?, 0x40?, 0x0?)
	runtime/proc.go:435 +0xce fp=0xc000118618 sp=0xc0001185f8 pc=0x55d347c3086e
runtime.goparkunlock(...)
	runtime/proc.go:441
runtime.semacquire1(0xc0004b62d8, 0x0, 0x1, 0x0, 0x18)
	runtime/sema.go:188 +0x229 fp=0xc000118680 sp=0xc000118618 pc=0x55d347c10249
sync.runtime_SemacquireWaitGroup(0x0?)
	runtime/sema.go:110 +0x25 fp=0xc0001186b8 sp=0xc000118680 pc=0x55d347c32285
sync.(*WaitGroup).Wait(0x0?)
	sync/waitgroup.go:118 +0x48 fp=0xc0001186e0 sp=0xc0001186b8 pc=0x55d347c43a08
github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0004b62d0, {0x55d348ee07a0, 0xc000139e00})
	github.com/ollama/ollama/runner/llamarunner/runner.go:316 +0x47 fp=0xc0001187b8 sp=0xc0001186e0 pc=0x55d347fcea67
github.com/ollama/ollama/runner/llamarunner.Execute.gowrap2()
	github.com/ollama/ollama/runner/llamarunner/runner.go:973 +0x28 fp=0xc0001187e0 sp=0xc0001187b8 pc=0x55d347fd3588
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x55d347c37fa1
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
	github.com/ollama/ollama/runner/llamarunner/runner.go:973 +0xd97

goroutine 16 gp=0xc0001036c0 m=nil [IO wait]:
runtime.gopark(0x55d347cba505?, 0xc000480100?, 0x40?, 0xfa?, 0xb?)
	runtime/proc.go:435 +0xce fp=0xc00023f948 sp=0xc00023f928 pc=0x55d347c3086e
runtime.netpollblock(0x55d347c53cf8?, 0x47bca1a6?, 0xd3?)
	runtime/netpoll.go:575 +0xf7 fp=0xc00023f980 sp=0xc00023f948 pc=0x55d347bf5677
internal/poll.runtime_pollWait(0x7f43fef15d98, 0x72)
	runtime/netpoll.go:351 +0x85 fp=0xc00023f9a0 sp=0xc00023f980 pc=0x55d347c2fa85
internal/poll.(*pollDesc).wait(0xc000480100?, 0xc000184000?, 0x0)
	internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00023f9c8 sp=0xc00023f9a0 pc=0x55d347cb6f07
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000480100, {0xc000184000, 0x1000, 0x1000})
	internal/poll/fd_unix.go:165 +0x27a fp=0xc00023fa60 sp=0xc00023f9c8 pc=0x55d347cb81fa
net.(*netFD).Read(0xc000480100, {0xc000184000?, 0xc00023fad0?, 0x55d347cb73c5?})
	net/fd_posix.go:55 +0x25 fp=0xc00023faa8 sp=0xc00023fa60 pc=0x55d347d2c7a5
net.(*conn).Read(0xc00012e460, {0xc000184000?, 0x0?, 0x0?})
	net/net.go:194 +0x45 fp=0xc00023faf0 sp=0xc00023faa8 pc=0x55d347d3ab65
net/http.(*connReader).Read(0xc0000c4420, {0xc000184000, 0x1000, 0x1000})
	net/http/server.go:798 +0x159 fp=0xc00023fb40 sp=0xc00023faf0 pc=0x55d347f265d9
bufio.(*Reader).fill(0xc0005c40c0)
	bufio/bufio.go:113 +0x103 fp=0xc00023fb78 sp=0xc00023fb40 pc=0x55d347d52303
bufio.(*Reader).Peek(0xc0005c40c0, 0x4)
	bufio/bufio.go:152 +0x53 fp=0xc00023fb98 sp=0xc00023fb78 pc=0x55d347d52433
net/http.(*conn).serve(0xc0004b63f0, {0x55d348ee0768, 0xc0007047b0})
	net/http/server.go:2137 +0x785 fp=0xc00023ffb8 sp=0xc00023fb98 pc=0x55d347f2c3c5
net/http.(*Server).Serve.gowrap3()
	net/http/server.go:3454 +0x28 fp=0xc00023ffe0 sp=0xc00023ffb8 pc=0x55d347f31b28
runtime.goexit({})
	runtime/asm_amd64.s:1700 +0x1 fp=0xc00023ffe8 sp=0xc00023ffe0 pc=0x55d347c37fa1
created by net/http.(*Server).Serve in goroutine 1
	net/http/server.go:3454 +0x485

rax    0x0
rbx    0x7f43ff7fe700
rcx    0x7f444e22700b
rdx    0x0
rdi    0x2
rsi    0x7f43ff7fd470
rbp    0x55d348c2cf4d
rsp    0x7f43ff7fd470
r8     0x0
r9     0x7f43ff7fd470
r10    0x8
r11    0x246
r12    0x55d348c60d4a
r13    0x30c
r14    0x7f43ff7fd8d0
r15    0x7f43ff7fd8b0
rip    0x7f444e22700b
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
time=2025-03-06T06:24:22.026Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-06T06:24:22.036Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-03-06T06:24:22.276Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(ctx->kv[key_id].get_type() != GGUF_TYPE_STRING) failed"
time=2025-03-06T06:24:22.276Z level=DEBUG source=sched.go:459 msg="triggering expiration for failed load" model=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613
time=2025-03-06T06:24:22.276Z level=DEBUG source=sched.go:361 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613
time=2025-03-06T06:24:22.276Z level=DEBUG source=sched.go:376 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613
[GIN] 2025/03/06 - 06:24:22 | 500 |  1.147102014s |      10.224.0.4 | POST     "/v1/embeddings"
time=2025-03-06T06:24:22.276Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="108.1 GiB" before.free="104.6 GiB" before.free_swap="0 B" now.total="108.1 GiB" now.free="104.6 GiB" now.free_swap="0 B"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.13
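For anyone trying to reproduce this: the log shows the crash fires while loading the model to serve a `POST /v1/embeddings` request (the `[GIN] ... | 500 | ... POST "/v1/embeddings"` line above). A minimal request sketch follows — the model name (`snowflake-arctic-embed2`) and input text are assumptions, not taken from the log, so substitute whatever tag `ollama list` shows for the affected model.

```python
import json

# Hypothetical payload for the failing call seen in the log:
#   POST /v1/embeddings  ->  500
# Model name and input are assumptions; adjust to the locally pulled tag.
payload = {
    "model": "snowflake-arctic-embed2",
    "input": "test sentence for embedding",
}

body = json.dumps(payload)
print(body)

# Actually sending it requires a running Ollama server (not done here), e.g.:
#   curl http://localhost:11434/v1/embeddings \
#        -H "Content-Type: application/json" -d "$BODY"
```

Any client hitting the OpenAI-compatible embeddings endpoint with this shape should trigger the same model load and hence the same `GGML_ASSERT` abort.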

runtime/proc.go:435 +0xce fp=0xc000081f38 sp=0xc000081f18 pc=0x55d347c3086e runtime.gcBgMarkWorker(0xc0000b97a0) runtime/mgc.go:1423 +0xe9 fp=0xc000081fc8 sp=0xc000081f38 pc=0x55d347bde089 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000081fe0 sp=0xc000081fc8 pc=0x55d347bddf65 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000081fe8 sp=0xc000081fe0 pc=0x55d347c37fa1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 13 gp=0xc0001dfa40 m=nil [GC worker (idle)]: runtime.gopark(0x55d3497df100?, 0x1?, 0xe4?, 0x6f?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000117f38 sp=0xc000117f18 pc=0x55d347c3086e runtime.gcBgMarkWorker(0xc0000b97a0) runtime/mgc.go:1423 +0xe9 fp=0xc000117fc8 sp=0xc000117f38 pc=0x55d347bde089 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000117fe0 sp=0xc000117fc8 pc=0x55d347bddf65 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000117fe8 sp=0xc000117fe0 pc=0x55d347c37fa1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 37 gp=0xc0001028c0 m=nil [GC worker (idle)]: runtime.gopark(0xfffe6e97538?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011bf38 sp=0xc00011bf18 pc=0x55d347c3086e runtime.gcBgMarkWorker(0xc0000b97a0) runtime/mgc.go:1423 +0xe9 fp=0xc00011bfc8 sp=0xc00011bf38 pc=0x55d347bde089 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011bfe0 sp=0xc00011bfc8 pc=0x55d347bddf65 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x55d347c37fa1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 22 gp=0xc000504700 m=nil [GC worker (idle)]: runtime.gopark(0xfffe6e95f06?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc000082738 sp=0xc000082718 pc=0x55d347c3086e runtime.gcBgMarkWorker(0xc0000b97a0) runtime/mgc.go:1423 +0xe9 fp=0xc0000827c8 sp=0xc000082738 pc=0x55d347bde089 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0000827e0 sp=0xc0000827c8 pc=0x55d347bddf65 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000827e8 sp=0xc0000827e0 pc=0x55d347c37fa1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 15 gp=0xc000103500 m=nil [sync.WaitGroup.Wait]: runtime.gopark(0x0?, 0x0?, 0x60?, 0x40?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000118618 sp=0xc0001185f8 pc=0x55d347c3086e runtime.goparkunlock(...) runtime/proc.go:441 runtime.semacquire1(0xc0004b62d8, 0x0, 0x1, 0x0, 0x18) runtime/sema.go:188 +0x229 fp=0xc000118680 sp=0xc000118618 pc=0x55d347c10249 sync.runtime_SemacquireWaitGroup(0x0?) runtime/sema.go:110 +0x25 fp=0xc0001186b8 sp=0xc000118680 pc=0x55d347c32285 sync.(*WaitGroup).Wait(0x0?) sync/waitgroup.go:118 +0x48 fp=0xc0001186e0 sp=0xc0001186b8 pc=0x55d347c43a08 github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0004b62d0, {0x55d348ee07a0, 0xc000139e00}) github.com/ollama/ollama/runner/llamarunner/runner.go:316 +0x47 fp=0xc0001187b8 sp=0xc0001186e0 pc=0x55d347fcea67 github.com/ollama/ollama/runner/llamarunner.Execute.gowrap2() github.com/ollama/ollama/runner/llamarunner/runner.go:973 +0x28 fp=0xc0001187e0 sp=0xc0001187b8 pc=0x55d347fd3588 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x55d347c37fa1 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 github.com/ollama/ollama/runner/llamarunner/runner.go:973 +0xd97 goroutine 16 gp=0xc0001036c0 m=nil [IO wait]: runtime.gopark(0x55d347cba505?, 0xc000480100?, 0x40?, 0xfa?, 0xb?) runtime/proc.go:435 +0xce fp=0xc00023f948 sp=0xc00023f928 pc=0x55d347c3086e runtime.netpollblock(0x55d347c53cf8?, 0x47bca1a6?, 0xd3?) 
runtime/netpoll.go:575 +0xf7 fp=0xc00023f980 sp=0xc00023f948 pc=0x55d347bf5677 internal/poll.runtime_pollWait(0x7f43fef15d98, 0x72) runtime/netpoll.go:351 +0x85 fp=0xc00023f9a0 sp=0xc00023f980 pc=0x55d347c2fa85 internal/poll.(*pollDesc).wait(0xc000480100?, 0xc000184000?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00023f9c8 sp=0xc00023f9a0 pc=0x55d347cb6f07 internal/poll.(*pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000480100, {0xc000184000, 0x1000, 0x1000}) internal/poll/fd_unix.go:165 +0x27a fp=0xc00023fa60 sp=0xc00023f9c8 pc=0x55d347cb81fa net.(*netFD).Read(0xc000480100, {0xc000184000?, 0xc00023fad0?, 0x55d347cb73c5?}) net/fd_posix.go:55 +0x25 fp=0xc00023faa8 sp=0xc00023fa60 pc=0x55d347d2c7a5 net.(*conn).Read(0xc00012e460, {0xc000184000?, 0x0?, 0x0?}) net/net.go:194 +0x45 fp=0xc00023faf0 sp=0xc00023faa8 pc=0x55d347d3ab65 net/http.(*connReader).Read(0xc0000c4420, {0xc000184000, 0x1000, 0x1000}) net/http/server.go:798 +0x159 fp=0xc00023fb40 sp=0xc00023faf0 pc=0x55d347f265d9 bufio.(*Reader).fill(0xc0005c40c0) bufio/bufio.go:113 +0x103 fp=0xc00023fb78 sp=0xc00023fb40 pc=0x55d347d52303 bufio.(*Reader).Peek(0xc0005c40c0, 0x4) bufio/bufio.go:152 +0x53 fp=0xc00023fb98 sp=0xc00023fb78 pc=0x55d347d52433 net/http.(*conn).serve(0xc0004b63f0, {0x55d348ee0768, 0xc0007047b0}) net/http/server.go:2137 +0x785 fp=0xc00023ffb8 sp=0xc00023fb98 pc=0x55d347f2c3c5 net/http.(*Server).Serve.gowrap3() net/http/server.go:3454 +0x28 fp=0xc00023ffe0 sp=0xc00023ffb8 pc=0x55d347f31b28 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00023ffe8 sp=0xc00023ffe0 pc=0x55d347c37fa1 created by net/http.(*Server).Serve in goroutine 1 net/http/server.go:3454 +0x485 rax 0x0 rbx 0x7f43ff7fe700 rcx 0x7f444e22700b rdx 0x0 rdi 0x2 rsi 0x7f43ff7fd470 rbp 0x55d348c2cf4d rsp 0x7f43ff7fd470 r8 0x0 r9 0x7f43ff7fd470 r10 0x8 r11 0x246 r12 0x55d348c60d4a r13 0x30c r14 0x7f43ff7fd8d0 r15 0x7f43ff7fd8b0 rip 0x7f444e22700b rflags 0x246 cs 0x33 fs 0x0 gs 0x0 
time=2025-03-06T06:24:22.026Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-06T06:24:22.036Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-03-06T06:24:22.276Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: GGML_ASSERT(ctx->kv[key_id].get_type() != GGUF_TYPE_STRING) failed"
time=2025-03-06T06:24:22.276Z level=DEBUG source=sched.go:459 msg="triggering expiration for failed load" model=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613
time=2025-03-06T06:24:22.276Z level=DEBUG source=sched.go:361 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613
time=2025-03-06T06:24:22.276Z level=DEBUG source=sched.go:376 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-8c625c9569c3c799f5f9595b5a141f91d224233055608189d66746347c14e613
[GIN] 2025/03/06 - 06:24:22 | 500 | 1.147102014s | 10.224.0.4 | POST "/v1/embeddings"
time=2025-03-06T06:24:22.276Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="108.1 GiB" before.free="104.6 GiB" before.free_swap="0 B" now.total="108.1 GiB" now.free="104.6 GiB" now.free_swap="0 B"
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.5.13
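For context on what the failed assertion guards: this is not the actual llama.cpp code, but a minimal Python stand-in for GGUF's typed key/value metadata store. The `GGUFType` values follow the GGUF spec's enumeration (UINT32 = 4, STRING = 8); the class and key names are illustrative only. The sketch shows why a typed getter aborts when a metadata key the loader expects to be numeric is stored as a string in the model file.

```python
from enum import Enum


class GGUFType(Enum):
    """Subset of GGUF metadata value types (per the GGUF spec enumeration)."""
    UINT32 = 4
    STRING = 8


class GGUFKV:
    """Toy stand-in for llama.cpp's gguf_context key/value store."""

    def __init__(self):
        self._kv = {}

    def set(self, key, type_, value):
        self._kv[key] = (type_, value)

    def get_u32(self, key):
        # Mirrors GGML_ASSERT(ctx->kv[key_id].get_type() != GGUF_TYPE_STRING):
        # numeric getters refuse to read a key whose stored type is string.
        type_, value = self._kv[key]
        assert type_ != GGUFType.STRING, (
            f"key {key!r} is stored as a string, expected a number")
        return value


kv = GGUFKV()
kv.set("bert.context_length", GGUFType.UINT32, 512)
print(kv.get_u32("bert.context_length"))  # 512

# A key whose stored type doesn't match what the loader expects
# trips the assertion, which is what kills the runner process.
kv.set("tokenizer.ggml.model", GGUFType.STRING, "bert")
try:
    kv.get_u32("tokenizer.ggml.model")
except AssertionError as e:
    print("assert tripped:", e)
```

In the real crash the mismatch lives in the model blob itself, which is why re-pulling the same blob does not help: the fix has to come from a re-published model file or an Ollama build that reads the newer metadata layout.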
GiteaMirror added the bug label 2026-04-12 17:37:40 -05:00
@Ninecheese commented on GitHub (Mar 6, 2025):

Getting the same error also

<!-- gh-comment-id:2702934078 -->
@jmorganca commented on GitHub (Mar 6, 2025):

Merging with https://github.com/ollama/ollama/issues/9511

<!-- gh-comment-id:2702940061 -->
Reference: github-starred/ollama#6221