[GH-ISSUE #13018] too much available memory reported #70679

Open
opened 2026-05-04 22:31:40 -05:00 by GiteaMirror · 16 comments
Owner

Originally created by @binarynoise on GitHub (Nov 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13018

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Even though other applications (e.g. the desktop environment and browser) are using more than 1 GB of VRAM, ollama reports the memory as almost entirely free. This leads to incorrect layer assignments, and the runner crashes because it tries to allocate more memory than is actually available. Adding 3 GB of GPU overhead works around the problem, but it is only a workaround.

If I don't add the overhead, the model still loads, but the runner crashes right afterwards with: graph_reserve: failed to allocate compute buffers
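For reference, one way to apply the overhead workaround on a systemd-managed install (matching the OLLAMA_GPU_OVERHEAD:3000000000 visible in the server config log below) is a drop-in override. This is a config sketch, not a recommendation; the 3 GB value is the reporter's chosen margin:

```shell
# Open an editor for a drop-in override of the ollama unit
sudo systemctl edit ollama.service

# In the override, add (OLLAMA_GPU_OVERHEAD is in bytes):
#   [Service]
#   Environment="OLLAMA_GPU_OVERHEAD=3000000000"

# Reload unit files and restart the service to pick up the change
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```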

Relevant log output

total="16.0 GiB" available="15.7 GiB"

level=INFO source=server.go:522 msg=offload library=ROCm layers.requested=-1 layers.model=65 layers.offload=27 layers.split=[27] memory.available="[15.7 GiB]" memory.gpu_overhead="2.8 GiB" memory.required.full="28.2 GiB" memory.required.partial="12.6 GiB" memory.required.kv="2.0 GiB" memory.required.allocations="[12.6 GiB]" memory.weights.total="24.4 GiB" memory.weights.repeating="23.8 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
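To see why the 2.8 GiB overhead makes the partial offload fit, here is a rough, hypothetical sketch of the budget arithmetic implied by the offload line above, using the logged values rounded to MiB (the variable names are mine, not ollama's):

```shell
# Values from the offload log line, converted to MiB
available=16077   # memory.available = 15.7 GiB
required=12902    # memory.required.partial = 12.6 GiB
overhead=2867     # memory.gpu_overhead = 2.8 GiB

# Subtracting the overhead shrinks the budget the scheduler will plan against,
# so the layer split is chosen conservatively enough to actually fit.
budget=$(( available - overhead ))
echo "budget=${budget} MiB, fits=$(( required <= budget ))"
```

Without the overhead, the scheduler plans against the full 15.7 GiB, which overstates what is really free once other applications' VRAM use is counted.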

OS

Linux

GPU

AMD

CPU

Intel

Ollama version

0.12.10

GiteaMirror added the amd, bug, gpu labels 2026-05-04 22:31:41 -05:00

@rick-github commented on GitHub (Nov 8, 2025):

Post the full log.


@binarynoise commented on GitHub (Nov 8, 2025):

systemd[1]: Started Ollama Service.
sudo[179945]: pam_unix(sudo:session): session closed for user root
ollama[179970]: time=2025-11-08T20:52:13.208+01:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:3000000000 OLLAMA_HOST:http://[::]:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:f16 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/Tilo4TB/var-lib-ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama[179970]: time=2025-11-08T20:52:13.230+01:00 level=INFO source=images.go:522 msg="total blobs: 147"
ollama[179970]: time=2025-11-08T20:52:13.234+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
ollama[179970]: time=2025-11-08T20:52:13.236+01:00 level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10.r17.g91ec3ddbeb2e)"
ollama[179970]: time=2025-11-08T20:52:13.236+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
ollama[179970]: time=2025-11-08T20:52:13.238+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40151"
ollama[179970]: time=2025-11-08T20:52:19.126+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33873"
ollama[179970]: time=2025-11-08T20:52:25.561+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-c2c6236518c28f70 filter_id="" library=ROCm compute=gfx1101 name=ROCm0 description="AMD Radeon RX 7800 XT" libdirs=ollama driver=60443.48 pci_id=0000:03:00.0 type=discrete total="16.0 GiB" available="15.8 GiB"
ollama[179970]: time=2025-11-08T20:52:25.561+01:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="13.2 GiB" threshold="20.0 GiB"
kernel: amdgpu: Freeing queue vital buffer 0x71db70200000, queue evicted
ollama[179970]: time=2025-11-08T20:52:25.803+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 33939"
ollama[179970]: llama_model_loader: loaded meta data with 56 key-value pairs and 561 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c (version GGUF V3 (latest))
ollama[179970]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[179970]: llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama[179970]: llama_model_loader: - kv   1:                               general.type str              = model
ollama[179970]: llama_model_loader: - kv   2:                               general.name str              = Mistral Magistral Devstral Instruct F...
ollama[179970]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct-FUSED-CODER-Reasoning
ollama[179970]: llama_model_loader: - kv   4:                           general.basename str              = Mistral-Magistral-Devstral
ollama[179970]: llama_model_loader: - kv   5:                         general.size_label str              = 36B
ollama[179970]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
ollama[179970]: llama_model_loader: - kv   7:                   general.base_model.count u32              = 2
ollama[179970]: llama_model_loader: - kv   8:                  general.base_model.0.name str              = Devstral Small 2507
ollama[179970]: llama_model_loader: - kv   9:               general.base_model.0.version str              = 2507
ollama[179970]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Mistralai
ollama[179970]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/mistralai/Devs...
ollama[179970]: llama_model_loader: - kv  12:                  general.base_model.1.name str              = Magistral Small 2506
ollama[179970]: llama_model_loader: - kv  13:               general.base_model.1.version str              = 2506
ollama[179970]: llama_model_loader: - kv  14:          general.base_model.1.organization str              = Mistralai
ollama[179970]: llama_model_loader: - kv  15:              general.base_model.1.repo_url str              = https://huggingface.co/mistralai/Magi...
ollama[179970]: llama_model_loader: - kv  16:                               general.tags arr[str,14]      = ["merge", "programming", "code genera...
ollama[179970]: llama_model_loader: - kv  17:                          general.languages arr[str,24]      = ["en", "fr", "de", "es", "pt", "it", ...
ollama[179970]: llama_model_loader: - kv  18:                          llama.block_count u32              = 62
ollama[179970]: llama_model_loader: - kv  19:                       llama.context_length u32              = 131072
ollama[179970]: llama_model_loader: - kv  20:                     llama.embedding_length u32              = 5120
ollama[179970]: llama_model_loader: - kv  21:                  llama.feed_forward_length u32              = 32768
ollama[179970]: llama_model_loader: - kv  22:                 llama.attention.head_count u32              = 32
ollama[179970]: llama_model_loader: - kv  23:              llama.attention.head_count_kv u32              = 8
ollama[179970]: llama_model_loader: - kv  24:                       llama.rope.freq_base f32              = 1000000000.000000
ollama[179970]: llama_model_loader: - kv  25:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama[179970]: llama_model_loader: - kv  26:                 llama.attention.key_length u32              = 128
ollama[179970]: llama_model_loader: - kv  27:               llama.attention.value_length u32              = 128
ollama[179970]: llama_model_loader: - kv  28:                           llama.vocab_size u32              = 131072
ollama[179970]: llama_model_loader: - kv  29:                 llama.rope.dimension_count u32              = 128
ollama[179970]: llama_model_loader: - kv  30:                       tokenizer.ggml.model str              = gpt2
ollama[179970]: llama_model_loader: - kv  31:                         tokenizer.ggml.pre str              = tekken
ollama[179970]: llama_model_loader: - kv  32:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
ollama[179970]: llama_model_loader: - kv  33:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
ollama[179970]: [132B blob data]
ollama[179970]: llama_model_loader: - kv  35:                tokenizer.ggml.bos_token_id u32              = 1
ollama[179970]: llama_model_loader: - kv  36:                tokenizer.ggml.eos_token_id u32              = 2
ollama[179970]: llama_model_loader: - kv  37:            tokenizer.ggml.unknown_token_id u32              = 0
ollama[179970]: llama_model_loader: - kv  38:               tokenizer.ggml.add_bos_token bool             = true
ollama[179970]: llama_model_loader: - kv  39:               tokenizer.ggml.add_sep_token bool             = false
ollama[179970]: llama_model_loader: - kv  40:               tokenizer.ggml.add_eos_token bool             = false
ollama[179970]: llama_model_loader: - kv  41:                    tokenizer.chat_template str              = {%- set today = strftime_now("%Y-%m-%...
ollama[179970]: llama_model_loader: - kv  42:            tokenizer.ggml.add_space_prefix bool             = false
ollama[179970]: llama_model_loader: - kv  43:               general.quantization_version u32              = 2
ollama[179970]: llama_model_loader: - kv  44:                          general.file_type u32              = 23
ollama[179970]: llama_model_loader: - kv  45:                                general.url str              = https://huggingface.co/mradermacher/M...
ollama[179970]: llama_model_loader: - kv  46:              mradermacher.quantize_version str              = 2
ollama[179970]: llama_model_loader: - kv  47:                  mradermacher.quantized_by str              = mradermacher
ollama[179970]: llama_model_loader: - kv  48:                  mradermacher.quantized_at str              = 2025-07-30T12:53:06+02:00
ollama[179970]: llama_model_loader: - kv  49:                  mradermacher.quantized_on str              = rich1
ollama[179970]: llama_model_loader: - kv  50:                         general.source.url str              = https://huggingface.co/DavidAU/Mistra...
ollama[179970]: llama_model_loader: - kv  51:                  mradermacher.convert_type str              = hf
ollama[179970]: llama_model_loader: - kv  52:                      quantize.imatrix.file str              = Mistral-Magistral-Devstral-Instruct-F...
ollama[179970]: llama_model_loader: - kv  53:                   quantize.imatrix.dataset str              = imatrix-training-full-3
ollama[179970]: llama_model_loader: - kv  54:             quantize.imatrix.entries_count u32              = 434
ollama[179970]: llama_model_loader: - kv  55:              quantize.imatrix.chunks_count u32              = 321
ollama[179970]: llama_model_loader: - type  f32:  125 tensors
ollama[179970]: llama_model_loader: - type q4_K:   62 tensors
ollama[179970]: llama_model_loader: - type q5_K:    1 tensors
ollama[179970]: llama_model_loader: - type iq3_xxs:  186 tensors
ollama[179970]: llama_model_loader: - type iq3_s:   63 tensors
ollama[179970]: llama_model_loader: - type iq2_s:  124 tensors
ollama[179970]: print_info: file format = GGUF V3 (latest)
ollama[179970]: print_info: file type   = IQ3_XXS - 3.0625 bpw
ollama[179970]: print_info: file size   = 13.00 GiB (3.12 BPW)
ollama[179970]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[179970]: load: printing all EOG tokens:
ollama[179970]: load:   - 2 ('</s>')
ollama[179970]: load: special tokens cache size = 1000
ollama[179970]: load: token to piece cache size = 0.8498 MB
ollama[179970]: print_info: arch             = llama
ollama[179970]: print_info: vocab_only       = 1
ollama[179970]: print_info: model type       = ?B
ollama[179970]: print_info: model params     = 35.80 B
ollama[179970]: print_info: general.name     = Mistral Magistral Devstral Instruct FUSED CODER Reasoning 36B
ollama[179970]: print_info: vocab type       = BPE
ollama[179970]: print_info: n_vocab          = 131072
ollama[179970]: print_info: n_merges         = 269443
ollama[179970]: print_info: BOS token        = 1 '<s>'
ollama[179970]: print_info: EOS token        = 2 '</s>'
ollama[179970]: print_info: UNK token        = 0 '<unk>'
ollama[179970]: print_info: LF token         = 1010 'Ċ'
ollama[179970]: print_info: EOG token        = 2 '</s>'
ollama[179970]: print_info: max token length = 150
ollama[179970]: llama_model_load: vocab only - skipping tensors
ollama[179970]: time=2025-11-08T20:52:31.594+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
ollama[179970]: time=2025-11-08T20:52:31.595+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --model /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c --port 38137"
ollama[179970]: time=2025-11-08T20:52:31.595+01:00 level=INFO source=server.go:470 msg="system memory" total="62.5 GiB" free="41.4 GiB" free_swap="92.5 GiB"
ollama[179970]: time=2025-11-08T20:52:31.596+01:00 level=INFO source=server.go:522 msg=offload library=ROCm layers.requested=-1 layers.model=63 layers.offload=50 layers.split=[50] memory.available="[15.8 GiB]" memory.gpu_overhead="2.8 GiB" memory.required.full="16.1 GiB" memory.required.partial="12.9 GiB" memory.required.kv="1.9 GiB" memory.required.allocations="[12.9 GiB]" memory.weights.total="12.7 GiB" memory.weights.repeating="12.3 GiB" memory.weights.nonrepeating="440.0 MiB" memory.graph.full="568.0 MiB" memory.graph.partial="801.0 MiB"
ollama[179970]: time=2025-11-08T20:52:31.602+01:00 level=INFO source=runner.go:910 msg="starting go runner"
ollama[179970]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ollama[179970]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ollama[179970]: ggml_cuda_init: found 1 ROCm devices:
ollama[179970]:   Device 0: AMD Radeon RX 7800 XT, gfx1101 (0x1101), VMM: no, Wave Size: 32, ID: GPU-c2c6236518c28f70
ollama[179970]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
ollama[179970]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
ollama[179970]: time=2025-11-08T20:52:36.698+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.SSE3=1 CPU.1.SSSE3=1 CPU.1.AVX=1 CPU.1.AVX_VNNI=1 CPU.1.AVX2=1 CPU.1.F16C=1 CPU.1.FMA=1 CPU.1.BMI2=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
ollama[179970]: time=2025-11-08T20:52:36.698+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:38137"
ollama[179970]: time=2025-11-08T20:52:36.701+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType:f16 NumThreads:16 GPULayers:50[ID:GPU-c2c6236518c28f70 Layers:50(12..61)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
ollama[179970]: time=2025-11-08T20:52:36.702+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
ollama[179970]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7800 XT) (0000:03:00.0) - 16154 MiB free
ollama[179970]: time=2025-11-08T20:52:36.702+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ollama[179970]: llama_model_loader: loaded meta data with 56 key-value pairs and 561 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c (version GGUF V3 (latest))
ollama[179970]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[179970]: llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama[179970]: llama_model_loader: - kv   1:                               general.type str              = model
ollama[179970]: llama_model_loader: - kv   2:                               general.name str              = Mistral Magistral Devstral Instruct F...
ollama[179970]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct-FUSED-CODER-Reasoning
ollama[179970]: llama_model_loader: - kv   4:                           general.basename str              = Mistral-Magistral-Devstral
ollama[179970]: llama_model_loader: - kv   5:                         general.size_label str              = 36B
ollama[179970]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
ollama[179970]: llama_model_loader: - kv   7:                   general.base_model.count u32              = 2
ollama[179970]: llama_model_loader: - kv   8:                  general.base_model.0.name str              = Devstral Small 2507
ollama[179970]: llama_model_loader: - kv   9:               general.base_model.0.version str              = 2507
ollama[179970]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Mistralai
ollama[179970]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/mistralai/Devs...
ollama[179970]: llama_model_loader: - kv  12:                  general.base_model.1.name str              = Magistral Small 2506
ollama[179970]: llama_model_loader: - kv  13:               general.base_model.1.version str              = 2506
ollama[179970]: llama_model_loader: - kv  14:          general.base_model.1.organization str              = Mistralai
ollama[179970]: llama_model_loader: - kv  15:              general.base_model.1.repo_url str              = https://huggingface.co/mistralai/Magi...
ollama[179970]: llama_model_loader: - kv  16:                               general.tags arr[str,14]      = ["merge", "programming", "code genera...
ollama[179970]: llama_model_loader: - kv  17:                          general.languages arr[str,24]      = ["en", "fr", "de", "es", "pt", "it", ...
ollama[179970]: llama_model_loader: - kv  18:                          llama.block_count u32              = 62
ollama[179970]: llama_model_loader: - kv  19:                       llama.context_length u32              = 131072
ollama[179970]: llama_model_loader: - kv  20:                     llama.embedding_length u32              = 5120
ollama[179970]: llama_model_loader: - kv  21:                  llama.feed_forward_length u32              = 32768
ollama[179970]: llama_model_loader: - kv  22:                 llama.attention.head_count u32              = 32
ollama[179970]: llama_model_loader: - kv  23:              llama.attention.head_count_kv u32              = 8
ollama[179970]: llama_model_loader: - kv  24:                       llama.rope.freq_base f32              = 1000000000.000000
ollama[179970]: llama_model_loader: - kv  25:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama[179970]: llama_model_loader: - kv  26:                 llama.attention.key_length u32              = 128
ollama[179970]: llama_model_loader: - kv  27:               llama.attention.value_length u32              = 128
ollama[179970]: llama_model_loader: - kv  28:                           llama.vocab_size u32              = 131072
ollama[179970]: llama_model_loader: - kv  29:                 llama.rope.dimension_count u32              = 128
ollama[179970]: llama_model_loader: - kv  30:                       tokenizer.ggml.model str              = gpt2
ollama[179970]: llama_model_loader: - kv  31:                         tokenizer.ggml.pre str              = tekken
ollama[179970]: llama_model_loader: - kv  32:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
ollama[179970]: llama_model_loader: - kv  33:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
ollama[179970]: [132B blob data]
ollama[179970]: llama_model_loader: - kv  35:                tokenizer.ggml.bos_token_id u32              = 1
ollama[179970]: llama_model_loader: - kv  36:                tokenizer.ggml.eos_token_id u32              = 2
ollama[179970]: llama_model_loader: - kv  37:            tokenizer.ggml.unknown_token_id u32              = 0
ollama[179970]: llama_model_loader: - kv  38:               tokenizer.ggml.add_bos_token bool             = true
ollama[179970]: llama_model_loader: - kv  39:               tokenizer.ggml.add_sep_token bool             = false
ollama[179970]: llama_model_loader: - kv  40:               tokenizer.ggml.add_eos_token bool             = false
ollama[179970]: llama_model_loader: - kv  41:                    tokenizer.chat_template str              = {%- set today = strftime_now("%Y-%m-%...
ollama[179970]: llama_model_loader: - kv  42:            tokenizer.ggml.add_space_prefix bool             = false
ollama[179970]: llama_model_loader: - kv  43:               general.quantization_version u32              = 2
ollama[179970]: llama_model_loader: - kv  44:                          general.file_type u32              = 23
ollama[179970]: llama_model_loader: - kv  45:                                general.url str              = https://huggingface.co/mradermacher/M...
ollama[179970]: llama_model_loader: - kv  46:              mradermacher.quantize_version str              = 2
ollama[179970]: llama_model_loader: - kv  47:                  mradermacher.quantized_by str              = mradermacher
ollama[179970]: llama_model_loader: - kv  48:                  mradermacher.quantized_at str              = 2025-07-30T12:53:06+02:00
ollama[179970]: llama_model_loader: - kv  49:                  mradermacher.quantized_on str              = rich1
ollama[179970]: llama_model_loader: - kv  50:                         general.source.url str              = https://huggingface.co/DavidAU/Mistra...
ollama[179970]: llama_model_loader: - kv  51:                  mradermacher.convert_type str              = hf
ollama[179970]: llama_model_loader: - kv  52:                      quantize.imatrix.file str              = Mistral-Magistral-Devstral-Instruct-F...
ollama[179970]: llama_model_loader: - kv  53:                   quantize.imatrix.dataset str              = imatrix-training-full-3
ollama[179970]: llama_model_loader: - kv  54:             quantize.imatrix.entries_count u32              = 434
ollama[179970]: llama_model_loader: - kv  55:              quantize.imatrix.chunks_count u32              = 321
ollama[179970]: llama_model_loader: - type  f32:  125 tensors
ollama[179970]: llama_model_loader: - type q4_K:   62 tensors
ollama[179970]: llama_model_loader: - type q5_K:    1 tensors
ollama[179970]: llama_model_loader: - type iq3_xxs:  186 tensors
ollama[179970]: llama_model_loader: - type iq3_s:   63 tensors
ollama[179970]: llama_model_loader: - type iq2_s:  124 tensors
ollama[179970]: print_info: file format = GGUF V3 (latest)
ollama[179970]: print_info: file type   = IQ3_XXS - 3.0625 bpw
ollama[179970]: print_info: file size   = 13.00 GiB (3.12 BPW)
ollama[179970]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[179970]: load: printing all EOG tokens:
ollama[179970]: load:   - 2 ('</s>')
ollama[179970]: load: special tokens cache size = 1000
ollama[179970]: load: token to piece cache size = 0.8498 MB
ollama[179970]: print_info: arch             = llama
ollama[179970]: print_info: vocab_only       = 0
ollama[179970]: print_info: n_ctx_train      = 131072
ollama[179970]: print_info: n_embd           = 5120
ollama[179970]: print_info: n_layer          = 62
ollama[179970]: print_info: n_head           = 32
ollama[179970]: print_info: n_head_kv        = 8
ollama[179970]: print_info: n_rot            = 128
ollama[179970]: print_info: n_swa            = 0
ollama[179970]: print_info: is_swa_any       = 0
ollama[179970]: print_info: n_embd_head_k    = 128
ollama[179970]: print_info: n_embd_head_v    = 128
ollama[179970]: print_info: n_gqa            = 4
ollama[179970]: print_info: n_embd_k_gqa     = 1024
ollama[179970]: print_info: n_embd_v_gqa     = 1024
ollama[179970]: print_info: f_norm_eps       = 0.0e+00
ollama[179970]: print_info: f_norm_rms_eps   = 1.0e-05
ollama[179970]: print_info: f_clamp_kqv      = 0.0e+00
ollama[179970]: print_info: f_max_alibi_bias = 0.0e+00
ollama[179970]: print_info: f_logit_scale    = 0.0e+00
ollama[179970]: print_info: f_attn_scale     = 0.0e+00
ollama[179970]: print_info: n_ff             = 32768
ollama[179970]: print_info: n_expert         = 0
ollama[179970]: print_info: n_expert_used    = 0
ollama[179970]: print_info: causal attn      = 1
ollama[179970]: print_info: pooling type     = 0
ollama[179970]: print_info: rope type        = 0
ollama[179970]: print_info: rope scaling     = linear
ollama[179970]: print_info: freq_base_train  = 1000000000.0
ollama[179970]: print_info: freq_scale_train = 1
ollama[179970]: print_info: n_ctx_orig_yarn  = 131072
ollama[179970]: print_info: rope_finetuned   = unknown
ollama[179970]: print_info: model type       = ?B
ollama[179970]: print_info: model params     = 35.80 B
ollama[179970]: print_info: general.name     = Mistral Magistral Devstral Instruct FUSED CODER Reasoning 36B
ollama[179970]: print_info: vocab type       = BPE
ollama[179970]: print_info: n_vocab          = 131072
ollama[179970]: print_info: n_merges         = 269443
ollama[179970]: print_info: BOS token        = 1 '<s>'
ollama[179970]: print_info: EOS token        = 2 '</s>'
ollama[179970]: print_info: UNK token        = 0 '<unk>'
ollama[179970]: print_info: LF token         = 1010 'Ċ'
ollama[179970]: print_info: EOG token        = 2 '</s>'
ollama[179970]: print_info: max token length = 150
ollama[179970]: load_tensors: loading model tensors, this can take a while... (mmap = true)
ollama[179970]: load_tensors: offloading 50 repeating layers to GPU
ollama[179970]: load_tensors: offloaded 50/63 layers to GPU
ollama[179970]: load_tensors:        ROCm0 model buffer size = 10160.16 MiB
ollama[179970]: load_tensors:   CPU_Mapped model buffer size =  3153.46 MiB
ollama[179970]: llama_context: constructing llama_context
ollama[179970]: llama_context: n_seq_max     = 2
ollama[179970]: llama_context: n_ctx         = 8192
ollama[179970]: llama_context: n_ctx_per_seq = 4096
ollama[179970]: llama_context: n_batch       = 1024
ollama[179970]: llama_context: n_ubatch      = 512
ollama[179970]: llama_context: causal_attn   = 1
ollama[179970]: llama_context: flash_attn    = enabled
ollama[179970]: llama_context: kv_unified    = false
ollama[179970]: llama_context: freq_base     = 1000000000.0
ollama[179970]: llama_context: freq_scale    = 1
ollama[179970]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ollama[179970]: llama_context:        CPU  output buffer size =     1.04 MiB
ollama[179970]: llama_kv_cache:      ROCm0 KV buffer size =  1600.00 MiB
ollama[179970]: llama_kv_cache:        CPU KV buffer size =   384.00 MiB
ollama[179970]: llama_kv_cache: size = 1984.00 MiB (  4096 cells,  62 layers,  2/2 seqs), K (f16):  992.00 MiB, V (f16):  992.00 MiB
ollama[179970]: llama_context:      ROCm0 compute buffer size =   716.00 MiB
ollama[179970]: llama_context:  ROCm_Host compute buffer size =    18.01 MiB
ollama[179970]: llama_context: graph nodes  = 2053
ollama[179970]: llama_context: graph splits = 136 (with bs=512), 3 (with bs=1)
ollama[179970]: time=2025-11-08T20:52:39.211+01:00 level=INFO source=server.go:1289 msg="llama runner started in 7.62 seconds"
ollama[179970]: time=2025-11-08T20:52:39.211+01:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
ollama[179970]: time=2025-11-08T20:52:39.211+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
ollama[179970]: time=2025-11-08T20:52:39.211+01:00 level=INFO source=server.go:1289 msg="llama runner started in 7.62 seconds"

@rick-github commented on GitHub (Nov 8, 2025):

There's no crash in this log. It also appears that it is a custom build.


@binarynoise commented on GitHub (Nov 8, 2025):

As I said, I set `OLLAMA_GPU_OVERHEAD=3000000000` to get rid of the crashes.
Yes, this is a custom build, but it is almost identical to upstream: I only increased the timeouts for querying the GPU for fresh memory info, since those queries always timed out.
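For context, `OLLAMA_GPU_OVERHEAD` is specified in bytes (it appears as `OLLAMA_GPU_OVERHEAD:3000000000` in the server-config line of the logs), and 3000000000 bytes is exactly what the offload log rounds to `memory.gpu_overhead="2.8 GiB"`. A minimal sketch of the conversion and of setting the variable, assuming a POSIX shell with `awk`:

```shell
# 3000000000 bytes expressed in GiB; the server log rounds this to "2.8 GiB"
# (memory.gpu_overhead="2.8 GiB").
awk 'BEGIN { printf "%.1f GiB\n", 3000000000 / (1024^3) }'

# Workaround: start the server with that much VRAM per GPU held back from
# the scheduler's "available" estimate.
# OLLAMA_GPU_OVERHEAD=3000000000 ollama serve
```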

Here's a crash without the GPU overhead set:

systemd[1]: Started Ollama Service.
sudo[234484]: pam_unix(sudo:session): session closed for user root
ollama[234492]: time=2025-11-08T21:40:34.744+01:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://[::]:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:f16 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/Tilo4TB/var-lib-ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama[234492]: time=2025-11-08T21:40:34.768+01:00 level=INFO source=images.go:522 msg="total blobs: 147"
ollama[234492]: time=2025-11-08T21:40:34.771+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
ollama[234492]: time=2025-11-08T21:40:34.773+01:00 level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10.r17.g91ec3ddbeb2e)"
ollama[234492]: time=2025-11-08T21:40:34.774+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
ollama[234492]: time=2025-11-08T21:40:34.778+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39911"
ollama[234492]: time=2025-11-08T21:40:39.962+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42991"
ollama[234492]: time=2025-11-08T21:40:45.524+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-c2c6236518c28f70 filter_id="" library=ROCm compute=gfx1101 name=ROCm0 description="AMD Radeon RX 7800 XT" libdirs=ollama driver=60443.48 pci_id=0000:03:00.0 type=discrete total="16.0 GiB" available="15.8 GiB"
ollama[234492]: time=2025-11-08T21:40:45.524+01:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
kernel: amdgpu: Freeing queue vital buffer 0x734a69000000, queue evicted
ollama[234492]: time=2025-11-08T21:42:39.700+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40841"
ollama[234492]: llama_model_loader: loaded meta data with 56 key-value pairs and 561 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c (version GGUF V3 (latest))
ollama[234492]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[234492]: llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama[234492]: llama_model_loader: - kv   1:                               general.type str              = model
ollama[234492]: llama_model_loader: - kv   2:                               general.name str              = Mistral Magistral Devstral Instruct F...
ollama[234492]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct-FUSED-CODER-Reasoning
ollama[234492]: llama_model_loader: - kv   4:                           general.basename str              = Mistral-Magistral-Devstral
ollama[234492]: llama_model_loader: - kv   5:                         general.size_label str              = 36B
ollama[234492]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
ollama[234492]: llama_model_loader: - kv   7:                   general.base_model.count u32              = 2
ollama[234492]: llama_model_loader: - kv   8:                  general.base_model.0.name str              = Devstral Small 2507
ollama[234492]: llama_model_loader: - kv   9:               general.base_model.0.version str              = 2507
ollama[234492]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Mistralai
ollama[234492]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/mistralai/Devs...
ollama[234492]: llama_model_loader: - kv  12:                  general.base_model.1.name str              = Magistral Small 2506
ollama[234492]: llama_model_loader: - kv  13:               general.base_model.1.version str              = 2506
ollama[234492]: llama_model_loader: - kv  14:          general.base_model.1.organization str              = Mistralai
ollama[234492]: llama_model_loader: - kv  15:              general.base_model.1.repo_url str              = https://huggingface.co/mistralai/Magi...
ollama[234492]: llama_model_loader: - kv  16:                               general.tags arr[str,14]      = ["merge", "programming", "code genera...
ollama[234492]: llama_model_loader: - kv  17:                          general.languages arr[str,24]      = ["en", "fr", "de", "es", "pt", "it", ...
ollama[234492]: llama_model_loader: - kv  18:                          llama.block_count u32              = 62
ollama[234492]: llama_model_loader: - kv  19:                       llama.context_length u32              = 131072
ollama[234492]: llama_model_loader: - kv  20:                     llama.embedding_length u32              = 5120
ollama[234492]: llama_model_loader: - kv  21:                  llama.feed_forward_length u32              = 32768
ollama[234492]: llama_model_loader: - kv  22:                 llama.attention.head_count u32              = 32
ollama[234492]: llama_model_loader: - kv  23:              llama.attention.head_count_kv u32              = 8
ollama[234492]: llama_model_loader: - kv  24:                       llama.rope.freq_base f32              = 1000000000.000000
ollama[234492]: llama_model_loader: - kv  25:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama[234492]: llama_model_loader: - kv  26:                 llama.attention.key_length u32              = 128
ollama[234492]: llama_model_loader: - kv  27:               llama.attention.value_length u32              = 128
ollama[234492]: llama_model_loader: - kv  28:                           llama.vocab_size u32              = 131072
ollama[234492]: llama_model_loader: - kv  29:                 llama.rope.dimension_count u32              = 128
ollama[234492]: llama_model_loader: - kv  30:                       tokenizer.ggml.model str              = gpt2
ollama[234492]: llama_model_loader: - kv  31:                         tokenizer.ggml.pre str              = tekken
ollama[234492]: llama_model_loader: - kv  32:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
ollama[234492]: llama_model_loader: - kv  33:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
ollama[234492]: [132B blob data]
ollama[234492]: llama_model_loader: - kv  35:                tokenizer.ggml.bos_token_id u32              = 1
ollama[234492]: llama_model_loader: - kv  36:                tokenizer.ggml.eos_token_id u32              = 2
ollama[234492]: llama_model_loader: - kv  37:            tokenizer.ggml.unknown_token_id u32              = 0
ollama[234492]: llama_model_loader: - kv  38:               tokenizer.ggml.add_bos_token bool             = true
ollama[234492]: llama_model_loader: - kv  39:               tokenizer.ggml.add_sep_token bool             = false
ollama[234492]: llama_model_loader: - kv  40:               tokenizer.ggml.add_eos_token bool             = false
ollama[234492]: llama_model_loader: - kv  41:                    tokenizer.chat_template str              = {%- set today = strftime_now("%Y-%m-%...
ollama[234492]: llama_model_loader: - kv  42:            tokenizer.ggml.add_space_prefix bool             = false
ollama[234492]: llama_model_loader: - kv  43:               general.quantization_version u32              = 2
ollama[234492]: llama_model_loader: - kv  44:                          general.file_type u32              = 23
ollama[234492]: llama_model_loader: - kv  45:                                general.url str              = https://huggingface.co/mradermacher/M...
ollama[234492]: llama_model_loader: - kv  46:              mradermacher.quantize_version str              = 2
ollama[234492]: llama_model_loader: - kv  47:                  mradermacher.quantized_by str              = mradermacher
ollama[234492]: llama_model_loader: - kv  48:                  mradermacher.quantized_at str              = 2025-07-30T12:53:06+02:00
ollama[234492]: llama_model_loader: - kv  49:                  mradermacher.quantized_on str              = rich1
ollama[234492]: llama_model_loader: - kv  50:                         general.source.url str              = https://huggingface.co/DavidAU/Mistra...
ollama[234492]: llama_model_loader: - kv  51:                  mradermacher.convert_type str              = hf
ollama[234492]: llama_model_loader: - kv  52:                      quantize.imatrix.file str              = Mistral-Magistral-Devstral-Instruct-F...
ollama[234492]: llama_model_loader: - kv  53:                   quantize.imatrix.dataset str              = imatrix-training-full-3
ollama[234492]: llama_model_loader: - kv  54:             quantize.imatrix.entries_count u32              = 434
ollama[234492]: llama_model_loader: - kv  55:              quantize.imatrix.chunks_count u32              = 321
ollama[234492]: llama_model_loader: - type  f32:  125 tensors
ollama[234492]: llama_model_loader: - type q4_K:   62 tensors
ollama[234492]: llama_model_loader: - type q5_K:    1 tensors
ollama[234492]: llama_model_loader: - type iq3_xxs:  186 tensors
ollama[234492]: llama_model_loader: - type iq3_s:   63 tensors
ollama[234492]: llama_model_loader: - type iq2_s:  124 tensors
ollama[234492]: print_info: file format = GGUF V3 (latest)
ollama[234492]: print_info: file type   = IQ3_XXS - 3.0625 bpw
ollama[234492]: print_info: file size   = 13.00 GiB (3.12 BPW)
ollama[234492]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[234492]: load: printing all EOG tokens:
ollama[234492]: load:   - 2 ('</s>')
ollama[234492]: load: special tokens cache size = 1000
ollama[234492]: load: token to piece cache size = 0.8498 MB
ollama[234492]: print_info: arch             = llama
ollama[234492]: print_info: vocab_only       = 1
ollama[234492]: print_info: model type       = ?B
ollama[234492]: print_info: model params     = 35.80 B
ollama[234492]: print_info: general.name     = Mistral Magistral Devstral Instruct FUSED CODER Reasoning 36B
ollama[234492]: print_info: vocab type       = BPE
ollama[234492]: print_info: n_vocab          = 131072
ollama[234492]: print_info: n_merges         = 269443
ollama[234492]: print_info: BOS token        = 1 '<s>'
ollama[234492]: print_info: EOS token        = 2 '</s>'
ollama[234492]: print_info: UNK token        = 0 '<unk>'
ollama[234492]: print_info: LF token         = 1010 'Ċ'
ollama[234492]: print_info: EOG token        = 2 '</s>'
ollama[234492]: print_info: max token length = 150
ollama[234492]: llama_model_load: vocab only - skipping tensors
ollama[234492]: time=2025-11-08T21:42:45.008+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
ollama[234492]: time=2025-11-08T21:42:45.009+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --model /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c --port 42615"
ollama[234492]: time=2025-11-08T21:42:45.009+01:00 level=INFO source=server.go:470 msg="system memory" total="62.5 GiB" free="43.2 GiB" free_swap="92.3 GiB"
ollama[234492]: time=2025-11-08T21:42:45.010+01:00 level=INFO source=server.go:522 msg=offload library=ROCm layers.requested=-1 layers.model=63 layers.offload=62 layers.split=[62] memory.available="[15.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.1 GiB" memory.required.partial="15.7 GiB" memory.required.kv="1.9 GiB" memory.required.allocations="[15.7 GiB]" memory.weights.total="12.7 GiB" memory.weights.repeating="12.3 GiB" memory.weights.nonrepeating="440.0 MiB" memory.graph.full="568.0 MiB" memory.graph.partial="801.0 MiB"
ollama[234492]: time=2025-11-08T21:42:45.018+01:00 level=INFO source=runner.go:910 msg="starting go runner"
ollama[234492]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ollama[234492]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ollama[234492]: ggml_cuda_init: found 1 ROCm devices:
ollama[234492]:   Device 0: AMD Radeon RX 7800 XT, gfx1101 (0x1101), VMM: no, Wave Size: 32, ID: GPU-c2c6236518c28f70
ollama[234492]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
ollama[234492]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
ollama[234492]: time=2025-11-08T21:42:50.441+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.SSE3=1 CPU.1.SSSE3=1 CPU.1.AVX=1 CPU.1.AVX_VNNI=1 CPU.1.AVX2=1 CPU.1.F16C=1 CPU.1.FMA=1 CPU.1.BMI2=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
ollama[234492]: time=2025-11-08T21:42:50.441+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:42615"
ollama[234492]: time=2025-11-08T21:42:50.444+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType:f16 NumThreads:16 GPULayers:62[ID:GPU-c2c6236518c28f70 Layers:62(0..61)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
ollama[234492]: time=2025-11-08T21:42:50.444+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
ollama[234492]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7800 XT) (0000:03:00.0) - 16154 MiB free
ollama[234492]: time=2025-11-08T21:42:50.444+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ollama[234492]: llama_model_loader: loaded meta data with 56 key-value pairs and 561 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c (version GGUF V3 (latest))
ollama[234492]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[234492]: llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama[234492]: llama_model_loader: - kv   1:                               general.type str              = model
ollama[234492]: llama_model_loader: - kv   2:                               general.name str              = Mistral Magistral Devstral Instruct F...
ollama[234492]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct-FUSED-CODER-Reasoning
ollama[234492]: llama_model_loader: - kv   4:                           general.basename str              = Mistral-Magistral-Devstral
ollama[234492]: llama_model_loader: - kv   5:                         general.size_label str              = 36B
ollama[234492]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
ollama[234492]: llama_model_loader: - kv   7:                   general.base_model.count u32              = 2
ollama[234492]: llama_model_loader: - kv   8:                  general.base_model.0.name str              = Devstral Small 2507
ollama[234492]: llama_model_loader: - kv   9:               general.base_model.0.version str              = 2507
ollama[234492]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Mistralai
ollama[234492]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/mistralai/Devs...
ollama[234492]: llama_model_loader: - kv  12:                  general.base_model.1.name str              = Magistral Small 2506
ollama[234492]: llama_model_loader: - kv  13:               general.base_model.1.version str              = 2506
ollama[234492]: llama_model_loader: - kv  14:          general.base_model.1.organization str              = Mistralai
ollama[234492]: llama_model_loader: - kv  15:              general.base_model.1.repo_url str              = https://huggingface.co/mistralai/Magi...
ollama[234492]: llama_model_loader: - kv  16:                               general.tags arr[str,14]      = ["merge", "programming", "code genera...
ollama[234492]: llama_model_loader: - kv  17:                          general.languages arr[str,24]      = ["en", "fr", "de", "es", "pt", "it", ...
ollama[234492]: llama_model_loader: - kv  18:                          llama.block_count u32              = 62
ollama[234492]: llama_model_loader: - kv  19:                       llama.context_length u32              = 131072
ollama[234492]: llama_model_loader: - kv  20:                     llama.embedding_length u32              = 5120
ollama[234492]: llama_model_loader: - kv  21:                  llama.feed_forward_length u32              = 32768
ollama[234492]: llama_model_loader: - kv  22:                 llama.attention.head_count u32              = 32
ollama[234492]: llama_model_loader: - kv  23:              llama.attention.head_count_kv u32              = 8
ollama[234492]: llama_model_loader: - kv  24:                       llama.rope.freq_base f32              = 1000000000.000000
ollama[234492]: llama_model_loader: - kv  25:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama[234492]: llama_model_loader: - kv  26:                 llama.attention.key_length u32              = 128
ollama[234492]: llama_model_loader: - kv  27:               llama.attention.value_length u32              = 128
ollama[234492]: llama_model_loader: - kv  28:                           llama.vocab_size u32              = 131072
ollama[234492]: llama_model_loader: - kv  29:                 llama.rope.dimension_count u32              = 128
ollama[234492]: llama_model_loader: - kv  30:                       tokenizer.ggml.model str              = gpt2
ollama[234492]: llama_model_loader: - kv  31:                         tokenizer.ggml.pre str              = tekken
ollama[234492]: llama_model_loader: - kv  32:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
ollama[234492]: llama_model_loader: - kv  33:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
ollama[234492]: [132B blob data]
ollama[234492]: llama_model_loader: - kv  35:                tokenizer.ggml.bos_token_id u32              = 1
ollama[234492]: llama_model_loader: - kv  36:                tokenizer.ggml.eos_token_id u32              = 2
ollama[234492]: llama_model_loader: - kv  37:            tokenizer.ggml.unknown_token_id u32              = 0
ollama[234492]: llama_model_loader: - kv  38:               tokenizer.ggml.add_bos_token bool             = true
ollama[234492]: llama_model_loader: - kv  39:               tokenizer.ggml.add_sep_token bool             = false
ollama[234492]: llama_model_loader: - kv  40:               tokenizer.ggml.add_eos_token bool             = false
ollama[234492]: llama_model_loader: - kv  41:                    tokenizer.chat_template str              = {%- set today = strftime_now("%Y-%m-%...
ollama[234492]: llama_model_loader: - kv  42:            tokenizer.ggml.add_space_prefix bool             = false
ollama[234492]: llama_model_loader: - kv  43:               general.quantization_version u32              = 2
ollama[234492]: llama_model_loader: - kv  44:                          general.file_type u32              = 23
ollama[234492]: llama_model_loader: - kv  45:                                general.url str              = https://huggingface.co/mradermacher/M...
ollama[234492]: llama_model_loader: - kv  46:              mradermacher.quantize_version str              = 2
ollama[234492]: llama_model_loader: - kv  47:                  mradermacher.quantized_by str              = mradermacher
ollama[234492]: llama_model_loader: - kv  48:                  mradermacher.quantized_at str              = 2025-07-30T12:53:06+02:00
ollama[234492]: llama_model_loader: - kv  49:                  mradermacher.quantized_on str              = rich1
ollama[234492]: llama_model_loader: - kv  50:                         general.source.url str              = https://huggingface.co/DavidAU/Mistra...
ollama[234492]: llama_model_loader: - kv  51:                  mradermacher.convert_type str              = hf
ollama[234492]: llama_model_loader: - kv  52:                      quantize.imatrix.file str              = Mistral-Magistral-Devstral-Instruct-F...
ollama[234492]: llama_model_loader: - kv  53:                   quantize.imatrix.dataset str              = imatrix-training-full-3
ollama[234492]: llama_model_loader: - kv  54:             quantize.imatrix.entries_count u32              = 434
ollama[234492]: llama_model_loader: - kv  55:              quantize.imatrix.chunks_count u32              = 321
ollama[234492]: llama_model_loader: - type  f32:  125 tensors
ollama[234492]: llama_model_loader: - type q4_K:   62 tensors
ollama[234492]: llama_model_loader: - type q5_K:    1 tensors
ollama[234492]: llama_model_loader: - type iq3_xxs:  186 tensors
ollama[234492]: llama_model_loader: - type iq3_s:   63 tensors
ollama[234492]: llama_model_loader: - type iq2_s:  124 tensors
ollama[234492]: print_info: file format = GGUF V3 (latest)
ollama[234492]: print_info: file type   = IQ3_XXS - 3.0625 bpw
ollama[234492]: print_info: file size   = 13.00 GiB (3.12 BPW)
ollama[234492]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[234492]: load: printing all EOG tokens:
ollama[234492]: load:   - 2 ('</s>')
ollama[234492]: load: special tokens cache size = 1000
ollama[234492]: load: token to piece cache size = 0.8498 MB
ollama[234492]: print_info: arch             = llama
ollama[234492]: print_info: vocab_only       = 0
ollama[234492]: print_info: n_ctx_train      = 131072
ollama[234492]: print_info: n_embd           = 5120
ollama[234492]: print_info: n_layer          = 62
ollama[234492]: print_info: n_head           = 32
ollama[234492]: print_info: n_head_kv        = 8
ollama[234492]: print_info: n_rot            = 128
ollama[234492]: print_info: n_swa            = 0
ollama[234492]: print_info: is_swa_any       = 0
ollama[234492]: print_info: n_embd_head_k    = 128
ollama[234492]: print_info: n_embd_head_v    = 128
ollama[234492]: print_info: n_gqa            = 4
ollama[234492]: print_info: n_embd_k_gqa     = 1024
ollama[234492]: print_info: n_embd_v_gqa     = 1024
ollama[234492]: print_info: f_norm_eps       = 0.0e+00
ollama[234492]: print_info: f_norm_rms_eps   = 1.0e-05
ollama[234492]: print_info: f_clamp_kqv      = 0.0e+00
ollama[234492]: print_info: f_max_alibi_bias = 0.0e+00
ollama[234492]: print_info: f_logit_scale    = 0.0e+00
ollama[234492]: print_info: f_attn_scale     = 0.0e+00
ollama[234492]: print_info: n_ff             = 32768
ollama[234492]: print_info: n_expert         = 0
ollama[234492]: print_info: n_expert_used    = 0
ollama[234492]: print_info: causal attn      = 1
ollama[234492]: print_info: pooling type     = 0
ollama[234492]: print_info: rope type        = 0
ollama[234492]: print_info: rope scaling     = linear
ollama[234492]: print_info: freq_base_train  = 1000000000.0
ollama[234492]: print_info: freq_scale_train = 1
ollama[234492]: print_info: n_ctx_orig_yarn  = 131072
ollama[234492]: print_info: rope_finetuned   = unknown
ollama[234492]: print_info: model type       = ?B
ollama[234492]: print_info: model params     = 35.80 B
ollama[234492]: print_info: general.name     = Mistral Magistral Devstral Instruct FUSED CODER Reasoning 36B
ollama[234492]: print_info: vocab type       = BPE
ollama[234492]: print_info: n_vocab          = 131072
ollama[234492]: print_info: n_merges         = 269443
ollama[234492]: print_info: BOS token        = 1 '<s>'
ollama[234492]: print_info: EOS token        = 2 '</s>'
ollama[234492]: print_info: UNK token        = 0 '<unk>'
ollama[234492]: print_info: LF token         = 1010 'Ċ'
ollama[234492]: print_info: EOG token        = 2 '</s>'
ollama[234492]: print_info: max token length = 150
ollama[234492]: load_tensors: loading model tensors, this can take a while... (mmap = true)
ollama[234492]: load_tensors: offloading 62 repeating layers to GPU
ollama[234492]: load_tensors: offloaded 62/63 layers to GPU
ollama[234492]: load_tensors:        ROCm0 model buffer size = 12598.59 MiB
ollama[234492]: load_tensors:   CPU_Mapped model buffer size =   715.02 MiB
ollama[234492]: llama_context: constructing llama_context
ollama[234492]: llama_context: n_seq_max     = 2
ollama[234492]: llama_context: n_ctx         = 8192
ollama[234492]: llama_context: n_ctx_per_seq = 4096
ollama[234492]: llama_context: n_batch       = 1024
ollama[234492]: llama_context: n_ubatch      = 512
ollama[234492]: llama_context: causal_attn   = 1
ollama[234492]: llama_context: flash_attn    = enabled
ollama[234492]: llama_context: kv_unified    = false
ollama[234492]: llama_context: freq_base     = 1000000000.0
ollama[234492]: llama_context: freq_scale    = 1
ollama[234492]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ollama[234492]: llama_context:        CPU  output buffer size =     1.04 MiB
ollama[234492]: llama_kv_cache:      ROCm0 KV buffer size =  1984.00 MiB
ollama[234492]: llama_kv_cache: size = 1984.00 MiB (  4096 cells,  62 layers,  2/2 seqs), K (f16):  992.00 MiB, V (f16):  992.00 MiB
kernel: amdgpu 0000:03:00.0: amdgpu: 00000000b0adae43 pin failed
kernel: [drm:amdgpu_dm_plane_helper_prepare_fb [amdgpu]] *ERROR* Failed to pin framebuffer with error -12
ollama[234492]: graph_reserve: failed to allocate compute buffers
ollama[234492]: SIGSEGV: segmentation violation
ollama[234492]: PC=0x741470f83e6a m=8 sigcode=1 addr=0x740a37002498
ollama[234492]: signal arrived during cgo execution
ollama[234492]: goroutine 53 gp=0xc000505340 m=8 mp=0xc000349808 [syscall]:
ollama[234492]: runtime.cgocall(0x5951d12d2100, 0xc0000afbf8)
ollama[234492]:         /usr/lib/go/src/runtime/cgocall.go:167 +0x4b fp=0xc0000afbd0 sp=0xc0000afb98 pc=0x5951d05c00cb
ollama[234492]: github.com/ollama/ollama/llama._Cfunc_llama_init_from_model(0x741314000ce0, {0x2000, 0x400, 0x200, 0x2, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...})
ollama[234492]:         _cgo_gotypes.go:753 +0x4e fp=0xc0000afbf8 sp=0xc0000afbd0 pc=0x5951d097c46e
ollama[234492]: github.com/ollama/ollama/llama.NewContextWithModel.func1(...)
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280
ollama[234492]: github.com/ollama/ollama/llama.NewContextWithModel(0xc0001ffe18, {{0x2000, 0x400, 0x200, 0x2, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...}})
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280 +0x158 fp=0xc0000afd98 sp=0xc0000afbf8 pc=0x5951d0980238
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0001a0280, {0x3e, 0x0, 0x1, {0xc0001ffb84, 0x1, 0x1}, 0xc000705b00, 0x0}, {0x7ffdc9e7ab7f, ...}, ...)
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:797 +0x198 fp=0xc0000afee0 sp=0xc0000afd98 pc=0x5951d0a3e598
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.(*Server).load.gowrap2()
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x175 fp=0xc0000affe0 sp=0xc0000afee0 pc=0x5951d0a3f635
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000affe8 sp=0xc0000affe0 pc=0x5951d05cb681
ollama[234492]: created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 51
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x7ce
ollama[234492]: goroutine 1 gp=0xc000002380 m=nil [IO wait]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00050f790 sp=0xc00050f770 pc=0x5951d05c354e
ollama[234492]: runtime.netpollblock(0xc00050f7e0?, 0xd0558526?, 0x51?)
ollama[234492]:         /usr/lib/go/src/runtime/netpoll.go:575 +0xf7 fp=0xc00050f7c8 sp=0xc00050f790 pc=0x5951d0587137
ollama[234492]: internal/poll.runtime_pollWait(0x7414cb888400, 0x72)
ollama[234492]:         /usr/lib/go/src/runtime/netpoll.go:351 +0x85 fp=0xc00050f7e8 sp=0xc00050f7c8 pc=0x5951d05c2725
ollama[234492]: internal/poll.(*pollDesc).wait(0xc0001fcb00?, 0x900000036?, 0x0)
ollama[234492]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00050f810 sp=0xc00050f7e8 pc=0x5951d064b1a7
ollama[234492]: internal/poll.(*pollDesc).waitRead(...)
ollama[234492]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89
ollama[234492]: internal/poll.(*FD).Accept(0xc0001fcb00)
ollama[234492]:         /usr/lib/go/src/internal/poll/fd_unix.go:613 +0x28c fp=0xc00050f8b8 sp=0xc00050f810 pc=0x5951d06505cc
ollama[234492]: net.(*netFD).accept(0xc0001fcb00)
ollama[234492]:         /usr/lib/go/src/net/fd_unix.go:161 +0x29 fp=0xc00050f970 sp=0xc00050f8b8 pc=0x5951d06baa49
ollama[234492]: net.(*TCPListener).accept(0xc0000c9840)
ollama[234492]:         /usr/lib/go/src/net/tcpsock_posix.go:159 +0x1b fp=0xc00050f9c0 sp=0xc00050f970 pc=0x5951d06d017b
ollama[234492]: net.(*TCPListener).Accept(0xc0000c9840)
ollama[234492]:         /usr/lib/go/src/net/tcpsock.go:380 +0x30 fp=0xc00050f9f0 sp=0xc00050f9c0 pc=0x5951d06cf010
ollama[234492]: net/http.(*onceCloseListener).Accept(0xc0004f6360?)
ollama[234492]:         <autogenerated>:1 +0x24 fp=0xc00050fa08 sp=0xc00050f9f0 pc=0x5951d08f19c4
ollama[234492]: net/http.(*Server).Serve(0xc000260200, {0x5951d1a2cda8, 0xc0000c9840})
ollama[234492]:         /usr/lib/go/src/net/http/server.go:3463 +0x30c fp=0xc00050fb38 sp=0xc00050fa08 pc=0x5951d08c93ac
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.Execute({0xc000036260, 0x4, 0x4})
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:947 +0x8f4 fp=0xc00050fd08 sp=0xc00050fb38 pc=0x5951d0a3fff4
ollama[234492]: github.com/ollama/ollama/runner.Execute({0xc000036250?, 0x0?, 0x0?})
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/runner.go:22 +0xd4 fp=0xc00050fd30 sp=0xc00050fd08 pc=0x5951d0ae0e54
ollama[234492]: github.com/ollama/ollama/cmd.NewCLI.func2(0xc000223100?, {0x5951d15532eb?, 0x4?, 0x5951d15532ef?})
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/cmd/cmd.go:1841 +0x45 fp=0xc00050fd58 sp=0xc00050fd30 pc=0x5951d1263085
ollama[234492]: github.com/spf13/cobra.(*Command).execute(0xc0004f9508, {0xc000708d80, 0x4, 0x4})
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x88a fp=0xc00050fe78 sp=0xc00050fd58 pc=0x5951d073420a
ollama[234492]: github.com/spf13/cobra.(*Command).ExecuteC(0xc0004de908)
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x398 fp=0xc00050ff30 sp=0xc00050fe78 pc=0x5951d0734a38
ollama[234492]: github.com/spf13/cobra.(*Command).Execute(...)
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
ollama[234492]: github.com/spf13/cobra.(*Command).ExecuteContext(...)
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
ollama[234492]: main.main()
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/main.go:12 +0x4d fp=0xc00050ff50 sp=0xc00050ff30 pc=0x5951d1263b6d
ollama[234492]: runtime.main()
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:285 +0x29d fp=0xc00050ffe0 sp=0xc00050ff50 pc=0x5951d058e9dd
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00050ffe8 sp=0xc00050ffe0 pc=0x5951d05cb681
ollama[234492]: goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009afa8 sp=0xc00009af88 pc=0x5951d05c354e
ollama[234492]: runtime.goparkunlock(...)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:466
ollama[234492]: runtime.forcegchelper()
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:373 +0xb8 fp=0xc00009afe0 sp=0xc00009afa8 pc=0x5951d058ed18
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009afe8 sp=0xc00009afe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.init.7 in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:361 +0x1a
ollama[234492]: goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
ollama[234492]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009b780 sp=0xc00009b760 pc=0x5951d05c354e
ollama[234492]: runtime.goparkunlock(...)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:466
ollama[234492]: runtime.bgsweep(0xc0000c6000)
ollama[234492]:         /usr/lib/go/src/runtime/mgcsweep.go:323 +0xdf fp=0xc00009b7c8 sp=0xc00009b780 pc=0x5951d0578a3f
ollama[234492]: runtime.gcenable.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:212 +0x25 fp=0xc00009b7e0 sp=0xc00009b7c8 pc=0x5951d056c9c5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009b7e8 sp=0xc00009b7e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcenable in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:212 +0x66
ollama[234492]: goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
ollama[234492]: runtime.gopark(0x10000?, 0x5951d171b4a8?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009bf78 sp=0xc00009bf58 pc=0x5951d05c354e
ollama[234492]: runtime.goparkunlock(...)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:466
ollama[234492]: runtime.(*scavengerState).park(0x5951d22fdf20)
ollama[234492]:         /usr/lib/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00009bfa8 sp=0xc00009bf78 pc=0x5951d05764a9
ollama[234492]: runtime.bgscavenge(0xc0000c6000)
ollama[234492]:         /usr/lib/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00009bfc8 sp=0xc00009bfa8 pc=0x5951d0576a59
ollama[234492]: runtime.gcenable.gowrap2()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:213 +0x25 fp=0xc00009bfe0 sp=0xc00009bfc8 pc=0x5951d056c965
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009bfe8 sp=0xc00009bfe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcenable in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:213 +0xa5
ollama[234492]: goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
ollama[234492]: runtime.gopark(0x5951d059dd17?, 0x5951d05642e5?, 0xb8?, 0x1?, 0xc000002380?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009a620 sp=0xc00009a600 pc=0x5951d05c354e
ollama[234492]: runtime.runFinalizers()
ollama[234492]:         /usr/lib/go/src/runtime/mfinal.go:210 +0x107 fp=0xc00009a7e0 sp=0xc00009a620 pc=0x5951d056b8c7
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009a7e8 sp=0xc00009a7e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.createfing in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mfinal.go:172 +0x3d
ollama[234492]: goroutine 6 gp=0xc0002008c0 m=nil [cleanup wait]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009c768 sp=0xc00009c748 pc=0x5951d05c354e
ollama[234492]: runtime.goparkunlock(...)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:466
ollama[234492]: runtime.(*cleanupQueue).dequeue(0x5951d22fe880)
ollama[234492]:         /usr/lib/go/src/runtime/mcleanup.go:439 +0xc5 fp=0xc00009c7a0 sp=0xc00009c768 pc=0x5951d0568aa5
ollama[234492]: runtime.runCleanups()
ollama[234492]:         /usr/lib/go/src/runtime/mcleanup.go:635 +0x45 fp=0xc00009c7e0 sp=0xc00009c7a0 pc=0x5951d0569165
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009c7e8 sp=0xc00009c7e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.(*cleanupQueue).createGs in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mcleanup.go:589 +0xa5
ollama[234492]: goroutine 7 gp=0xc000200c40 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009cf38 sp=0xc00009cf18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009cfc8 sp=0xc00009cf38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009cfe0 sp=0xc00009cfc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009cfe8 sp=0xc00009cfe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000096738 sp=0xc000096718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000967c8 sp=0xc000096738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000967e0 sp=0xc0000967c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000967e8 sp=0xc0000967e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000118738 sp=0xc000118718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0001187c8 sp=0xc000118738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0001187e0 sp=0xc0001187c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000118f38 sp=0xc000118f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000118fc8 sp=0xc000118f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000118fe0 sp=0xc000118fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000118fe8 sp=0xc000118fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 36 gp=0xc000102700 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000119738 sp=0xc000119718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0001197c8 sp=0xc000119738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0001197e0 sp=0xc0001197c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0001197e8 sp=0xc0001197e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 37 gp=0xc0001028c0 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000119f38 sp=0xc000119f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000119fc8 sp=0xc000119f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000119fe0 sp=0xc000119fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000119fe8 sp=0xc000119fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 38 gp=0xc000102a80 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00011a7c8 sp=0xc00011a738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 39 gp=0xc000102c40 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00011afc8 sp=0xc00011af38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 40 gp=0xc000102e00 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00011b738 sp=0xc00011b718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00011b7c8 sp=0xc00011b738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00011b7e0 sp=0xc00011b7c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00011b7e8 sp=0xc00011b7e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 41 gp=0xc000102fc0 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00011bf38 sp=0xc00011bf18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00011bfc8 sp=0xc00011bf38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00011bfe0 sp=0xc00011bfc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 42 gp=0xc000103180 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000114738 sp=0xc000114718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0001147c8 sp=0xc000114738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0001147e0 sp=0xc0001147c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0001147e8 sp=0xc0001147e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 43 gp=0xc000103340 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000114f38 sp=0xc000114f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000114fc8 sp=0xc000114f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000114fe0 sp=0xc000114fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000114fe8 sp=0xc000114fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 44 gp=0xc000103500 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000115738 sp=0xc000115718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0001157c8 sp=0xc000115738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0001157e0 sp=0xc0001157c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0001157e8 sp=0xc0001157e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 8 gp=0xc000200e00 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009d738 sp=0xc00009d718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009d7c8 sp=0xc00009d738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009d7e0 sp=0xc00009d7c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009d7e8 sp=0xc00009d7e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000096f38 sp=0xc000096f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000096fc8 sp=0xc000096f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000096fe0 sp=0xc000096fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000096fe8 sp=0xc000096fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 20 gp=0xc000504380 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000097738 sp=0xc000097718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000977c8 sp=0xc000097738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000977e0 sp=0xc0000977c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000977e8 sp=0xc0000977e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 45 gp=0xc0001036c0 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000115f38 sp=0xc000115f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000115fc8 sp=0xc000115f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000115fe0 sp=0xc000115fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000115fe8 sp=0xc000115fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 9 gp=0xc000200fc0 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009df38 sp=0xc00009df18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009dfc8 sp=0xc00009df38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009dfe0 sp=0xc00009dfc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009dfe8 sp=0xc00009dfe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 10 gp=0xc000201180 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004ae738 sp=0xc0004ae718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0004ae7c8 sp=0xc0004ae738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0004ae7e0 sp=0xc0004ae7c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004ae7e8 sp=0xc0004ae7e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 11 gp=0xc000201340 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004aef38 sp=0xc0004aef18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0004aefc8 sp=0xc0004aef38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0004aefe0 sp=0xc0004aefc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004aefe8 sp=0xc0004aefe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 21 gp=0xc000504540 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000097f38 sp=0xc000097f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000097fc8 sp=0xc000097f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000097fe0 sp=0xc000097fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000097fe8 sp=0xc000097fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 46 gp=0xc000103880 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000116738 sp=0xc000116718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0001167c8 sp=0xc000116738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0001167e0 sp=0xc0001167c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0001167e8 sp=0xc0001167e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 12 gp=0xc000201500 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x5951d23adc80?, 0x1?, 0xb0?, 0x5a?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0000adf38 sp=0xc0000adf18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000adfc8 sp=0xc0000adf38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000adfe0 sp=0xc0000adfc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 13 gp=0xc0002016c0 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x81a4e14fdbb?, 0x1?, 0x6?, 0xc8?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004aff38 sp=0xc0004aff18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0004affc8 sp=0xc0004aff38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0004affe0 sp=0xc0004affc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004affe8 sp=0xc0004affe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 14 gp=0xc000201880 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x81a4e14f644?, 0x3?, 0xde?, 0x91?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004b0738 sp=0xc0004b0718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0004b07c8 sp=0xc0004b0738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0004b07e0 sp=0xc0004b07c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004b07e8 sp=0xc0004b07e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 15 gp=0xc000201a40 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x81a4e139e6b?, 0x1?, 0xfe?, 0xc3?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004b0f38 sp=0xc0004b0f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0004b0fc8 sp=0xc0004b0f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0004b0fe0 sp=0xc0004b0fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004b0fe8 sp=0xc0004b0fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 16 gp=0xc000201c00 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x5951d23adc80?, 0x1?, 0x8a?, 0x2c?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004b1738 sp=0xc0004b1718 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0004b17c8 sp=0xc0004b1738 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0004b17e0 sp=0xc0004b17c8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004b17e8 sp=0xc0004b17e0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 50 gp=0xc000201dc0 m=nil [GC worker (idle)]:
ollama[234492]: runtime.gopark(0x81a4e14fd02?, 0x1?, 0x3d?, 0x26?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004b1f38 sp=0xc0004b1f18 pc=0x5951d05c354e
ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0004b1fc8 sp=0xc0004b1f38 pc=0x5951d056f0eb
ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0004b1fe0 sp=0xc0004b1fc8 pc=0x5951d056efc5
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004b1fe8 sp=0xc0004b1fe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[234492]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[234492]: goroutine 66 gp=0xc000582380 m=nil [sync.WaitGroup.Wait]:
ollama[234492]: runtime.gopark(0x5951d23adc80?, 0x8cacb4?, 0x60?, 0x60?, 0x0?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0004af620 sp=0xc0004af600 pc=0x5951d05c354e
ollama[234492]: runtime.goparkunlock(...)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:466
ollama[234492]: runtime.semacquire1(0xc0001a02a0, 0x0, 0x1, 0x0, 0x19)
ollama[234492]:         /usr/lib/go/src/runtime/sema.go:192 +0x229 fp=0xc0004af688 sp=0xc0004af620 pc=0x5951d05a27e9
ollama[234492]: sync.runtime_SemacquireWaitGroup(0xc0000a1808?, 0xa5?)
ollama[234492]:         /usr/lib/go/src/runtime/sema.go:114 +0x2e fp=0xc0004af6c0 sp=0xc0004af688 pc=0x5951d05c4f6e
ollama[234492]: sync.(*WaitGroup).Wait(0xc0001a0298)
ollama[234492]:         /usr/lib/go/src/sync/waitgroup.go:206 +0x85 fp=0xc0004af6e8 sp=0xc0004af6c0 pc=0x5951d05d72e5
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0001a0280, {0x5951d1a2f3b0, 0xc0001a2fa0})
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:334 +0x4b fp=0xc0004af7b8 sp=0xc0004af6e8 pc=0x5951d0a3b16b
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.Execute.gowrap1()
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:926 +0x28 fp=0xc0004af7e0 sp=0xc0004af7b8 pc=0x5951d0a40268
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0004af7e8 sp=0xc0004af7e0 pc=0x5951d05cb681
ollama[234492]: created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
ollama[234492]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:926 +0x4c5
ollama[234492]: goroutine 51 gp=0xc000505180 m=nil [IO wait]:
ollama[234492]: runtime.gopark(0xc000049950?, 0x5951d064e7a5?, 0x0?, 0x32?, 0xb?)
ollama[234492]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000049938 sp=0xc000049918 pc=0x5951d05c354e
ollama[234492]: runtime.netpollblock(0x5951d05e76f8?, 0xd0558526?, 0x51?)
ollama[234492]:         /usr/lib/go/src/runtime/netpoll.go:575 +0xf7 fp=0xc000049970 sp=0xc000049938 pc=0x5951d0587137
ollama[234492]: internal/poll.runtime_pollWait(0x7414cb888200, 0x72)
ollama[234492]:         /usr/lib/go/src/runtime/netpoll.go:351 +0x85 fp=0xc000049990 sp=0xc000049970 pc=0x5951d05c2725
ollama[234492]: internal/poll.(*pollDesc).wait(0xc0004a3200?, 0xc000277000?, 0x0)
ollama[234492]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000499b8 sp=0xc000049990 pc=0x5951d064b1a7
ollama[234492]: internal/poll.(*pollDesc).waitRead(...)
ollama[234492]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89
ollama[234492]: internal/poll.(*FD).Read(0xc0004a3200, {0xc000277000, 0x1000, 0x1000})
ollama[234492]:         /usr/lib/go/src/internal/poll/fd_unix.go:165 +0x279 fp=0xc000049a50 sp=0xc0000499b8 pc=0x5951d064c499
ollama[234492]: net.(*netFD).Read(0xc0004a3200, {0xc000277000?, 0x0?, 0xc000049ac8?})
ollama[234492]:         /usr/lib/go/src/net/fd_posix.go:68 +0x25 fp=0xc000049a98 sp=0xc000049a50 pc=0x5951d06b8ba5
ollama[234492]: net.(*conn).Read(0xc00009e8a8, {0xc000277000?, 0x0?, 0x0?})
ollama[234492]:         /usr/lib/go/src/net/net.go:196 +0x45 fp=0xc000049ae0 sp=0xc000049a98 pc=0x5951d06c6bc5
ollama[234492]: net/http.(*connReader).Read(0xc000708ec0, {0xc000277000, 0x1000, 0x1000})
ollama[234492]:         /usr/lib/go/src/net/http/server.go:812 +0x154 fp=0xc000049b38 sp=0xc000049ae0 pc=0x5951d08be414
ollama[234492]: bufio.(*Reader).fill(0xc0002725a0)
ollama[234492]:         /usr/lib/go/src/bufio/bufio.go:113 +0x103 fp=0xc000049b70 sp=0xc000049b38 pc=0x5951d06de0a3
ollama[234492]: bufio.(*Reader).Peek(0xc0002725a0, 0x4)
ollama[234492]:         /usr/lib/go/src/bufio/bufio.go:152 +0x53 fp=0xc000049b90 sp=0xc000049b70 pc=0x5951d06de1d3
ollama[234492]: net/http.(*conn).serve(0xc0004f6360, {0x5951d1a2f378, 0xc00025e450})
ollama[234492]:         /usr/lib/go/src/net/http/server.go:2145 +0x7c5 fp=0xc000049fb8 sp=0xc000049b90 pc=0x5951d08c3c45
ollama[234492]: net/http.(*Server).Serve.gowrap3()
ollama[234492]:         /usr/lib/go/src/net/http/server.go:3493 +0x28 fp=0xc000049fe0 sp=0xc000049fb8 pc=0x5951d08c97a8
ollama[234492]: runtime.goexit({})
ollama[234492]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000049fe8 sp=0xc000049fe0 pc=0x5951d05cb681
ollama[234492]: created by net/http.(*Server).Serve in goroutine 1
ollama[234492]:         /usr/lib/go/src/net/http/server.go:3493 +0x485
ollama[234492]: rax    0x740cb0ce1f60
ollama[234492]: rbx    0x1
ollama[234492]: rcx    0xffffffffb0c64010
ollama[234492]: rdx    0x740cb17a1510
ollama[234492]: rdi    0x740cb0cb7920
ollama[234492]: rsi    0x0
ollama[234492]: rbp    0x741481b18ac0
ollama[234492]: rsp    0x741481b18a78
ollama[234492]: r8     0x2
ollama[234492]: r9     0x741314000030
ollama[234492]: r10    0x2
ollama[234492]: r11    0x0
ollama[234492]: r12    0x740cb03b2700
ollama[234492]: r13    0x740cb03b2700
ollama[234492]: r14    0x0
ollama[234492]: r15    0x740cb03d2720
ollama[234492]: rip    0x741470f83e6a
ollama[234492]: rflags 0x10206
ollama[234492]: cs     0x33
ollama[234492]: fs     0x0
ollama[234492]: gs     0x0
kernel: amdgpu: Freeing queue vital buffer 0x740cbe000000, queue evicted
kernel: amdgpu: Freeing queue vital buffer 0x741290a00000, queue evicted
kernel: amdgpu: Freeing queue vital buffer 0x74131bc00000, queue evicted
ollama[234492]: time=2025-11-08T21:43:01.483+01:00 level=INFO source=sched.go:453 msg="Load failed" model=/mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c error="llama runner process has terminated: exitstatus 2"
ollama[234492]: [GIN] 2025/11/08 - 21:43:01 | 500 | 22.052640684s |             ::1 | POST     "/api/chat"
<!-- gh-comment-id:3506885856 --> @binarynoise commented on GitHub (Nov 8, 2025):

As I said, I set `OLLAMA_GPU_OVERHEAD:3000000000` to get rid of the crashes. Yes, this is a custom build, but it is almost identical to upstream (I only increased the timeouts for fetching fresh memory info from the GPU, since those always timed out). Here's a crash without the GPU overhead:

```
systemd[1]: Started Ollama Service.
sudo[234484]: pam_unix(sudo:session): session closed for user root
ollama[234492]: time=2025-11-08T21:40:34.744+01:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://[::]:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:f16 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/Tilo4TB/var-lib-ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama[234492]: time=2025-11-08T21:40:34.768+01:00 level=INFO source=images.go:522 msg="total blobs: 147"
ollama[234492]: time=2025-11-08T21:40:34.771+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
ollama[234492]: time=2025-11-08T21:40:34.773+01:00 level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10.r17.g91ec3ddbeb2e)"
ollama[234492]: time=2025-11-08T21:40:34.774+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
ollama[234492]: time=2025-11-08T21:40:34.778+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39911"
ollama[234492]: time=2025-11-08T21:40:39.962+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42991"
ollama[234492]: time=2025-11-08T21:40:45.524+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-c2c6236518c28f70 filter_id="" library=ROCm compute=gfx1101 name=ROCm0 description="AMD Radeon RX 7800 XT" libdirs=ollama driver=60443.48 pci_id=0000:03:00.0 type=discrete total="16.0 GiB" available="15.8 GiB"
ollama[234492]: time=2025-11-08T21:40:45.524+01:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
kernel: amdgpu: Freeing queue vital buffer 0x734a69000000, queue evicted
ollama[234492]: time=2025-11-08T21:42:39.700+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40841"
ollama[234492]: llama_model_loader: loaded meta data with 56 key-value pairs and 561 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c (version GGUF V3 (latest))
ollama[234492]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[234492]: llama_model_loader: - kv   0: general.architecture str = llama
ollama[234492]: llama_model_loader: - kv   1: general.type str = model
ollama[234492]: llama_model_loader: - kv   2: general.name str = Mistral Magistral Devstral Instruct F...
ollama[234492]: llama_model_loader: - kv   3: general.finetune str = Instruct-FUSED-CODER-Reasoning
ollama[234492]: llama_model_loader: - kv   4: general.basename str = Mistral-Magistral-Devstral
ollama[234492]: llama_model_loader: - kv   5: general.size_label str = 36B
ollama[234492]: llama_model_loader: - kv   6: general.license str = apache-2.0
ollama[234492]: llama_model_loader: - kv   7: general.base_model.count u32 = 2
ollama[234492]: llama_model_loader: - kv   8: general.base_model.0.name str = Devstral Small 2507
ollama[234492]: llama_model_loader: - kv   9: general.base_model.0.version str = 2507
ollama[234492]: llama_model_loader: - kv  10: general.base_model.0.organization str = Mistralai
ollama[234492]: llama_model_loader: - kv  11: general.base_model.0.repo_url str = https://huggingface.co/mistralai/Devs...
ollama[234492]: llama_model_loader: - kv  12: general.base_model.1.name str = Magistral Small 2506
ollama[234492]: llama_model_loader: - kv  13: general.base_model.1.version str = 2506
ollama[234492]: llama_model_loader: - kv  14: general.base_model.1.organization str = Mistralai
ollama[234492]: llama_model_loader: - kv  15: general.base_model.1.repo_url str = https://huggingface.co/mistralai/Magi...
ollama[234492]: llama_model_loader: - kv  16: general.tags arr[str,14] = ["merge", "programming", "code genera...
ollama[234492]: llama_model_loader: - kv  17: general.languages arr[str,24] = ["en", "fr", "de", "es", "pt", "it", ...
ollama[234492]: llama_model_loader: - kv  18: llama.block_count u32 = 62
ollama[234492]: llama_model_loader: - kv  19: llama.context_length u32 = 131072
ollama[234492]: llama_model_loader: - kv  20: llama.embedding_length u32 = 5120
ollama[234492]: llama_model_loader: - kv  21: llama.feed_forward_length u32 = 32768
ollama[234492]: llama_model_loader: - kv  22: llama.attention.head_count u32 = 32
ollama[234492]: llama_model_loader: - kv  23: llama.attention.head_count_kv u32 = 8
ollama[234492]: llama_model_loader: - kv  24: llama.rope.freq_base f32 = 1000000000.000000
ollama[234492]: llama_model_loader: - kv  25: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
ollama[234492]: llama_model_loader: - kv  26: llama.attention.key_length u32 = 128
ollama[234492]: llama_model_loader: - kv  27: llama.attention.value_length u32 = 128
ollama[234492]: llama_model_loader: - kv  28: llama.vocab_size u32 = 131072
ollama[234492]: llama_model_loader: - kv  29: llama.rope.dimension_count u32 = 128
ollama[234492]: llama_model_loader: - kv  30: tokenizer.ggml.model str = gpt2
ollama[234492]: llama_model_loader: - kv  31: tokenizer.ggml.pre str = tekken
ollama[234492]: llama_model_loader: - kv  32: tokenizer.ggml.tokens arr[str,131072] = ["<unk>", "<s>", "</s>", "[INST]", "[...
ollama[234492]: llama_model_loader: - kv  33: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
ollama[234492]: [132B blob data]
ollama[234492]: llama_model_loader: - kv  35: tokenizer.ggml.bos_token_id u32 = 1
ollama[234492]: llama_model_loader: - kv  36: tokenizer.ggml.eos_token_id u32 = 2
ollama[234492]: llama_model_loader: - kv  37: tokenizer.ggml.unknown_token_id u32 = 0
ollama[234492]: llama_model_loader: - kv  38: tokenizer.ggml.add_bos_token bool = true
ollama[234492]: llama_model_loader: - kv  39: tokenizer.ggml.add_sep_token bool = false
ollama[234492]: llama_model_loader: - kv  40: tokenizer.ggml.add_eos_token bool = false
ollama[234492]: llama_model_loader: - kv  41: tokenizer.chat_template str = {%- set today = strftime_now("%Y-%m-%...
ollama[234492]: llama_model_loader: - kv  42: tokenizer.ggml.add_space_prefix bool = false
ollama[234492]: llama_model_loader: - kv  43: general.quantization_version u32 = 2
ollama[234492]: llama_model_loader: - kv  44: general.file_type u32 = 23
ollama[234492]: llama_model_loader: - kv  45: general.url str = https://huggingface.co/mradermacher/M...
ollama[234492]: llama_model_loader: - kv  46: mradermacher.quantize_version str = 2
ollama[234492]: llama_model_loader: - kv  47: mradermacher.quantized_by str = mradermacher
ollama[234492]: llama_model_loader: - kv  48: mradermacher.quantized_at str = 2025-07-30T12:53:06+02:00
ollama[234492]: llama_model_loader: - kv  49: mradermacher.quantized_on str = rich1
ollama[234492]: llama_model_loader: - kv  50: general.source.url str = https://huggingface.co/DavidAU/Mistra...
ollama[234492]: llama_model_loader: - kv  51: mradermacher.convert_type str = hf
ollama[234492]: llama_model_loader: - kv  52: quantize.imatrix.file str = Mistral-Magistral-Devstral-Instruct-F...
ollama[234492]: llama_model_loader: - kv  53: quantize.imatrix.dataset str = imatrix-training-full-3
ollama[234492]: llama_model_loader: - kv  54: quantize.imatrix.entries_count u32 = 434
ollama[234492]: llama_model_loader: - kv  55: quantize.imatrix.chunks_count u32 = 321
ollama[234492]: llama_model_loader: - type  f32: 125 tensors
ollama[234492]: llama_model_loader: - type q4_K: 62 tensors
ollama[234492]: llama_model_loader: - type q5_K: 1 tensors
ollama[234492]: llama_model_loader: - type iq3_xxs: 186 tensors
ollama[234492]: llama_model_loader: - type iq3_s: 63 tensors
ollama[234492]: llama_model_loader: - type iq2_s: 124 tensors
ollama[234492]: print_info: file format = GGUF V3 (latest)
ollama[234492]: print_info: file type = IQ3_XXS - 3.0625 bpw
ollama[234492]: print_info: file size = 13.00 GiB (3.12 BPW)
ollama[234492]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[234492]: load: printing all EOG tokens:
ollama[234492]: load:   - 2 ('</s>')
ollama[234492]: load: special tokens cache size = 1000
ollama[234492]: load: token to piece cache size = 0.8498 MB
ollama[234492]: print_info: arch = llama
ollama[234492]: print_info: vocab_only = 1
ollama[234492]: print_info: model type = ?B
ollama[234492]: print_info: model params = 35.80 B
ollama[234492]: print_info: general.name = Mistral Magistral Devstral Instruct FUSED CODER Reasoning 36B
ollama[234492]: print_info: vocab type = BPE
ollama[234492]: print_info: n_vocab = 131072
ollama[234492]: print_info: n_merges = 269443
ollama[234492]: print_info: BOS token = 1 '<s>'
ollama[234492]: print_info: EOS token = 2 '</s>'
ollama[234492]: print_info: UNK token = 0 '<unk>'
ollama[234492]: print_info: LF token = 1010 'Ċ'
ollama[234492]: print_info: EOG token = 2 '</s>'
ollama[234492]: print_info: max token length = 150
ollama[234492]: llama_model_load: vocab only - skipping tensors
ollama[234492]: time=2025-11-08T21:42:45.008+01:00 level=INFO source=server.go:215 msg="enabling flash
attention"
ollama[234492]: time=2025-11-08T21:42:45.009+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --model /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c --port 42615"
ollama[234492]: time=2025-11-08T21:42:45.009+01:00 level=INFO source=server.go:470 msg="system memory" total="62.5 GiB" free="43.2 GiB" free_swap="92.3 GiB"
ollama[234492]: time=2025-11-08T21:42:45.010+01:00 level=INFO source=server.go:522 msg=offload library=ROCm layers.requested=-1 layers.model=63 layers.offload=62 layers.split=[62] memory.available="[15.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.1 GiB" memory.required.partial="15.7 GiB" memory.required.kv="1.9 GiB" memory.required.allocations="[15.7 GiB]" memory.weights.total="12.7 GiB" memory.weights.repeating="12.3 GiB" memory.weights.nonrepeating="440.0 MiB" memory.graph.full="568.0 MiB" memory.graph.partial="801.0 MiB"
ollama[234492]: time=2025-11-08T21:42:45.018+01:00 level=INFO source=runner.go:910 msg="starting go runner"
ollama[234492]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ollama[234492]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ollama[234492]: ggml_cuda_init: found 1 ROCm devices:
ollama[234492]: Device 0: AMD Radeon RX 7800 XT, gfx1101 (0x1101), VMM: no, Wave Size: 32, ID: GPU-c2c6236518c28f70
ollama[234492]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
ollama[234492]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
ollama[234492]: time=2025-11-08T21:42:50.441+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.SSE3=1 CPU.1.SSSE3=1 CPU.1.AVX=1 CPU.1.AVX_VNNI=1 CPU.1.AVX2=1 CPU.1.F16C=1 CPU.1.FMA=1 CPU.1.BMI2=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
ollama[234492]: time=2025-11-08T21:42:50.441+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:42615"
ollama[234492]: time=2025-11-08T21:42:50.444+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType:f16 NumThreads:16 GPULayers:62[ID:GPU-c2c6236518c28f70 Layers:62(0..61)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
ollama[234492]: time=2025-11-08T21:42:50.444+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
ollama[234492]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7800 XT) (0000:03:00.0) - 16154 MiB free
ollama[234492]: time=2025-11-08T21:42:50.444+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ollama[234492]: llama_model_loader: loaded meta data with 56 key-value pairs and 561 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-aa5d855a59f87e145069339227aa5d4e58f2d4a2463816db035678d7914eb97c (version GGUF V3 (latest))
ollama[234492]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[234492]: llama_model_loader: - kv 0: general.architecture str = llama
ollama[234492]: llama_model_loader: - kv 1: general.type str = model
ollama[234492]: llama_model_loader: - kv 2: general.name str = Mistral Magistral Devstral Instruct F...
ollama[234492]: llama_model_loader: - kv 3: general.finetune str = Instruct-FUSED-CODER-Reasoning ollama[234492]: llama_model_loader: - kv 4: general.basename str = Mistral-Magistral-Devstral ollama[234492]: llama_model_loader: - kv 5: general.size_label str = 36B ollama[234492]: llama_model_loader: - kv 6: general.license str = apache-2.0 ollama[234492]: llama_model_loader: - kv 7: general.base_model.count u32 = 2 ollama[234492]: llama_model_loader: - kv 8: general.base_model.0.name str = Devstral Small 2507 ollama[234492]: llama_model_loader: - kv 9: general.base_model.0.version str = 2507 ollama[234492]: llama_model_loader: - kv 10: general.base_model.0.organization str = Mistralai ollama[234492]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/mistralai/Devs... ollama[234492]: llama_model_loader: - kv 12: general.base_model.1.name str = Magistral Small 2506 ollama[234492]: llama_model_loader: - kv 13: general.base_model.1.version str = 2506 ollama[234492]: llama_model_loader: - kv 14: general.base_model.1.organization str = Mistralai ollama[234492]: llama_model_loader: - kv 15: general.base_model.1.repo_url str = https://huggingface.co/mistralai/Magi... ollama[234492]: llama_model_loader: - kv 16: general.tags arr[str,14] = ["merge", "programming", "code genera... ollama[234492]: llama_model_loader: - kv 17: general.languages arr[str,24] = ["en", "fr", "de", "es", "pt", "it", ... 
ollama[234492]: llama_model_loader: - kv 18: llama.block_count u32 = 62 ollama[234492]: llama_model_loader: - kv 19: llama.context_length u32 = 131072 ollama[234492]: llama_model_loader: - kv 20: llama.embedding_length u32 = 5120 ollama[234492]: llama_model_loader: - kv 21: llama.feed_forward_length u32 = 32768 ollama[234492]: llama_model_loader: - kv 22: llama.attention.head_count u32 = 32 ollama[234492]: llama_model_loader: - kv 23: llama.attention.head_count_kv u32 = 8 ollama[234492]: llama_model_loader: - kv 24: llama.rope.freq_base f32 = 1000000000.000000 ollama[234492]: llama_model_loader: - kv 25: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 ollama[234492]: llama_model_loader: - kv 26: llama.attention.key_length u32 = 128 ollama[234492]: llama_model_loader: - kv 27: llama.attention.value_length u32 = 128 ollama[234492]: llama_model_loader: - kv 28: llama.vocab_size u32 = 131072 ollama[234492]: llama_model_loader: - kv 29: llama.rope.dimension_count u32 = 128 ollama[234492]: llama_model_loader: - kv 30: tokenizer.ggml.model str = gpt2 ollama[234492]: llama_model_loader: - kv 31: tokenizer.ggml.pre str = tekken ollama[234492]: llama_model_loader: - kv 32: tokenizer.ggml.tokens arr[str,131072] = ["<unk>", "<s>", "</s>", "[INST]", "[... ollama[234492]: llama_model_loader: - kv 33: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... 
ollama[234492]: [132B blob data] ollama[234492]: llama_model_loader: - kv 35: tokenizer.ggml.bos_token_id u32 = 1 ollama[234492]: llama_model_loader: - kv 36: tokenizer.ggml.eos_token_id u32 = 2 ollama[234492]: llama_model_loader: - kv 37: tokenizer.ggml.unknown_token_id u32 = 0 ollama[234492]: llama_model_loader: - kv 38: tokenizer.ggml.add_bos_token bool = true ollama[234492]: llama_model_loader: - kv 39: tokenizer.ggml.add_sep_token bool = false ollama[234492]: llama_model_loader: - kv 40: tokenizer.ggml.add_eos_token bool = false ollama[234492]: llama_model_loader: - kv 41: tokenizer.chat_template str = {%- set today = strftime_now("%Y-%m-%... ollama[234492]: llama_model_loader: - kv 42: tokenizer.ggml.add_space_prefix bool = false ollama[234492]: llama_model_loader: - kv 43: general.quantization_version u32 = 2 ollama[234492]: llama_model_loader: - kv 44: general.file_type u32 = 23 ollama[234492]: llama_model_loader: - kv 45: general.url str = https://huggingface.co/mradermacher/M... ollama[234492]: llama_model_loader: - kv 46: mradermacher.quantize_version str = 2 ollama[234492]: llama_model_loader: - kv 47: mradermacher.quantized_by str = mradermacher ollama[234492]: llama_model_loader: - kv 48: mradermacher.quantized_at str = 2025-07-30T12:53:06+02:00 ollama[234492]: llama_model_loader: - kv 49: mradermacher.quantized_on str = rich1 ollama[234492]: llama_model_loader: - kv 50: general.source.url str = https://huggingface.co/DavidAU/Mistra... ollama[234492]: llama_model_loader: - kv 51: mradermacher.convert_type str = hf ollama[234492]: llama_model_loader: - kv 52: quantize.imatrix.file str = Mistral-Magistral-Devstral-Instruct-F... 
ollama[234492]: llama_model_loader: - kv 53: quantize.imatrix.dataset str = imatrix-training-full-3 ollama[234492]: llama_model_loader: - kv 54: quantize.imatrix.entries_count u32 = 434 ollama[234492]: llama_model_loader: - kv 55: quantize.imatrix.chunks_count u32 = 321 ollama[234492]: llama_model_loader: - type f32: 125 tensors ollama[234492]: llama_model_loader: - type q4_K: 62 tensors ollama[234492]: llama_model_loader: - type q5_K: 1 tensors ollama[234492]: llama_model_loader: - type iq3_xxs: 186 tensors ollama[234492]: llama_model_loader: - type iq3_s: 63 tensors ollama[234492]: llama_model_loader: - type iq2_s: 124 tensors ollama[234492]: print_info: file format = GGUF V3 (latest) ollama[234492]: print_info: file type = IQ3_XXS - 3.0625 bpw ollama[234492]: print_info: file size = 13.00 GiB (3.12 BPW) ollama[234492]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect ollama[234492]: load: printing all EOG tokens: ollama[234492]: load: - 2 ('</s>') ollama[234492]: load: special tokens cache size = 1000 ollama[234492]: load: token to piece cache size = 0.8498 MB ollama[234492]: print_info: arch = llama ollama[234492]: print_info: vocab_only = 0 ollama[234492]: print_info: n_ctx_train = 131072 ollama[234492]: print_info: n_embd = 5120 ollama[234492]: print_info: n_layer = 62 ollama[234492]: print_info: n_head = 32 ollama[234492]: print_info: n_head_kv = 8 ollama[234492]: print_info: n_rot = 128 ollama[234492]: print_info: n_swa = 0 ollama[234492]: print_info: is_swa_any = 0 ollama[234492]: print_info: n_embd_head_k = 128 ollama[234492]: print_info: n_embd_head_v = 128 ollama[234492]: print_info: n_gqa = 4 ollama[234492]: print_info: n_embd_k_gqa = 1024 ollama[234492]: print_info: n_embd_v_gqa = 1024 ollama[234492]: print_info: f_norm_eps = 0.0e+00 ollama[234492]: print_info: f_norm_rms_eps = 1.0e-05 ollama[234492]: print_info: f_clamp_kqv = 0.0e+00 ollama[234492]: print_info: f_max_alibi_bias = 0.0e+00 ollama[234492]: 
print_info: f_logit_scale = 0.0e+00 ollama[234492]: print_info: f_attn_scale = 0.0e+00 ollama[234492]: print_info: n_ff = 32768 ollama[234492]: print_info: n_expert = 0 ollama[234492]: print_info: n_expert_used = 0 ollama[234492]: print_info: causal attn = 1 ollama[234492]: print_info: pooling type = 0 ollama[234492]: print_info: rope type = 0 ollama[234492]: print_info: rope scaling = linear ollama[234492]: print_info: freq_base_train = 1000000000.0 ollama[234492]: print_info: freq_scale_train = 1 ollama[234492]: print_info: n_ctx_orig_yarn = 131072 ollama[234492]: print_info: rope_finetuned = unknown ollama[234492]: print_info: model type = ?B ollama[234492]: print_info: model params = 35.80 B ollama[234492]: print_info: general.name = Mistral Magistral Devstral Instruct FUSED CODER Reasoning 36B ollama[234492]: print_info: vocab type = BPE ollama[234492]: print_info: n_vocab = 131072 ollama[234492]: print_info: n_merges = 269443 ollama[234492]: print_info: BOS token = 1 '<s>' ollama[234492]: print_info: EOS token = 2 '</s>' ollama[234492]: print_info: UNK token = 0 '<unk>' ollama[234492]: print_info: LF token = 1010 'Ċ' ollama[234492]: print_info: EOG token = 2 '</s>' ollama[234492]: print_info: max token length = 150 ollama[234492]: load_tensors: loading model tensors, this can take a while... 
(mmap = true)
ollama[234492]: load_tensors: offloading 62 repeating layers to GPU
ollama[234492]: load_tensors: offloaded 62/63 layers to GPU
ollama[234492]: load_tensors: ROCm0 model buffer size = 12598.59 MiB
ollama[234492]: load_tensors: CPU_Mapped model buffer size = 715.02 MiB
ollama[234492]: llama_context: constructing llama_context
ollama[234492]: llama_context: n_seq_max = 2
ollama[234492]: llama_context: n_ctx = 8192
ollama[234492]: llama_context: n_ctx_per_seq = 4096
ollama[234492]: llama_context: n_batch = 1024
ollama[234492]: llama_context: n_ubatch = 512
ollama[234492]: llama_context: causal_attn = 1
ollama[234492]: llama_context: flash_attn = enabled
ollama[234492]: llama_context: kv_unified = false
ollama[234492]: llama_context: freq_base = 1000000000.0
ollama[234492]: llama_context: freq_scale = 1
ollama[234492]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ollama[234492]: llama_context: CPU output buffer size = 1.04 MiB
ollama[234492]: llama_kv_cache: ROCm0 KV buffer size = 1984.00 MiB
ollama[234492]: llama_kv_cache: size = 1984.00 MiB ( 4096 cells, 62 layers, 2/2 seqs), K (f16): 992.00 MiB, V (f16): 992.00 MiB
kernel: amdgpu 0000:03:00.0: amdgpu: 00000000b0adae43 pin failed
kernel: [drm:amdgpu_dm_plane_helper_prepare_fb [amdgpu]] *ERROR* Failed to pin framebuffer with error -12
ollama[234492]: graph_reserve: failed to allocate compute buffers
ollama[234492]: SIGSEGV: segmentation violation
ollama[234492]: PC=0x741470f83e6a m=8 sigcode=1 addr=0x740a37002498
ollama[234492]: signal arrived during cgo execution
ollama[234492]: goroutine 53 gp=0xc000505340 m=8 mp=0xc000349808 [syscall]:
ollama[234492]: runtime.cgocall(0x5951d12d2100, 0xc0000afbf8)
ollama[234492]: /usr/lib/go/src/runtime/cgocall.go:167 +0x4b fp=0xc0000afbd0 sp=0xc0000afb98 pc=0x5951d05c00cb
ollama[234492]: github.com/ollama/ollama/llama._Cfunc_llama_init_from_model(0x741314000ce0, {0x2000, 0x400, 0x200, 0x2, 0x10,
0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...})
ollama[234492]: _cgo_gotypes.go:753 +0x4e fp=0xc0000afbf8 sp=0xc0000afbd0 pc=0x5951d097c46e
ollama[234492]: github.com/ollama/ollama/llama.NewContextWithModel.func1(...)
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280
ollama[234492]: github.com/ollama/ollama/llama.NewContextWithModel(0xc0001ffe18, {{0x2000, 0x400, 0x200, 0x2, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...}})
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280 +0x158 fp=0xc0000afd98 sp=0xc0000afbf8 pc=0x5951d0980238
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0001a0280, {0x3e, 0x0, 0x1, {0xc0001ffb84, 0x1, 0x1}, 0xc000705b00, 0x0}, {0x7ffdc9e7ab7f, ...}, ...)
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:797 +0x198 fp=0xc0000afee0 sp=0xc0000afd98 pc=0x5951d0a3e598
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.(*Server).load.gowrap2()
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x175 fp=0xc0000affe0 sp=0xc0000afee0 pc=0x5951d0a3f635
ollama[234492]: runtime.goexit({})
ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000affe8 sp=0xc0000affe0 pc=0x5951d05cb681
ollama[234492]: created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 51
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x7ce
ollama[234492]: goroutine 1 gp=0xc000002380 m=nil [IO wait]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00050f790 sp=0xc00050f770 pc=0x5951d05c354e
ollama[234492]: runtime.netpollblock(0xc00050f7e0?, 0xd0558526?, 0x51?)
ollama[234492]: /usr/lib/go/src/runtime/netpoll.go:575 +0xf7 fp=0xc00050f7c8 sp=0xc00050f790 pc=0x5951d0587137
ollama[234492]: internal/poll.runtime_pollWait(0x7414cb888400, 0x72)
ollama[234492]: /usr/lib/go/src/runtime/netpoll.go:351 +0x85 fp=0xc00050f7e8 sp=0xc00050f7c8 pc=0x5951d05c2725
ollama[234492]: internal/poll.(*pollDesc).wait(0xc0001fcb00?, 0x900000036?, 0x0)
ollama[234492]: /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00050f810 sp=0xc00050f7e8 pc=0x5951d064b1a7
ollama[234492]: internal/poll.(*pollDesc).waitRead(...)
ollama[234492]: /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89
ollama[234492]: internal/poll.(*FD).Accept(0xc0001fcb00)
ollama[234492]: /usr/lib/go/src/internal/poll/fd_unix.go:613 +0x28c fp=0xc00050f8b8 sp=0xc00050f810 pc=0x5951d06505cc
ollama[234492]: net.(*netFD).accept(0xc0001fcb00)
ollama[234492]: /usr/lib/go/src/net/fd_unix.go:161 +0x29 fp=0xc00050f970 sp=0xc00050f8b8 pc=0x5951d06baa49
ollama[234492]: net.(*TCPListener).accept(0xc0000c9840)
ollama[234492]: /usr/lib/go/src/net/tcpsock_posix.go:159 +0x1b fp=0xc00050f9c0 sp=0xc00050f970 pc=0x5951d06d017b
ollama[234492]: net.(*TCPListener).Accept(0xc0000c9840)
ollama[234492]: /usr/lib/go/src/net/tcpsock.go:380 +0x30 fp=0xc00050f9f0 sp=0xc00050f9c0 pc=0x5951d06cf010
ollama[234492]: net/http.(*onceCloseListener).Accept(0xc0004f6360?)
ollama[234492]: <autogenerated>:1 +0x24 fp=0xc00050fa08 sp=0xc00050f9f0 pc=0x5951d08f19c4
ollama[234492]: net/http.(*Server).Serve(0xc000260200, {0x5951d1a2cda8, 0xc0000c9840})
ollama[234492]: /usr/lib/go/src/net/http/server.go:3463 +0x30c fp=0xc00050fb38 sp=0xc00050fa08 pc=0x5951d08c93ac
ollama[234492]: github.com/ollama/ollama/runner/llamarunner.Execute({0xc000036260, 0x4, 0x4})
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:947 +0x8f4 fp=0xc00050fd08 sp=0xc00050fb38 pc=0x5951d0a3fff4
ollama[234492]: github.com/ollama/ollama/runner.Execute({0xc000036250?, 0x0?, 0x0?})
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/runner.go:22 +0xd4 fp=0xc00050fd30 sp=0xc00050fd08 pc=0x5951d0ae0e54
ollama[234492]: github.com/ollama/ollama/cmd.NewCLI.func2(0xc000223100?, {0x5951d15532eb?, 0x4?, 0x5951d15532ef?})
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/cmd/cmd.go:1841 +0x45 fp=0xc00050fd58 sp=0xc00050fd30 pc=0x5951d1263085
ollama[234492]: github.com/spf13/cobra.(*Command).execute(0xc0004f9508, {0xc000708d80, 0x4, 0x4})
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x88a fp=0xc00050fe78 sp=0xc00050fd58 pc=0x5951d073420a
ollama[234492]: github.com/spf13/cobra.(*Command).ExecuteC(0xc0004de908)
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x398 fp=0xc00050ff30 sp=0xc00050fe78 pc=0x5951d0734a38
ollama[234492]: github.com/spf13/cobra.(*Command).Execute(...)
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
ollama[234492]: github.com/spf13/cobra.(*Command).ExecuteContext(...)
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
ollama[234492]: main.main()
ollama[234492]: /tmp/makepkg/ollama-rocm-git/src/ollama/main.go:12 +0x4d fp=0xc00050ff50 sp=0xc00050ff30 pc=0x5951d1263b6d
ollama[234492]: runtime.main()
ollama[234492]: /usr/lib/go/src/runtime/proc.go:285 +0x29d fp=0xc00050ffe0 sp=0xc00050ff50 pc=0x5951d058e9dd
ollama[234492]: runtime.goexit({})
ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00050ffe8 sp=0xc00050ffe0 pc=0x5951d05cb681
ollama[234492]: goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009afa8 sp=0xc00009af88 pc=0x5951d05c354e
ollama[234492]: runtime.goparkunlock(...)
ollama[234492]: /usr/lib/go/src/runtime/proc.go:466
ollama[234492]: runtime.forcegchelper()
ollama[234492]: /usr/lib/go/src/runtime/proc.go:373 +0xb8 fp=0xc00009afe0 sp=0xc00009afa8 pc=0x5951d058ed18
ollama[234492]: runtime.goexit({})
ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009afe8 sp=0xc00009afe0 pc=0x5951d05cb681
ollama[234492]: created by runtime.init.7 in goroutine 1
ollama[234492]: /usr/lib/go/src/runtime/proc.go:361 +0x1a
ollama[234492]: goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
ollama[234492]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009b780 sp=0xc00009b760 pc=0x5951d05c354e
ollama[234492]: runtime.goparkunlock(...)
ollama[234492]: /usr/lib/go/src/runtime/proc.go:466 ollama[234492]: runtime.bgsweep(0xc0000c6000) ollama[234492]: /usr/lib/go/src/runtime/mgcsweep.go:323 +0xdf fp=0xc00009b7c8 sp=0xc00009b780 pc=0x5951d0578a3f ollama[234492]: runtime.gcenable.gowrap1() ollama[234492]: /usr/lib/go/src/runtime/mgc.go:212 +0x25 fp=0xc00009b7e0 sp=0xc00009b7c8 pc=0x5951d056c9c5 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009b7e8 sp=0xc00009b7e0 pc=0x5951d05cb681 ollama[234492]: created by runtime.gcenable in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mgc.go:212 +0x66 ollama[234492]: goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]: ollama[234492]: runtime.gopark(0x10000?, 0x5951d171b4a8?, 0x0?, 0x0?, 0x0?) ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009bf78 sp=0xc00009bf58 pc=0x5951d05c354e ollama[234492]: runtime.goparkunlock(...) ollama[234492]: /usr/lib/go/src/runtime/proc.go:466 ollama[234492]: runtime.(*scavengerState).park(0x5951d22fdf20) ollama[234492]: /usr/lib/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00009bfa8 sp=0xc00009bf78 pc=0x5951d05764a9 ollama[234492]: runtime.bgscavenge(0xc0000c6000) ollama[234492]: /usr/lib/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00009bfc8 sp=0xc00009bfa8 pc=0x5951d0576a59 ollama[234492]: runtime.gcenable.gowrap2() ollama[234492]: /usr/lib/go/src/runtime/mgc.go:213 +0x25 fp=0xc00009bfe0 sp=0xc00009bfc8 pc=0x5951d056c965 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009bfe8 sp=0xc00009bfe0 pc=0x5951d05cb681 ollama[234492]: created by runtime.gcenable in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mgc.go:213 +0xa5 ollama[234492]: goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]: ollama[234492]: runtime.gopark(0x5951d059dd17?, 0x5951d05642e5?, 0xb8?, 0x1?, 0xc000002380?) 
ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009a620 sp=0xc00009a600 pc=0x5951d05c354e ollama[234492]: runtime.runFinalizers() ollama[234492]: /usr/lib/go/src/runtime/mfinal.go:210 +0x107 fp=0xc00009a7e0 sp=0xc00009a620 pc=0x5951d056b8c7 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009a7e8 sp=0xc00009a7e0 pc=0x5951d05cb681 ollama[234492]: created by runtime.createfing in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mfinal.go:172 +0x3d ollama[234492]: goroutine 6 gp=0xc0002008c0 m=nil [cleanup wait]: ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009c768 sp=0xc00009c748 pc=0x5951d05c354e ollama[234492]: runtime.goparkunlock(...) ollama[234492]: /usr/lib/go/src/runtime/proc.go:466 ollama[234492]: runtime.(*cleanupQueue).dequeue(0x5951d22fe880) ollama[234492]: /usr/lib/go/src/runtime/mcleanup.go:439 +0xc5 fp=0xc00009c7a0 sp=0xc00009c768 pc=0x5951d0568aa5 ollama[234492]: runtime.runCleanups() ollama[234492]: /usr/lib/go/src/runtime/mcleanup.go:635 +0x45 fp=0xc00009c7e0 sp=0xc00009c7a0 pc=0x5951d0569165 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009c7e8 sp=0xc00009c7e0 pc=0x5951d05cb681 ollama[234492]: created by runtime.(*cleanupQueue).createGs in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mcleanup.go:589 +0xa5 ollama[234492]: goroutine 7 gp=0xc000200c40 m=nil [GC worker (idle)]: ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009cf38 sp=0xc00009cf18 pc=0x5951d05c354e ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009cfc8 sp=0xc00009cf38 pc=0x5951d056f0eb ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009cfe0 sp=0xc00009cfc8 pc=0x5951d056efc5 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009cfe8 sp=0xc00009cfe0 pc=0x5951d05cb681 ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[234492]: goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]: ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000096738 sp=0xc000096718 pc=0x5951d05c354e ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000967c8 sp=0xc000096738 pc=0x5951d056f0eb ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000967e0 sp=0xc0000967c8 pc=0x5951d056efc5 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000967e8 sp=0xc0000967e0 pc=0x5951d05cb681 ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[234492]: goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]: ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000118738 sp=0xc000118718 pc=0x5951d05c354e ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0001187c8 sp=0xc000118738 pc=0x5951d056f0eb ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0001187e0 sp=0xc0001187c8 pc=0x5951d056efc5 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x5951d05cb681 ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[234492]: goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]: ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) ollama[234492]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000118f38 sp=0xc000118f18 pc=0x5951d05c354e ollama[234492]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000118fc8 sp=0xc000118f38 pc=0x5951d056f0eb ollama[234492]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000118fe0 sp=0xc000118fc8 pc=0x5951d056efc5 ollama[234492]: runtime.goexit({}) ollama[234492]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000118fe8 sp=0xc000118fe0 pc=0x5951d05cb681 ollama[234492]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[234492]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[234492]: goroutine 36 gp=0xc000102700 m=nil [GC worker (idle)]: ollama[234492]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 

@rick-github commented on GitHub (Nov 8, 2025):

What version of the Linux kernel are you running? `uname -a`

<!-- gh-comment-id:3506904569 -->

@binarynoise commented on GitHub (Nov 8, 2025):

`Linux 6.12.57-1-lts #1 SMP PREEMPT_DYNAMIC Mon, 03 Nov 2025 14:27:55 +0000 x86_64 GNU/Linux`

<!-- gh-comment-id:3506934058 -->

@binarynoise commented on GitHub (Nov 9, 2025):

It seems that for some models, even 3GB of overhead is not enough and they still run out of VRAM. As I thought, this workaround does not really help solve the underlying problem.
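For anyone reproducing the workaround: the extra overhead described above corresponds to Ollama's `OLLAMA_GPU_OVERHEAD` environment variable, which reserves a fixed number of bytes of VRAM per GPU that the scheduler then treats as unavailable. As a sketch, assuming a systemd-managed install (the drop-in path below is illustrative), a 3 GiB reservation looks like:

```ini
# /etc/systemd/system/ollama.service.d/overhead.conf (hypothetical path)
# Reserve 3 GiB of VRAM per GPU so the scheduler plans around memory
# already held by the desktop environment and browser.
# 3 GiB = 3 * 1024^3 = 3221225472 bytes
[Service]
Environment="OLLAMA_GPU_OVERHEAD=3221225472"
```

followed by `systemctl daemon-reload && systemctl restart ollama`. As the comment above notes, this only shrinks the budget by a fixed amount; it does not make the availability estimate itself correct.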

<!-- gh-comment-id:3508791228 -->

@rick-github commented on GitHub (Nov 9, 2025):

kernel: amdgpu: Freeing queue vital buffer 0x734a69000000, queue evicted
kernel: amdgpu 0000:03:00.0: amdgpu: 00000000b0adae43 pin failed
kernel: [drm:amdgpu_dm_plane_helper_prepare_fb [amdgpu]] *ERROR* Failed to pin framebuffer with error -12

The problem is not a shortage of memory per se; the amdgpu driver and the kernel are having issues. This bug seems to have been introduced in the 6.x Linux kernel series, and there have apparently been a few patches to deal with it, but I haven't found a definitive answer as to whether it's been fixed. Is it feasible for you to try an older kernel? My ROCm-based machines are running 6.11.0-29-generic and haven't seen this issue - other issues, but not this one.
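A small aside on the kernel message above: the `-12` in "Failed to pin framebuffer with error -12" is a negated errno value, and errno 12 decodes to `ENOMEM`. That can be checked from Python's errno tables:

```python
import errno
import os

# Kernel drivers report failures as negated errno values,
# so "error -12" corresponds to errno 12.
code = 12
print(errno.errorcode[code])  # symbolic name of errno 12
print(os.strerror(code))      # human-readable description
```

So the driver is failing an allocation/pin request, which matches the report's symptom of VRAM being oversubscribed, whatever the root cause in the kernel turns out to be.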

<!-- gh-comment-id:3508940642 -->

@binarynoise commented on GitHub (Nov 10, 2025):

I temporarily downgraded my kernel to `Linux 6.11.0-arch1-1 #1 SMP PREEMPT_DYNAMIC Sun, 15 Sep 2024 18:38:36 +0000 x86_64 GNU/Linux`; the amdgpu warnings went away, but the crashes stayed.

systemd[1]: Started Ollama Service.
sudo[30433]: pam_unix(sudo:session): session closed for user root
ollama[30444]: time=2025-11-10T13:30:01.798+01:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://[::]:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:f16 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/Tilo4TB/var-lib-ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama[30444]: time=2025-11-10T13:30:01.823+01:00 level=INFO source=images.go:522 msg="total blobs: 147"
ollama[30444]: time=2025-11-10T13:30:01.827+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
ollama[30444]: time=2025-11-10T13:30:01.829+01:00 level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10.r17.g91ec3ddbeb2e)"
ollama[30444]: time=2025-11-10T13:30:01.829+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
ollama[30444]: time=2025-11-10T13:30:01.832+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 32935"
ollama[30444]: time=2025-11-10T13:30:06.893+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35805"
ollama[30444]: time=2025-11-10T13:30:13.039+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-c2c6236518c28f70 filter_id="" library=ROCm compute=gfx1101 name=ROCm0 description="AMD Radeon RX 7800 XT" libdirs=ollama driver=60443.48 pci_id=0000:03:00.0 type=discrete total="16.0 GiB" available="15.8 GiB"
ollama[30444]: time=2025-11-10T13:30:13.039+01:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
ollama[30444]: time=2025-11-10T13:30:13.380+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44093"
ollama[30444]: llama_model_loader: loaded meta data with 27 key-value pairs and 322 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 (version GGUF V3 (latest))
ollama[30444]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[30444]: llama_model_loader: - kv   0:                       general.architecture str              = command-r
ollama[30444]: llama_model_loader: - kv   1:                               general.name str              = aya-23-35B
ollama[30444]: llama_model_loader: - kv   2:                      command-r.block_count u32              = 40
ollama[30444]: llama_model_loader: - kv   3:                   command-r.context_length u32              = 8192
ollama[30444]: llama_model_loader: - kv   4:                 command-r.embedding_length u32              = 8192
ollama[30444]: llama_model_loader: - kv   5:              command-r.feed_forward_length u32              = 22528
ollama[30444]: llama_model_loader: - kv   6:             command-r.attention.head_count u32              = 64
ollama[30444]: llama_model_loader: - kv   7:          command-r.attention.head_count_kv u32              = 64
ollama[30444]: llama_model_loader: - kv   8:                   command-r.rope.freq_base f32              = 8000000.000000
ollama[30444]: llama_model_loader: - kv   9:     command-r.attention.layer_norm_epsilon f32              = 0.000010
ollama[30444]: llama_model_loader: - kv  10:                          general.file_type u32              = 10
ollama[30444]: llama_model_loader: - kv  11:                      command-r.logit_scale f32              = 0.062500
ollama[30444]: llama_model_loader: - kv  12:                command-r.rope.scaling.type str              = none
ollama[30444]: llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
ollama[30444]: llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
ollama[30444]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
ollama[30444]: llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
ollama[30444]: llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 5
ollama[30444]: llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 255001
ollama[30444]: llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
ollama[30444]: llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
ollama[30444]: llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
ollama[30444]: llama_model_loader: - kv  22:           tokenizer.chat_template.tool_use str              = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv  23:                tokenizer.chat_template.rag str              = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv  24:                   tokenizer.chat_templates arr[str,2]       = ["rag", "tool_use"]
ollama[30444]: llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv  26:               general.quantization_version u32              = 2
ollama[30444]: llama_model_loader: - type  f32:   41 tensors
ollama[30444]: llama_model_loader: - type q2_K:  160 tensors
ollama[30444]: llama_model_loader: - type q3_K:  120 tensors
ollama[30444]: llama_model_loader: - type q6_K:    1 tensors
ollama[30444]: print_info: file format = GGUF V3 (latest)
ollama[30444]: print_info: file type   = Q2_K - Medium
ollama[30444]: print_info: file size   = 12.86 GiB (3.16 BPW)
ollama[30444]: load: missing or unrecognized pre-tokenizer type, using: 'default'
ollama[30444]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[30444]: load: printing all EOG tokens:
ollama[30444]: load:   - 0 ('<PAD>')
ollama[30444]: load:   - 255001 ('<|END_OF_TURN_TOKEN|>')
ollama[30444]: load: special tokens cache size = 1008
ollama[30444]: load: token to piece cache size = 1.8528 MB
ollama[30444]: print_info: arch             = command-r
ollama[30444]: print_info: vocab_only       = 1
ollama[30444]: print_info: model type       = ?B
ollama[30444]: print_info: model params     = 34.98 B
ollama[30444]: print_info: general.name     = aya-23-35B
ollama[30444]: print_info: vocab type       = BPE
ollama[30444]: print_info: n_vocab          = 256000
ollama[30444]: print_info: n_merges         = 253333
ollama[30444]: print_info: BOS token        = 5 '<BOS_TOKEN>'
ollama[30444]: print_info: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: PAD token        = 0 '<PAD>'
ollama[30444]: print_info: LF token         = 206 'Ċ'
ollama[30444]: print_info: FIM PAD token    = 0 '<PAD>'
ollama[30444]: print_info: EOG token        = 0 '<PAD>'
ollama[30444]: print_info: EOG token        = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: max token length = 1024
ollama[30444]: llama_model_load: vocab only - skipping tensors
ollama[30444]: time=2025-11-10T13:30:18.656+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
ollama[30444]: time=2025-11-10T13:30:18.657+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --model /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 --port 42169"
ollama[30444]: time=2025-11-10T13:30:18.657+01:00 level=INFO source=server.go:470 msg="system memory" total="62.5 GiB" free="53.4 GiB" free_swap="0 B"
ollama[30444]: time=2025-11-10T13:30:18.657+01:00 level=INFO source=server.go:522 msg=offload library=ROCm layers.requested=-1 layers.model=41 layers.offload=23 layers.split=[23] memory.available="[15.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="25.9 GiB" memory.required.partial="15.3 GiB" memory.required.kv="10.0 GiB" memory.required.allocations="[15.3 GiB]" memory.weights.total="12.9 GiB" memory.weights.repeating="11.3 GiB" memory.weights.nonrepeating="1.6 GiB" memory.graph.full="1.1 GiB" memory.graph.partial="2.1 GiB"
ollama[30444]: time=2025-11-10T13:30:18.664+01:00 level=INFO source=runner.go:910 msg="starting go runner"
ollama[30444]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ollama[30444]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ollama[30444]: ggml_cuda_init: found 1 ROCm devices:
ollama[30444]:   Device 0: AMD Radeon RX 7800 XT, gfx1101 (0x1101), VMM: no, Wave Size: 32, ID: GPU-c2c6236518c28f70
ollama[30444]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
ollama[30444]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
ollama[30444]: time=2025-11-10T13:30:23.600+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.SSE3=1 CPU.1.SSSE3=1 CPU.1.AVX=1 CPU.1.AVX_VNNI=1 CPU.1.AVX2=1 CPU.1.F16C=1 CPU.1.FMA=1 CPU.1.BMI2=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
ollama[30444]: time=2025-11-10T13:30:23.600+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:42169"
ollama[30444]: time=2025-11-10T13:30:23.611+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType:f16 NumThreads:16 GPULayers:23[ID:GPU-c2c6236518c28f70 Layers:23(17..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
ollama[30444]: time=2025-11-10T13:30:23.612+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
ollama[30444]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7800 XT) (0000:03:00.0) - 16140 MiB free
ollama[30444]: time=2025-11-10T13:30:23.612+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ollama[30444]: llama_model_loader: loaded meta data with 27 key-value pairs and 322 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 (version GGUF V3 (latest))
ollama[30444]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[30444]: llama_model_loader: - kv   0:                       general.architecture str              = command-r
ollama[30444]: llama_model_loader: - kv   1:                               general.name str              = aya-23-35B
ollama[30444]: llama_model_loader: - kv   2:                      command-r.block_count u32              = 40
ollama[30444]: llama_model_loader: - kv   3:                   command-r.context_length u32              = 8192
ollama[30444]: llama_model_loader: - kv   4:                 command-r.embedding_length u32              = 8192
ollama[30444]: llama_model_loader: - kv   5:              command-r.feed_forward_length u32              = 22528
ollama[30444]: llama_model_loader: - kv   6:             command-r.attention.head_count u32              = 64
ollama[30444]: llama_model_loader: - kv   7:          command-r.attention.head_count_kv u32              = 64
ollama[30444]: llama_model_loader: - kv   8:                   command-r.rope.freq_base f32              = 8000000.000000
ollama[30444]: llama_model_loader: - kv   9:     command-r.attention.layer_norm_epsilon f32              = 0.000010
ollama[30444]: llama_model_loader: - kv  10:                          general.file_type u32              = 10
ollama[30444]: llama_model_loader: - kv  11:                      command-r.logit_scale f32              = 0.062500
ollama[30444]: llama_model_loader: - kv  12:                command-r.rope.scaling.type str              = none
ollama[30444]: llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
ollama[30444]: llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
ollama[30444]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
ollama[30444]: llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
ollama[30444]: llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 5
ollama[30444]: llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 255001
ollama[30444]: llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
ollama[30444]: llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
ollama[30444]: llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
ollama[30444]: llama_model_loader: - kv  22:           tokenizer.chat_template.tool_use str              = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv  23:                tokenizer.chat_template.rag str              = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv  24:                   tokenizer.chat_templates arr[str,2]       = ["rag", "tool_use"]
ollama[30444]: llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv  26:               general.quantization_version u32              = 2
ollama[30444]: llama_model_loader: - type  f32:   41 tensors
ollama[30444]: llama_model_loader: - type q2_K:  160 tensors
ollama[30444]: llama_model_loader: - type q3_K:  120 tensors
ollama[30444]: llama_model_loader: - type q6_K:    1 tensors
ollama[30444]: print_info: file format = GGUF V3 (latest)
ollama[30444]: print_info: file type   = Q2_K - Medium
ollama[30444]: print_info: file size   = 12.86 GiB (3.16 BPW)
ollama[30444]: load: missing or unrecognized pre-tokenizer type, using: 'default'
ollama[30444]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[30444]: load: printing all EOG tokens:
ollama[30444]: load:   - 0 ('<PAD>')
ollama[30444]: load:   - 255001 ('<|END_OF_TURN_TOKEN|>')
ollama[30444]: load: special tokens cache size = 1008
ollama[30444]: load: token to piece cache size = 1.8528 MB
ollama[30444]: print_info: arch             = command-r
ollama[30444]: print_info: vocab_only       = 0
ollama[30444]: print_info: n_ctx_train      = 8192
ollama[30444]: print_info: n_embd           = 8192
ollama[30444]: print_info: n_layer          = 40
ollama[30444]: print_info: n_head           = 64
ollama[30444]: print_info: n_head_kv        = 64
ollama[30444]: print_info: n_rot            = 128
ollama[30444]: print_info: n_swa            = 0
ollama[30444]: print_info: is_swa_any       = 0
ollama[30444]: print_info: n_embd_head_k    = 128
ollama[30444]: print_info: n_embd_head_v    = 128
ollama[30444]: print_info: n_gqa            = 1
ollama[30444]: print_info: n_embd_k_gqa     = 8192
ollama[30444]: print_info: n_embd_v_gqa     = 8192
ollama[30444]: print_info: f_norm_eps       = 1.0e-05
ollama[30444]: print_info: f_norm_rms_eps   = 0.0e+00
ollama[30444]: print_info: f_clamp_kqv      = 0.0e+00
ollama[30444]: print_info: f_max_alibi_bias = 0.0e+00
ollama[30444]: print_info: f_logit_scale    = 6.2e-02
ollama[30444]: print_info: f_attn_scale     = 0.0e+00
ollama[30444]: print_info: n_ff             = 22528
ollama[30444]: print_info: n_expert         = 0
ollama[30444]: print_info: n_expert_used    = 0
ollama[30444]: print_info: causal attn      = 1
ollama[30444]: print_info: pooling type     = 0
ollama[30444]: print_info: rope type        = 0
ollama[30444]: print_info: rope scaling     = none
ollama[30444]: print_info: freq_base_train  = 8000000.0
ollama[30444]: print_info: freq_scale_train = 1
ollama[30444]: print_info: n_ctx_orig_yarn  = 8192
ollama[30444]: print_info: rope_finetuned   = unknown
ollama[30444]: print_info: model type       = 35B
ollama[30444]: print_info: model params     = 34.98 B
ollama[30444]: print_info: general.name     = aya-23-35B
ollama[30444]: print_info: vocab type       = BPE
ollama[30444]: print_info: n_vocab          = 256000
ollama[30444]: print_info: n_merges         = 253333
ollama[30444]: print_info: BOS token        = 5 '<BOS_TOKEN>'
ollama[30444]: print_info: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: PAD token        = 0 '<PAD>'
ollama[30444]: print_info: LF token         = 206 'Ċ'
ollama[30444]: print_info: FIM PAD token    = 0 '<PAD>'
ollama[30444]: print_info: EOG token        = 0 '<PAD>'
ollama[30444]: print_info: EOG token        = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: max token length = 1024
ollama[30444]: load_tensors: loading model tensors, this can take a while... (mmap = true)
ollama[30444]: load_tensors: offloading 23 repeating layers to GPU
ollama[30444]: load_tensors: offloaded 23/41 layers to GPU
ollama[30444]: load_tensors:        ROCm0 model buffer size =  6627.59 MiB
ollama[30444]: load_tensors:   CPU_Mapped model buffer size = 13166.91 MiB
ollama[30444]: llama_context: constructing llama_context
ollama[30444]: llama_context: n_seq_max     = 2
ollama[30444]: llama_context: n_ctx         = 8192
ollama[30444]: llama_context: n_ctx_per_seq = 4096
ollama[30444]: llama_context: n_batch       = 1024
ollama[30444]: llama_context: n_ubatch      = 512
ollama[30444]: llama_context: causal_attn   = 1
ollama[30444]: llama_context: flash_attn    = enabled
ollama[30444]: llama_context: kv_unified    = false
ollama[30444]: llama_context: freq_base     = 8000000.0
ollama[30444]: llama_context: freq_scale    = 1
ollama[30444]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
ollama[30444]: llama_context:        CPU  output buffer size =     2.02 MiB
ollama[30444]: llama_kv_cache:      ROCm0 KV buffer size =  5888.00 MiB
ollama[30444]: llama_kv_cache:        CPU KV buffer size =  4352.00 MiB
ollama[30444]: llama_kv_cache: size = 10240.00 MiB (  4096 cells,  40 layers,  2/2 seqs), K (f16): 5120.00 MiB, V (f16): 5120.00 MiB
ollama[30444]: graph_reserve: failed to allocate compute buffers
ollama[30444]: SIGSEGV: segmentation violation
ollama[30444]: PC=0x720a99783e6a m=9 sigcode=1 addr=0x720b0ee48518
ollama[30444]: signal arrived during cgo execution
ollama[30444]: goroutine 54 gp=0xc000503dc0 m=9 mp=0xc0002c9808 [syscall]:
ollama[30444]: runtime.cgocall(0x55e5d1060100, 0xc0000acbf8)
ollama[30444]:         /usr/lib/go/src/runtime/cgocall.go:167 +0x4b fp=0xc0000acbd0 sp=0xc0000acb98 pc=0x55e5d034e0cb
ollama[30444]: github.com/ollama/ollama/llama._Cfunc_llama_init_from_model(0x720940000ce0, {0x2000, 0x400, 0x200, 0x2, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...})
ollama[30444]:         _cgo_gotypes.go:753 +0x4e fp=0xc0000acbf8 sp=0xc0000acbd0 pc=0x55e5d070a46e
ollama[30444]: github.com/ollama/ollama/llama.NewContextWithModel.func1(...)
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280
ollama[30444]: github.com/ollama/ollama/llama.NewContextWithModel(0xc000610cd8, {{0x2000, 0x400, 0x200, 0x2, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...}})
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280 +0x158 fp=0xc0000acd98 sp=0xc0000acbf8 pc=0x55e5d070e238
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000463680, {0x17, 0x0, 0x1, {0xc000610a84, 0x1, 0x1}, 0xc0004021f0, 0x0}, {0x7ffc5b192b81, ...}, ...)
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:797 +0x198 fp=0xc0000acee0 sp=0xc0000acd98 pc=0x55e5d07cc598
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.(*Server).load.gowrap2()
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x175 fp=0xc0000acfe0 sp=0xc0000acee0 pc=0x55e5d07cd635
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000acfe8 sp=0xc0000acfe0 pc=0x55e5d0359681
ollama[30444]: created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 67
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x7ce
ollama[30444]: goroutine 1 gp=0xc000002380 m=nil [IO wait]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00049d790 sp=0xc00049d770 pc=0x55e5d035154e
ollama[30444]: runtime.netpollblock(0xc00049d7e0?, 0xd02e6526?, 0xe5?)
ollama[30444]:         /usr/lib/go/src/runtime/netpoll.go:575 +0xf7 fp=0xc00049d7c8 sp=0xc00049d790 pc=0x55e5d0315137
ollama[30444]: internal/poll.runtime_pollWait(0x720af22b7400, 0x72)
ollama[30444]:         /usr/lib/go/src/runtime/netpoll.go:351 +0x85 fp=0xc00049d7e8 sp=0xc00049d7c8 pc=0x55e5d0350725
ollama[30444]: internal/poll.(*pollDesc).wait(0xc000614900?, 0x900000036?, 0x0)
ollama[30444]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00049d810 sp=0xc00049d7e8 pc=0x55e5d03d91a7
ollama[30444]: internal/poll.(*pollDesc).waitRead(...)
ollama[30444]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89
ollama[30444]: internal/poll.(*FD).Accept(0xc000614900)
ollama[30444]:         /usr/lib/go/src/internal/poll/fd_unix.go:613 +0x28c fp=0xc00049d8b8 sp=0xc00049d810 pc=0x55e5d03de5cc
ollama[30444]: net.(*netFD).accept(0xc000614900)
ollama[30444]:         /usr/lib/go/src/net/fd_unix.go:161 +0x29 fp=0xc00049d970 sp=0xc00049d8b8 pc=0x55e5d0448a49
ollama[30444]: net.(*TCPListener).accept(0xc0004a0600)
ollama[30444]:         /usr/lib/go/src/net/tcpsock_posix.go:159 +0x1b fp=0xc00049d9c0 sp=0xc00049d970 pc=0x55e5d045e17b
ollama[30444]: net.(*TCPListener).Accept(0xc0004a0600)
ollama[30444]:         /usr/lib/go/src/net/tcpsock.go:380 +0x30 fp=0xc00049d9f0 sp=0xc00049d9c0 pc=0x55e5d045d010
ollama[30444]: net/http.(*onceCloseListener).Accept(0xc0004663f0?)
ollama[30444]:         <autogenerated>:1 +0x24 fp=0xc00049da08 sp=0xc00049d9f0 pc=0x55e5d067f9c4
ollama[30444]: net/http.(*Server).Serve(0xc0001a3500, {0x55e5d17bada8, 0xc0004a0600})
ollama[30444]:         /usr/lib/go/src/net/http/server.go:3463 +0x30c fp=0xc00049db38 sp=0xc00049da08 pc=0x55e5d06573ac
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.Execute({0xc000036260, 0x4, 0x4})
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:947 +0x8f4 fp=0xc00049dd08 sp=0xc00049db38 pc=0x55e5d07cdff4
ollama[30444]: github.com/ollama/ollama/runner.Execute({0xc000036250?, 0x0?, 0x0?})
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/runner.go:22 +0xd4 fp=0xc00049dd30 sp=0xc00049dd08 pc=0x55e5d086ee54
ollama[30444]: github.com/ollama/ollama/cmd.NewCLI.func2(0xc0001a3100?, {0x55e5d12e12eb?, 0x4?, 0x55e5d12e12ef?})
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/cmd/cmd.go:1841 +0x45 fp=0xc00049dd58 sp=0xc00049dd30 pc=0x55e5d0ff1085
ollama[30444]: github.com/spf13/cobra.(*Command).execute(0xc000469508, {0xc0004a03c0, 0x4, 0x4})
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x88a fp=0xc00049de78 sp=0xc00049dd58 pc=0x55e5d04c220a
ollama[30444]: github.com/spf13/cobra.(*Command).ExecuteC(0xc0004b9208)
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x398 fp=0xc00049df30 sp=0xc00049de78 pc=0x55e5d04c2a38
ollama[30444]: github.com/spf13/cobra.(*Command).Execute(...)
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
ollama[30444]: github.com/spf13/cobra.(*Command).ExecuteContext(...)
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
ollama[30444]: main.main()
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/main.go:12 +0x4d fp=0xc00049df50 sp=0xc00049df30 pc=0x55e5d0ff1b6d
ollama[30444]: runtime.main()
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:285 +0x29d fp=0xc00049dfe0 sp=0xc00049df50 pc=0x55e5d031c9dd
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00049dfe8 sp=0xc00049dfe0 pc=0x55e5d0359681
ollama[30444]: goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009afa8 sp=0xc00009af88 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.forcegchelper()
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:373 +0xb8 fp=0xc00009afe0 sp=0xc00009afa8 pc=0x55e5d031cd18
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009afe8 sp=0xc00009afe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.init.7 in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:361 +0x1a
ollama[30444]: goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
ollama[30444]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009b780 sp=0xc00009b760 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.bgsweep(0xc0000c6000)
ollama[30444]:         /usr/lib/go/src/runtime/mgcsweep.go:323 +0xdf fp=0xc00009b7c8 sp=0xc00009b780 pc=0x55e5d0306a3f
ollama[30444]: runtime.gcenable.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:212 +0x25 fp=0xc00009b7e0 sp=0xc00009b7c8 pc=0x55e5d02fa9c5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009b7e8 sp=0xc00009b7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcenable in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:212 +0x66
ollama[30444]: goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
ollama[30444]: runtime.gopark(0x10000?, 0x55e5d14a94a8?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009bf78 sp=0xc00009bf58 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.(*scavengerState).park(0x55e5d208bf20)
ollama[30444]:         /usr/lib/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00009bfa8 sp=0xc00009bf78 pc=0x55e5d03044a9
ollama[30444]: runtime.bgscavenge(0xc0000c6000)
ollama[30444]:         /usr/lib/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00009bfc8 sp=0xc00009bfa8 pc=0x55e5d0304a59
ollama[30444]: runtime.gcenable.gowrap2()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:213 +0x25 fp=0xc00009bfe0 sp=0xc00009bfc8 pc=0x55e5d02fa965
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009bfe8 sp=0xc00009bfe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcenable in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:213 +0xa5
ollama[30444]: goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
ollama[30444]: runtime.gopark(0x55e5d032bd17?, 0x55e5d02f22e5?, 0xb8?, 0x1?, 0xc000002380?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009a620 sp=0xc00009a600 pc=0x55e5d035154e
ollama[30444]: runtime.runFinalizers()
ollama[30444]:         /usr/lib/go/src/runtime/mfinal.go:210 +0x107 fp=0xc00009a7e0 sp=0xc00009a620 pc=0x55e5d02f98c7
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009a7e8 sp=0xc00009a7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.createfing in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mfinal.go:172 +0x3d
ollama[30444]: goroutine 6 gp=0xc0001808c0 m=nil [cleanup wait]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009c768 sp=0xc00009c748 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.(*cleanupQueue).dequeue(0x55e5d208c880)
ollama[30444]:         /usr/lib/go/src/runtime/mcleanup.go:439 +0xc5 fp=0xc00009c7a0 sp=0xc00009c768 pc=0x55e5d02f6aa5
ollama[30444]: runtime.runCleanups()
ollama[30444]:         /usr/lib/go/src/runtime/mcleanup.go:635 +0x45 fp=0xc00009c7e0 sp=0xc00009c7a0 pc=0x55e5d02f7165
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009c7e8 sp=0xc00009c7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.(*cleanupQueue).createGs in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mcleanup.go:589 +0xa5
ollama[30444]: goroutine 7 gp=0xc000180c40 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009cf38 sp=0xc00009cf18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009cfc8 sp=0xc00009cf38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009cfe0 sp=0xc00009cfc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009cfe8 sp=0xc00009cfe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 18 gp=0xc000482380 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000096738 sp=0xc000096718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000967c8 sp=0xc000096738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000967e0 sp=0xc0000967c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000967e8 sp=0xc0000967e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 19 gp=0xc000482540 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000096f38 sp=0xc000096f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000096fc8 sp=0xc000096f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000096fe0 sp=0xc000096fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000096fe8 sp=0xc000096fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 34 gp=0xc000502380 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000518738 sp=0xc000518718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005187c8 sp=0xc000518738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005187e0 sp=0xc0005187c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005187e8 sp=0xc0005187e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 35 gp=0xc000502540 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000518f38 sp=0xc000518f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000518fc8 sp=0xc000518f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000518fe0 sp=0xc000518fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000518fe8 sp=0xc000518fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 36 gp=0xc000502700 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000519738 sp=0xc000519718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005197c8 sp=0xc000519738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005197e0 sp=0xc0005197c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005197e8 sp=0xc0005197e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 37 gp=0xc0005028c0 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000519f38 sp=0xc000519f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000519fc8 sp=0xc000519f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000519fe0 sp=0xc000519fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000519fe8 sp=0xc000519fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 38 gp=0xc000502a80 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00051a738 sp=0xc00051a718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00051a7c8 sp=0xc00051a738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00051a7e0 sp=0xc00051a7c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00051a7e8 sp=0xc00051a7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 39 gp=0xc000502c40 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00051af38 sp=0xc00051af18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00051afc8 sp=0xc00051af38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00051afe0 sp=0xc00051afc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00051afe8 sp=0xc00051afe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 40 gp=0xc000502e00 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00051b738 sp=0xc00051b718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00051b7c8 sp=0xc00051b738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00051b7e0 sp=0xc00051b7c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00051b7e8 sp=0xc00051b7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 41 gp=0xc000502fc0 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00051bf38 sp=0xc00051bf18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00051bfc8 sp=0xc00051bf38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00051bfe0 sp=0xc00051bfc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00051bfe8 sp=0xc00051bfe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 42 gp=0xc000503180 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000514738 sp=0xc000514718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005147c8 sp=0xc000514738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005147e0 sp=0xc0005147c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005147e8 sp=0xc0005147e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 43 gp=0xc000503340 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000514f38 sp=0xc000514f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000514fc8 sp=0xc000514f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000514fe0 sp=0xc000514fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000514fe8 sp=0xc000514fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 44 gp=0xc000503500 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000515738 sp=0xc000515718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005157c8 sp=0xc000515738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005157e0 sp=0xc0005157c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005157e8 sp=0xc0005157e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 45 gp=0xc0005036c0 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000515f38 sp=0xc000515f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000515fc8 sp=0xc000515f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000515fe0 sp=0xc000515fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000515fe8 sp=0xc000515fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 50 gp=0xc000584000 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00058a738 sp=0xc00058a718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00058a7c8 sp=0xc00058a738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00058a7e0 sp=0xc00058a7c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00058a7e8 sp=0xc00058a7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 51 gp=0xc0005841c0 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00058af38 sp=0xc00058af18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00058afc8 sp=0xc00058af38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00058afe0 sp=0xc00058afc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00058afe8 sp=0xc00058afe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 8 gp=0xc000180e00 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009d738 sp=0xc00009d718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009d7c8 sp=0xc00009d738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009d7e0 sp=0xc00009d7c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009d7e8 sp=0xc00009d7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 46 gp=0xc000503880 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000516738 sp=0xc000516718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005167c8 sp=0xc000516738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005167e0 sp=0xc0005167c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005167e8 sp=0xc0005167e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 20 gp=0xc000482700 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000097738 sp=0xc000097718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000977c8 sp=0xc000097738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000977e0 sp=0xc0000977c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000977e8 sp=0xc0000977e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 9 gp=0xc000180fc0 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009df38 sp=0xc00009df18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009dfc8 sp=0xc00009df38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009dfe0 sp=0xc00009dfc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009dfe8 sp=0xc00009dfe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 10 gp=0xc000181180 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000586738 sp=0xc000586718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005867c8 sp=0xc000586738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005867e0 sp=0xc0005867c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005867e8 sp=0xc0005867e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 11 gp=0xc000181340 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000586f38 sp=0xc000586f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000586fc8 sp=0xc000586f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000586fe0 sp=0xc000586fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000586fe8 sp=0xc000586fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 12 gp=0xc000181500 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0xd83e05201e?, 0x1?, 0x99?, 0x18?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000587738 sp=0xc000587718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005877c8 sp=0xc000587738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005877e0 sp=0xc0005877c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005877e8 sp=0xc0005877e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 13 gp=0xc0001816c0 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0xd83e051b28?, 0x1?, 0xf0?, 0x4a?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000587f38 sp=0xc000587f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000587fc8 sp=0xc000587f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000587fe0 sp=0xc000587fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000587fe8 sp=0xc000587fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 14 gp=0xc000181880 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x55e5d213bc80?, 0x1?, 0x4b?, 0xac?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0000adf38 sp=0xc0000adf18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000adfc8 sp=0xc0000adf38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000adfe0 sp=0xc0000adfc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 15 gp=0xc000181a40 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0xd83e051fe2?, 0x3?, 0x57?, 0x7?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000588f38 sp=0xc000588f18 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000588fc8 sp=0xc000588f38 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000588fe0 sp=0xc000588fc8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000588fe8 sp=0xc000588fe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 16 gp=0xc000181c00 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0xd83e04e580?, 0x1?, 0xc1?, 0x4?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000589738 sp=0xc000589718 pc=0x55e5d035154e
ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0)
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005897c8 sp=0xc000589738 pc=0x55e5d02fd0eb
ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1()
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005897e0 sp=0xc0005897c8 pc=0x55e5d02fcfc5
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005897e8 sp=0xc0005897e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1
ollama[30444]:         /usr/lib/go/src/runtime/mgc.go:1373 +0x105
ollama[30444]: goroutine 66 gp=0xc000584fc0 m=nil [sync.WaitGroup.Wait]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0xa0?, 0x62?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000516e20 sp=0xc000516e00 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.semacquire1(0xc0004636a0, 0x0, 0x1, 0x0, 0x19)
ollama[30444]:         /usr/lib/go/src/runtime/sema.go:192 +0x229 fp=0xc000516e88 sp=0xc000516e20 pc=0x55e5d03307e9
ollama[30444]: sync.runtime_SemacquireWaitGroup(0x0?, 0x0?)
ollama[30444]:         /usr/lib/go/src/runtime/sema.go:114 +0x2e fp=0xc000516ec0 sp=0xc000516e88 pc=0x55e5d0352f6e
ollama[30444]: sync.(*WaitGroup).Wait(0xc000463698)
ollama[30444]:         /usr/lib/go/src/sync/waitgroup.go:206 +0x85 fp=0xc000516ee8 sp=0xc000516ec0 pc=0x55e5d03652e5
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc000463680, {0x55e5d17bd3b0, 0xc0004a6960})
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:334 +0x4b fp=0xc000516fb8 sp=0xc000516ee8 pc=0x55e5d07c916b
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.Execute.gowrap1()
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:926 +0x28 fp=0xc000516fe0 sp=0xc000516fb8 pc=0x55e5d07ce268
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000516fe8 sp=0xc000516fe0 pc=0x55e5d0359681
ollama[30444]: created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
ollama[30444]:         /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:926 +0x4c5
ollama[30444]: goroutine 67 gp=0xc000585180 m=nil [IO wait]:
ollama[30444]: runtime.gopark(0xc000049950?, 0x55e5d03dc7a5?, 0x80?, 0x49?, 0xb?)
ollama[30444]:         /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000049938 sp=0xc000049918 pc=0x55e5d035154e
ollama[30444]: runtime.netpollblock(0x55e5d03756f8?, 0xd02e6526?, 0xe5?)
ollama[30444]:         /usr/lib/go/src/runtime/netpoll.go:575 +0xf7 fp=0xc000049970 sp=0xc000049938 pc=0x55e5d0315137
ollama[30444]: internal/poll.runtime_pollWait(0x720af22b7200, 0x72)
ollama[30444]:         /usr/lib/go/src/runtime/netpoll.go:351 +0x85 fp=0xc000049990 sp=0xc000049970 pc=0x55e5d0350725
ollama[30444]: internal/poll.(*pollDesc).wait(0xc000614980?, 0xc0001f7000?, 0x0)
ollama[30444]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000499b8 sp=0xc000049990 pc=0x55e5d03d91a7
ollama[30444]: internal/poll.(*pollDesc).waitRead(...)
ollama[30444]:         /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89
ollama[30444]: internal/poll.(*FD).Read(0xc000614980, {0xc0001f7000, 0x1000, 0x1000})
ollama[30444]:         /usr/lib/go/src/internal/poll/fd_unix.go:165 +0x279 fp=0xc000049a50 sp=0xc0000499b8 pc=0x55e5d03da499
ollama[30444]: net.(*netFD).Read(0xc000614980, {0xc0001f7000?, 0x0?, 0xc000049ac8?})
ollama[30444]:         /usr/lib/go/src/net/fd_posix.go:68 +0x25 fp=0xc000049a98 sp=0xc000049a50 pc=0x55e5d0446ba5
ollama[30444]: net.(*conn).Read(0xc00052c518, {0xc0001f7000?, 0x0?, 0x0?})
ollama[30444]:         /usr/lib/go/src/net/net.go:196 +0x45 fp=0xc000049ae0 sp=0xc000049a98 pc=0x55e5d0454bc5
ollama[30444]: net/http.(*connReader).Read(0xc0004a0640, {0xc0001f7000, 0x1000, 0x1000})
ollama[30444]:         /usr/lib/go/src/net/http/server.go:812 +0x154 fp=0xc000049b38 sp=0xc000049ae0 pc=0x55e5d064c414
ollama[30444]: bufio.(*Reader).fill(0xc0001f2780)
ollama[30444]:         /usr/lib/go/src/bufio/bufio.go:113 +0x103 fp=0xc000049b70 sp=0xc000049b38 pc=0x55e5d046c0a3
ollama[30444]: bufio.(*Reader).Peek(0xc0001f2780, 0x4)
ollama[30444]:         /usr/lib/go/src/bufio/bufio.go:152 +0x53 fp=0xc000049b90 sp=0xc000049b70 pc=0x55e5d046c1d3
ollama[30444]: net/http.(*conn).serve(0xc0004663f0, {0x55e5d17bd378, 0xc00061f260})
ollama[30444]:         /usr/lib/go/src/net/http/server.go:2145 +0x7c5 fp=0xc000049fb8 sp=0xc000049b90 pc=0x55e5d0651c45
ollama[30444]: net/http.(*Server).Serve.gowrap3()
ollama[30444]:         /usr/lib/go/src/net/http/server.go:3493 +0x28 fp=0xc000049fe0 sp=0xc000049fb8 pc=0x55e5d06577a8
ollama[30444]: runtime.goexit({})
ollama[30444]:         /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000049fe8 sp=0xc000049fe0 pc=0x55e5d0359681
ollama[30444]: created by net/http.(*Server).Serve in goroutine 1
ollama[30444]:         /usr/lib/go/src/net/http/server.go:3493 +0x485
ollama[30444]: rax    0x72094118db60
ollama[30444]: rbx    0x1
ollama[30444]: rcx    0x39b974a0
ollama[30444]: rdx    0x72093a9b4ac0
ollama[30444]: rdi    0x720939fbbe20
ollama[30444]: rsi    0x0
ollama[30444]: rbp    0x720aa1fffac0
ollama[30444]: rsp    0x720aa1fffa78
ollama[30444]: r8     0x2
ollama[30444]: r9     0x720940000030
ollama[30444]: r10    0x2
ollama[30444]: r11    0x0
ollama[30444]: r12    0x720939c94000
ollama[30444]: r13    0x720939c94000
ollama[30444]: r14    0x0
ollama[30444]: r15    0x720939ee1cd0
ollama[30444]: rip    0x720a99783e6a
ollama[30444]: rflags 0x10206
ollama[30444]: cs     0x33
ollama[30444]: fs     0x0
ollama[30444]: gs     0x0
ollama[30444]: time=2025-11-10T13:30:30.771+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server not responding"
ollama[30444]: time=2025-11-10T13:30:31.022+01:00 level=INFO source=sched.go:453 msg="Load failed" model=/mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 error="llama runner process has terminated: exit status 2"
ollama[30444]: [GIN] 2025/11/10 - 13:30:31 | 500 | 18.026586451s |             ::1 | POST     "/api/chat"
<!-- gh-comment-id:3511362443 --> @binarynoise commented on GitHub (Nov 10, 2025): I temporarily downgraded my kernel to `Linux 6.11.0-arch1-1 #1 SMP PREEMPT_DYNAMIC Sun, 15 Sep 2024 18:38:36 +0000 x86_64 GNU/Linux`; the amdgpu warnings went away, but the crashes stayed.

```
systemd[1]: Started Ollama Service.
sudo[30433]: pam_unix(sudo:session): session closed for user root
ollama[30444]: time=2025-11-10T13:30:01.798+01:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://[::]:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:f16 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/Tilo4TB/var-lib-ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama[30444]: time=2025-11-10T13:30:01.823+01:00 level=INFO source=images.go:522 msg="total blobs: 147"
ollama[30444]: time=2025-11-10T13:30:01.827+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
ollama[30444]: time=2025-11-10T13:30:01.829+01:00 level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10.r17.g91ec3ddbeb2e)"
ollama[30444]: time=2025-11-10T13:30:01.829+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
ollama[30444]: time=2025-11-10T13:30:01.832+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 32935"
ollama[30444]: time=2025-11-10T13:30:06.893+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35805"
ollama[30444]: time=2025-11-10T13:30:13.039+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-c2c6236518c28f70 filter_id="" library=ROCm compute=gfx1101 name=ROCm0 description="AMD Radeon RX 7800 XT" libdirs=ollama driver=60443.48 pci_id=0000:03:00.0 type=discrete total="16.0 GiB" available="15.8 GiB"
ollama[30444]: time=2025-11-10T13:30:13.039+01:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
ollama[30444]: time=2025-11-10T13:30:13.380+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44093"
ollama[30444]: llama_model_loader: loaded meta data with 27 key-value pairs and 322 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 (version GGUF V3 (latest))
ollama[30444]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[30444]: llama_model_loader: - kv 0: general.architecture str = command-r
ollama[30444]: llama_model_loader: - kv 1: general.name str = aya-23-35B
ollama[30444]: llama_model_loader: - kv 2: command-r.block_count u32 = 40
ollama[30444]: llama_model_loader: - kv 3: command-r.context_length u32 = 8192
ollama[30444]: llama_model_loader: - kv 4: command-r.embedding_length u32 = 8192
ollama[30444]: llama_model_loader: - kv 5: command-r.feed_forward_length u32 = 22528
ollama[30444]: llama_model_loader: - kv 6: command-r.attention.head_count u32 = 64
ollama[30444]: llama_model_loader: - kv 7: command-r.attention.head_count_kv u32 = 64
ollama[30444]: llama_model_loader: - kv 8: command-r.rope.freq_base f32 = 8000000.000000
ollama[30444]: llama_model_loader: - kv 9: command-r.attention.layer_norm_epsilon f32 = 0.000010
ollama[30444]: llama_model_loader: - kv 10: general.file_type u32 = 10
ollama[30444]: llama_model_loader: - kv 11: command-r.logit_scale f32 = 0.062500
ollama[30444]: llama_model_loader: - kv 12: command-r.rope.scaling.type str = none
ollama[30444]: llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
ollama[30444]: llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,256000] = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
ollama[30444]: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
ollama[30444]: llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,253333] = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
ollama[30444]: llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 5
ollama[30444]: llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 255001
ollama[30444]: llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
ollama[30444]: llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
ollama[30444]: llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
ollama[30444]: llama_model_loader: - kv 22: tokenizer.chat_template.tool_use str = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv 23: tokenizer.chat_template.rag str = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv 24: tokenizer.chat_templates arr[str,2] = ["rag", "tool_use"]
ollama[30444]: llama_model_loader: - kv 25: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv 26: general.quantization_version u32 = 2
ollama[30444]: llama_model_loader: - type f32: 41 tensors
ollama[30444]: llama_model_loader: - type q2_K: 160 tensors
ollama[30444]: llama_model_loader: - type q3_K: 120 tensors
ollama[30444]: llama_model_loader: - type q6_K: 1 tensors
ollama[30444]: print_info: file format = GGUF V3 (latest)
ollama[30444]: print_info: file type = Q2_K - Medium
ollama[30444]: print_info: file size = 12.86 GiB (3.16 BPW)
ollama[30444]: load: missing or unrecognized pre-tokenizer type, using: 'default'
ollama[30444]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[30444]: load: printing all EOG tokens:
ollama[30444]: load: - 0 ('<PAD>')
ollama[30444]: load: - 255001 ('<|END_OF_TURN_TOKEN|>')
ollama[30444]: load: special tokens cache size = 1008
ollama[30444]: load: token to piece cache size = 1.8528 MB
ollama[30444]: print_info: arch = command-r
ollama[30444]: print_info: vocab_only = 1
ollama[30444]: print_info: model type = ?B
ollama[30444]: print_info: model params = 34.98 B
ollama[30444]: print_info: general.name = aya-23-35B
ollama[30444]: print_info: vocab type = BPE
ollama[30444]: print_info: n_vocab = 256000
ollama[30444]: print_info: n_merges = 253333
ollama[30444]: print_info: BOS token = 5 '<BOS_TOKEN>'
ollama[30444]: print_info: EOS token = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: PAD token = 0 '<PAD>'
ollama[30444]: print_info: LF token = 206 'Ċ'
ollama[30444]: print_info: FIM PAD token = 0 '<PAD>'
ollama[30444]: print_info: EOG token = 0 '<PAD>'
ollama[30444]: print_info: EOG token = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: max token length = 1024
ollama[30444]: llama_model_load: vocab only - skipping tensors
ollama[30444]: time=2025-11-10T13:30:18.656+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
ollama[30444]: time=2025-11-10T13:30:18.657+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --model /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 --port 42169"
ollama[30444]: time=2025-11-10T13:30:18.657+01:00 level=INFO source=server.go:470 msg="system memory" total="62.5 GiB" free="53.4 GiB" free_swap="0 B"
ollama[30444]: time=2025-11-10T13:30:18.657+01:00 level=INFO source=server.go:522 msg=offload library=ROCm layers.requested=-1 layers.model=41 layers.offload=23 layers.split=[23] memory.available="[15.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="25.9 GiB" memory.required.partial="15.3 GiB" memory.required.kv="10.0 GiB" memory.required.allocations="[15.3 GiB]" memory.weights.total="12.9 GiB" memory.weights.repeating="11.3 GiB" memory.weights.nonrepeating="1.6 GiB" memory.graph.full="1.1 GiB" memory.graph.partial="2.1 GiB"
ollama[30444]: time=2025-11-10T13:30:18.664+01:00 level=INFO source=runner.go:910 msg="starting go runner"
ollama[30444]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ollama[30444]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ollama[30444]: ggml_cuda_init: found 1 ROCm devices:
ollama[30444]: Device 0: AMD Radeon RX 7800 XT, gfx1101 (0x1101), VMM: no, Wave Size: 32, ID: GPU-c2c6236518c28f70
ollama[30444]: load_backend: loaded ROCm backend from /usr/lib/ollama/libggml-hip.so
ollama[30444]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
ollama[30444]: time=2025-11-10T13:30:23.600+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.SSE3=1 CPU.1.SSSE3=1 CPU.1.AVX=1 CPU.1.AVX_VNNI=1 CPU.1.AVX2=1 CPU.1.F16C=1 CPU.1.FMA=1 CPU.1.BMI2=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
ollama[30444]: time=2025-11-10T13:30:23.600+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:42169"
ollama[30444]: time=2025-11-10T13:30:23.611+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType:f16 NumThreads:16 GPULayers:23[ID:GPU-c2c6236518c28f70 Layers:23(17..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
ollama[30444]: time=2025-11-10T13:30:23.612+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
ollama[30444]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7800 XT) (0000:03:00.0) - 16140 MiB free
ollama[30444]: time=2025-11-10T13:30:23.612+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ollama[30444]: llama_model_loader: loaded meta data with 27 key-value pairs and 322 tensors from /mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 (version GGUF V3 (latest))
ollama[30444]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama[30444]: llama_model_loader: - kv 0: general.architecture str = command-r
ollama[30444]: llama_model_loader: - kv 1: general.name str = aya-23-35B
ollama[30444]: llama_model_loader: - kv 2: command-r.block_count u32 = 40
ollama[30444]: llama_model_loader: - kv 3: command-r.context_length u32 = 8192
ollama[30444]: llama_model_loader: - kv 4: command-r.embedding_length u32 = 8192
ollama[30444]: llama_model_loader: - kv 5: command-r.feed_forward_length u32 = 22528
ollama[30444]: llama_model_loader: - kv 6: command-r.attention.head_count u32 = 64
ollama[30444]: llama_model_loader: - kv 7: command-r.attention.head_count_kv u32 = 64
ollama[30444]: llama_model_loader: - kv 8: command-r.rope.freq_base f32 = 8000000.000000
ollama[30444]: llama_model_loader: - kv 9: command-r.attention.layer_norm_epsilon f32 = 0.000010
ollama[30444]: llama_model_loader: - kv 10: general.file_type u32 = 10
ollama[30444]: llama_model_loader: - kv 11: command-r.logit_scale f32 = 0.062500
ollama[30444]: llama_model_loader: - kv 12: command-r.rope.scaling.type str = none
ollama[30444]: llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
ollama[30444]: llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,256000] = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
ollama[30444]: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
ollama[30444]: llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,253333] = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
ollama[30444]: llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 5
ollama[30444]: llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 255001
ollama[30444]: llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
ollama[30444]: llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
ollama[30444]: llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
ollama[30444]: llama_model_loader: - kv 22: tokenizer.chat_template.tool_use str = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv 23: tokenizer.chat_template.rag str = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv 24: tokenizer.chat_templates arr[str,2] = ["rag", "tool_use"]
ollama[30444]: llama_model_loader: - kv 25: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol...
ollama[30444]: llama_model_loader: - kv 26: general.quantization_version u32 = 2
ollama[30444]: llama_model_loader: - type f32: 41 tensors
ollama[30444]: llama_model_loader: - type q2_K: 160 tensors
ollama[30444]: llama_model_loader: - type q3_K: 120 tensors
ollama[30444]: llama_model_loader: - type q6_K: 1 tensors
ollama[30444]: print_info: file format = GGUF V3 (latest)
ollama[30444]: print_info: file type = Q2_K - Medium
ollama[30444]: print_info: file size = 12.86 GiB (3.16 BPW)
ollama[30444]: load: missing or unrecognized pre-tokenizer type, using: 'default'
ollama[30444]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
ollama[30444]: load: printing all EOG tokens:
ollama[30444]: load: - 0 ('<PAD>')
ollama[30444]: load: - 255001 ('<|END_OF_TURN_TOKEN|>')
ollama[30444]: load: special tokens cache size = 1008
ollama[30444]: load: token to piece cache size = 1.8528 MB
ollama[30444]: print_info: arch = command-r
ollama[30444]: print_info: vocab_only = 0
ollama[30444]: print_info: n_ctx_train = 8192
ollama[30444]: print_info: n_embd = 8192
ollama[30444]: print_info: n_layer = 40
ollama[30444]: print_info: n_head = 64
ollama[30444]: print_info: n_head_kv = 64
ollama[30444]: print_info: n_rot = 128
ollama[30444]: print_info: n_swa = 0
ollama[30444]: print_info: is_swa_any = 0
ollama[30444]: print_info: n_embd_head_k = 128
ollama[30444]: print_info: n_embd_head_v = 128
ollama[30444]: print_info: n_gqa = 1
ollama[30444]: print_info: n_embd_k_gqa = 8192
ollama[30444]: print_info: n_embd_v_gqa = 8192
ollama[30444]: print_info: f_norm_eps = 1.0e-05
ollama[30444]: print_info: f_norm_rms_eps = 0.0e+00
ollama[30444]: print_info: f_clamp_kqv = 0.0e+00
ollama[30444]: print_info: f_max_alibi_bias = 0.0e+00
ollama[30444]: print_info: f_logit_scale = 6.2e-02
ollama[30444]: print_info: f_attn_scale = 0.0e+00
ollama[30444]: print_info: n_ff = 22528
ollama[30444]: print_info: n_expert = 0
ollama[30444]: print_info: n_expert_used = 0
ollama[30444]: print_info: causal attn = 1
ollama[30444]: print_info: pooling type = 0
ollama[30444]: print_info: rope type = 0
ollama[30444]: print_info: rope scaling = none
ollama[30444]: print_info: freq_base_train = 8000000.0
ollama[30444]: print_info: freq_scale_train = 1
ollama[30444]: print_info: n_ctx_orig_yarn = 8192
ollama[30444]: print_info: rope_finetuned = unknown
ollama[30444]: print_info: model type = 35B
ollama[30444]: print_info: model params = 34.98 B
ollama[30444]: print_info: general.name = aya-23-35B
ollama[30444]: print_info: vocab type = BPE
ollama[30444]: print_info: n_vocab = 256000
ollama[30444]: print_info: n_merges = 253333
ollama[30444]: print_info: BOS token = 5 '<BOS_TOKEN>'
ollama[30444]: print_info: EOS token = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: PAD token = 0 '<PAD>'
ollama[30444]: print_info: LF token = 206 'Ċ'
ollama[30444]: print_info: FIM PAD token = 0 '<PAD>'
ollama[30444]: print_info: EOG token = 0 '<PAD>'
ollama[30444]: print_info: EOG token = 255001 '<|END_OF_TURN_TOKEN|>'
ollama[30444]: print_info: max token length = 1024
ollama[30444]: load_tensors: loading model tensors, this can take a while... (mmap = true)
ollama[30444]: load_tensors: offloading 23 repeating layers to GPU
ollama[30444]: load_tensors: offloaded 23/41 layers to GPU
ollama[30444]: load_tensors: ROCm0 model buffer size = 6627.59 MiB
ollama[30444]: load_tensors: CPU_Mapped model buffer size = 13166.91 MiB
ollama[30444]: llama_context: constructing llama_context
ollama[30444]: llama_context: n_seq_max = 2
ollama[30444]: llama_context: n_ctx = 8192
ollama[30444]: llama_context: n_ctx_per_seq = 4096
ollama[30444]: llama_context: n_batch = 1024
ollama[30444]: llama_context: n_ubatch = 512
ollama[30444]: llama_context: causal_attn = 1
ollama[30444]: llama_context: flash_attn = enabled
ollama[30444]: llama_context: kv_unified = false
ollama[30444]: llama_context: freq_base = 8000000.0
ollama[30444]: llama_context: freq_scale = 1
ollama[30444]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
ollama[30444]: llama_context: CPU output buffer size = 2.02 MiB
ollama[30444]: llama_kv_cache: ROCm0 KV buffer size = 5888.00 MiB
ollama[30444]: llama_kv_cache: CPU KV buffer size = 4352.00 MiB
ollama[30444]: llama_kv_cache: size = 10240.00 MiB ( 4096 cells, 40 layers, 2/2 seqs), K (f16): 5120.00 MiB, V (f16): 5120.00 MiB
ollama[30444]: graph_reserve: failed to allocate compute buffers
ollama[30444]: SIGSEGV: segmentation violation
ollama[30444]: PC=0x720a99783e6a m=9 sigcode=1 addr=0x720b0ee48518
ollama[30444]: signal arrived during cgo execution
ollama[30444]: goroutine 54 gp=0xc000503dc0 m=9 mp=0xc0002c9808 [syscall]:
ollama[30444]: runtime.cgocall(0x55e5d1060100, 0xc0000acbf8)
ollama[30444]: /usr/lib/go/src/runtime/cgocall.go:167 +0x4b fp=0xc0000acbd0 sp=0xc0000acb98 pc=0x55e5d034e0cb
ollama[30444]: github.com/ollama/ollama/llama._Cfunc_llama_init_from_model(0x720940000ce0, {0x2000, 0x400, 0x200, 0x2, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...})
ollama[30444]: _cgo_gotypes.go:753 +0x4e fp=0xc0000acbf8 sp=0xc0000acbd0 pc=0x55e5d070a46e
ollama[30444]: github.com/ollama/ollama/llama.NewContextWithModel.func1(...)
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280
ollama[30444]: github.com/ollama/ollama/llama.NewContextWithModel(0xc000610cd8, {{0x2000, 0x400, 0x200, 0x2, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...}})
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/llama/llama.go:280 +0x158 fp=0xc0000acd98 sp=0xc0000acbf8 pc=0x55e5d070e238
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000463680, {0x17, 0x0, 0x1, {0xc000610a84, 0x1, 0x1}, 0xc0004021f0, 0x0}, {0x7ffc5b192b81, ...}, ...)
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:797 +0x198 fp=0xc0000acee0 sp=0xc0000acd98 pc=0x55e5d07cc598
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.(*Server).load.gowrap2()
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x175 fp=0xc0000acfe0 sp=0xc0000acee0 pc=0x55e5d07cd635
ollama[30444]: runtime.goexit({})
ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000acfe8 sp=0xc0000acfe0 pc=0x55e5d0359681
ollama[30444]: created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 67
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:879 +0x7ce
ollama[30444]: goroutine 1 gp=0xc000002380 m=nil [IO wait]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00049d790 sp=0xc00049d770 pc=0x55e5d035154e
ollama[30444]: runtime.netpollblock(0xc00049d7e0?, 0xd02e6526?, 0xe5?)
ollama[30444]: /usr/lib/go/src/runtime/netpoll.go:575 +0xf7 fp=0xc00049d7c8 sp=0xc00049d790 pc=0x55e5d0315137
ollama[30444]: internal/poll.runtime_pollWait(0x720af22b7400, 0x72)
ollama[30444]: /usr/lib/go/src/runtime/netpoll.go:351 +0x85 fp=0xc00049d7e8 sp=0xc00049d7c8 pc=0x55e5d0350725
ollama[30444]: internal/poll.(*pollDesc).wait(0xc000614900?, 0x900000036?, 0x0)
ollama[30444]: /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00049d810 sp=0xc00049d7e8 pc=0x55e5d03d91a7
ollama[30444]: internal/poll.(*pollDesc).waitRead(...)
ollama[30444]: /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89
ollama[30444]: internal/poll.(*FD).Accept(0xc000614900)
ollama[30444]: /usr/lib/go/src/internal/poll/fd_unix.go:613 +0x28c fp=0xc00049d8b8 sp=0xc00049d810 pc=0x55e5d03de5cc
ollama[30444]: net.(*netFD).accept(0xc000614900)
ollama[30444]: /usr/lib/go/src/net/fd_unix.go:161 +0x29 fp=0xc00049d970 sp=0xc00049d8b8 pc=0x55e5d0448a49
ollama[30444]: net.(*TCPListener).accept(0xc0004a0600)
ollama[30444]: /usr/lib/go/src/net/tcpsock_posix.go:159 +0x1b fp=0xc00049d9c0 sp=0xc00049d970 pc=0x55e5d045e17b
ollama[30444]: net.(*TCPListener).Accept(0xc0004a0600)
ollama[30444]: /usr/lib/go/src/net/tcpsock.go:380 +0x30 fp=0xc00049d9f0 sp=0xc00049d9c0 pc=0x55e5d045d010
ollama[30444]: net/http.(*onceCloseListener).Accept(0xc0004663f0?)
ollama[30444]: <autogenerated>:1 +0x24 fp=0xc00049da08 sp=0xc00049d9f0 pc=0x55e5d067f9c4
ollama[30444]: net/http.(*Server).Serve(0xc0001a3500, {0x55e5d17bada8, 0xc0004a0600})
ollama[30444]: /usr/lib/go/src/net/http/server.go:3463 +0x30c fp=0xc00049db38 sp=0xc00049da08 pc=0x55e5d06573ac
ollama[30444]: github.com/ollama/ollama/runner/llamarunner.Execute({0xc000036260, 0x4, 0x4})
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:947 +0x8f4 fp=0xc00049dd08 sp=0xc00049db38 pc=0x55e5d07cdff4
ollama[30444]: github.com/ollama/ollama/runner.Execute({0xc000036250?, 0x0?, 0x0?})
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/runner.go:22 +0xd4 fp=0xc00049dd30 sp=0xc00049dd08 pc=0x55e5d086ee54
ollama[30444]: github.com/ollama/ollama/cmd.NewCLI.func2(0xc0001a3100?, {0x55e5d12e12eb?, 0x4?, 0x55e5d12e12ef?})
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/cmd/cmd.go:1841 +0x45 fp=0xc00049dd58 sp=0xc00049dd30 pc=0x55e5d0ff1085
ollama[30444]: github.com/spf13/cobra.(*Command).execute(0xc000469508, {0xc0004a03c0, 0x4, 0x4})
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x88a fp=0xc00049de78 sp=0xc00049dd58 pc=0x55e5d04c220a
ollama[30444]: github.com/spf13/cobra.(*Command).ExecuteC(0xc0004b9208)
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x398 fp=0xc00049df30 sp=0xc00049de78 pc=0x55e5d04c2a38
ollama[30444]: github.com/spf13/cobra.(*Command).Execute(...)
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
ollama[30444]: github.com/spf13/cobra.(*Command).ExecuteContext(...)
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
ollama[30444]: main.main()
ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/main.go:12 +0x4d fp=0xc00049df50 sp=0xc00049df30 pc=0x55e5d0ff1b6d
ollama[30444]: runtime.main()
ollama[30444]: /usr/lib/go/src/runtime/proc.go:285 +0x29d fp=0xc00049dfe0 sp=0xc00049df50 pc=0x55e5d031c9dd
ollama[30444]: runtime.goexit({})
ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00049dfe8 sp=0xc00049dfe0 pc=0x55e5d0359681
ollama[30444]: goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009afa8 sp=0xc00009af88 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.forcegchelper()
ollama[30444]: /usr/lib/go/src/runtime/proc.go:373 +0xb8 fp=0xc00009afe0 sp=0xc00009afa8 pc=0x55e5d031cd18
ollama[30444]: runtime.goexit({})
ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009afe8 sp=0xc00009afe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.init.7 in goroutine 1
ollama[30444]: /usr/lib/go/src/runtime/proc.go:361 +0x1a
ollama[30444]: goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
ollama[30444]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009b780 sp=0xc00009b760 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.bgsweep(0xc0000c6000)
ollama[30444]: /usr/lib/go/src/runtime/mgcsweep.go:323 +0xdf fp=0xc00009b7c8 sp=0xc00009b780 pc=0x55e5d0306a3f
ollama[30444]: runtime.gcenable.gowrap1()
ollama[30444]: /usr/lib/go/src/runtime/mgc.go:212 +0x25 fp=0xc00009b7e0 sp=0xc00009b7c8 pc=0x55e5d02fa9c5
ollama[30444]: runtime.goexit({})
ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009b7e8 sp=0xc00009b7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcenable in goroutine 1
ollama[30444]: /usr/lib/go/src/runtime/mgc.go:212 +0x66
ollama[30444]: goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
ollama[30444]: runtime.gopark(0x10000?, 0x55e5d14a94a8?, 0x0?, 0x0?, 0x0?)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009bf78 sp=0xc00009bf58 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.(*scavengerState).park(0x55e5d208bf20)
ollama[30444]: /usr/lib/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00009bfa8 sp=0xc00009bf78 pc=0x55e5d03044a9
ollama[30444]: runtime.bgscavenge(0xc0000c6000)
ollama[30444]: /usr/lib/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00009bfc8 sp=0xc00009bfa8 pc=0x55e5d0304a59
ollama[30444]: runtime.gcenable.gowrap2()
ollama[30444]: /usr/lib/go/src/runtime/mgc.go:213 +0x25 fp=0xc00009bfe0 sp=0xc00009bfc8 pc=0x55e5d02fa965
ollama[30444]: runtime.goexit({})
ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009bfe8 sp=0xc00009bfe0 pc=0x55e5d0359681
ollama[30444]: created by runtime.gcenable in goroutine 1
ollama[30444]: /usr/lib/go/src/runtime/mgc.go:213 +0xa5
ollama[30444]: goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
ollama[30444]: runtime.gopark(0x55e5d032bd17?, 0x55e5d02f22e5?, 0xb8?, 0x1?, 0xc000002380?)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009a620 sp=0xc00009a600 pc=0x55e5d035154e
ollama[30444]: runtime.runFinalizers()
ollama[30444]: /usr/lib/go/src/runtime/mfinal.go:210 +0x107 fp=0xc00009a7e0 sp=0xc00009a620 pc=0x55e5d02f98c7
ollama[30444]: runtime.goexit({})
ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009a7e8 sp=0xc00009a7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.createfing in goroutine 1
ollama[30444]: /usr/lib/go/src/runtime/mfinal.go:172 +0x3d
ollama[30444]: goroutine 6 gp=0xc0001808c0 m=nil [cleanup wait]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009c768 sp=0xc00009c748 pc=0x55e5d035154e
ollama[30444]: runtime.goparkunlock(...)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:466
ollama[30444]: runtime.(*cleanupQueue).dequeue(0x55e5d208c880)
ollama[30444]: /usr/lib/go/src/runtime/mcleanup.go:439 +0xc5 fp=0xc00009c7a0 sp=0xc00009c768 pc=0x55e5d02f6aa5
ollama[30444]: runtime.runCleanups()
ollama[30444]: /usr/lib/go/src/runtime/mcleanup.go:635 +0x45 fp=0xc00009c7e0 sp=0xc00009c7a0 pc=0x55e5d02f7165
ollama[30444]: runtime.goexit({})
ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009c7e8 sp=0xc00009c7e0 pc=0x55e5d0359681
ollama[30444]: created by runtime.(*cleanupQueue).createGs in goroutine 1
ollama[30444]: /usr/lib/go/src/runtime/mcleanup.go:589 +0xa5
ollama[30444]: goroutine 7 gp=0xc000180c40 m=nil [GC worker (idle)]:
ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc00009df38 sp=0xc00009df18 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc00009dfc8 sp=0xc00009df38 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc00009dfe0 sp=0xc00009dfc8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc00009dfe8 sp=0xc00009dfe0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 10 gp=0xc000181180 m=nil [GC worker (idle)]: ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000586738 sp=0xc000586718 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005867c8 sp=0xc000586738 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005867e0 sp=0xc0005867c8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005867e8 sp=0xc0005867e0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 11 gp=0xc000181340 m=nil [GC worker (idle)]: ollama[30444]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000586f38 sp=0xc000586f18 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000586fc8 sp=0xc000586f38 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000586fe0 sp=0xc000586fc8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000586fe8 sp=0xc000586fe0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 12 gp=0xc000181500 m=nil [GC worker (idle)]: ollama[30444]: runtime.gopark(0xd83e05201e?, 0x1?, 0x99?, 0x18?, 0x0?) ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000587738 sp=0xc000587718 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005877c8 sp=0xc000587738 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005877e0 sp=0xc0005877c8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005877e8 sp=0xc0005877e0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 13 gp=0xc0001816c0 m=nil [GC worker (idle)]: ollama[30444]: runtime.gopark(0xd83e051b28?, 0x1?, 0xf0?, 0x4a?, 0x0?) 
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000587f38 sp=0xc000587f18 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000587fc8 sp=0xc000587f38 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000587fe0 sp=0xc000587fc8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000587fe8 sp=0xc000587fe0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 14 gp=0xc000181880 m=nil [GC worker (idle)]: ollama[30444]: runtime.gopark(0x55e5d213bc80?, 0x1?, 0x4b?, 0xac?, 0x0?) ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc0000adf38 sp=0xc0000adf18 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0000adfc8 sp=0xc0000adf38 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0000adfe0 sp=0xc0000adfc8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 15 gp=0xc000181a40 m=nil [GC worker (idle)]: ollama[30444]: runtime.gopark(0xd83e051fe2?, 0x3?, 0x57?, 0x7?, 0x0?) 
ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000588f38 sp=0xc000588f18 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc000588fc8 sp=0xc000588f38 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc000588fe0 sp=0xc000588fc8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000588fe8 sp=0xc000588fe0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 16 gp=0xc000181c00 m=nil [GC worker (idle)]: ollama[30444]: runtime.gopark(0xd83e04e580?, 0x1?, 0xc1?, 0x4?, 0x0?) ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000589738 sp=0xc000589718 pc=0x55e5d035154e ollama[30444]: runtime.gcBgMarkWorker(0xc0000d36c0) ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1463 +0xeb fp=0xc0005897c8 sp=0xc000589738 pc=0x55e5d02fd0eb ollama[30444]: runtime.gcBgMarkStartWorkers.gowrap1() ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x25 fp=0xc0005897e0 sp=0xc0005897c8 pc=0x55e5d02fcfc5 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc0005897e8 sp=0xc0005897e0 pc=0x55e5d0359681 ollama[30444]: created by runtime.gcBgMarkStartWorkers in goroutine 1 ollama[30444]: /usr/lib/go/src/runtime/mgc.go:1373 +0x105 ollama[30444]: goroutine 66 gp=0xc000584fc0 m=nil [sync.WaitGroup.Wait]: ollama[30444]: runtime.gopark(0x0?, 0x0?, 0xa0?, 0x62?, 0x0?) ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000516e20 sp=0xc000516e00 pc=0x55e5d035154e ollama[30444]: runtime.goparkunlock(...) 
ollama[30444]: /usr/lib/go/src/runtime/proc.go:466 ollama[30444]: runtime.semacquire1(0xc0004636a0, 0x0, 0x1, 0x0, 0x19) ollama[30444]: /usr/lib/go/src/runtime/sema.go:192 +0x229 fp=0xc000516e88 sp=0xc000516e20 pc=0x55e5d03307e9 ollama[30444]: sync.runtime_SemacquireWaitGroup(0x0?, 0x0?) ollama[30444]: /usr/lib/go/src/runtime/sema.go:114 +0x2e fp=0xc000516ec0 sp=0xc000516e88 pc=0x55e5d0352f6e ollama[30444]: sync.(*WaitGroup).Wait(0xc000463698) ollama[30444]: /usr/lib/go/src/sync/waitgroup.go:206 +0x85 fp=0xc000516ee8 sp=0xc000516ec0 pc=0x55e5d03652e5 ollama[30444]: github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc000463680, {0x55e5d17bd3b0, 0xc0004a6960}) ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:334 +0x4b fp=0xc000516fb8 sp=0xc000516ee8 pc=0x55e5d07c916b ollama[30444]: github.com/ollama/ollama/runner/llamarunner.Execute.gowrap1() ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:926 +0x28 fp=0xc000516fe0 sp=0xc000516fb8 pc=0x55e5d07ce268 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000516fe8 sp=0xc000516fe0 pc=0x55e5d0359681 ollama[30444]: created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 ollama[30444]: /tmp/makepkg/ollama-rocm-git/src/ollama/runner/llamarunner/runner.go:926 +0x4c5 ollama[30444]: goroutine 67 gp=0xc000585180 m=nil [IO wait]: ollama[30444]: runtime.gopark(0xc000049950?, 0x55e5d03dc7a5?, 0x80?, 0x49?, 0xb?) ollama[30444]: /usr/lib/go/src/runtime/proc.go:460 +0xce fp=0xc000049938 sp=0xc000049918 pc=0x55e5d035154e ollama[30444]: runtime.netpollblock(0x55e5d03756f8?, 0xd02e6526?, 0xe5?) 
ollama[30444]: /usr/lib/go/src/runtime/netpoll.go:575 +0xf7 fp=0xc000049970 sp=0xc000049938 pc=0x55e5d0315137 ollama[30444]: internal/poll.runtime_pollWait(0x720af22b7200, 0x72) ollama[30444]: /usr/lib/go/src/runtime/netpoll.go:351 +0x85 fp=0xc000049990 sp=0xc000049970 pc=0x55e5d0350725 ollama[30444]: internal/poll.(*pollDesc).wait(0xc000614980?, 0xc0001f7000?, 0x0) ollama[30444]: /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000499b8 sp=0xc000049990 pc=0x55e5d03d91a7 ollama[30444]: internal/poll.(*pollDesc).waitRead(...) ollama[30444]: /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89 ollama[30444]: internal/poll.(*FD).Read(0xc000614980, {0xc0001f7000, 0x1000, 0x1000}) ollama[30444]: /usr/lib/go/src/internal/poll/fd_unix.go:165 +0x279 fp=0xc000049a50 sp=0xc0000499b8 pc=0x55e5d03da499 ollama[30444]: net.(*netFD).Read(0xc000614980, {0xc0001f7000?, 0x0?, 0xc000049ac8?}) ollama[30444]: /usr/lib/go/src/net/fd_posix.go:68 +0x25 fp=0xc000049a98 sp=0xc000049a50 pc=0x55e5d0446ba5 ollama[30444]: net.(*conn).Read(0xc00052c518, {0xc0001f7000?, 0x0?, 0x0?}) ollama[30444]: /usr/lib/go/src/net/net.go:196 +0x45 fp=0xc000049ae0 sp=0xc000049a98 pc=0x55e5d0454bc5 ollama[30444]: net/http.(*connReader).Read(0xc0004a0640, {0xc0001f7000, 0x1000, 0x1000}) ollama[30444]: /usr/lib/go/src/net/http/server.go:812 +0x154 fp=0xc000049b38 sp=0xc000049ae0 pc=0x55e5d064c414 ollama[30444]: bufio.(*Reader).fill(0xc0001f2780) ollama[30444]: /usr/lib/go/src/bufio/bufio.go:113 +0x103 fp=0xc000049b70 sp=0xc000049b38 pc=0x55e5d046c0a3 ollama[30444]: bufio.(*Reader).Peek(0xc0001f2780, 0x4) ollama[30444]: /usr/lib/go/src/bufio/bufio.go:152 +0x53 fp=0xc000049b90 sp=0xc000049b70 pc=0x55e5d046c1d3 ollama[30444]: net/http.(*conn).serve(0xc0004663f0, {0x55e5d17bd378, 0xc00061f260}) ollama[30444]: /usr/lib/go/src/net/http/server.go:2145 +0x7c5 fp=0xc000049fb8 sp=0xc000049b90 pc=0x55e5d0651c45 ollama[30444]: net/http.(*Server).Serve.gowrap3() ollama[30444]: 
/usr/lib/go/src/net/http/server.go:3493 +0x28 fp=0xc000049fe0 sp=0xc000049fb8 pc=0x55e5d06577a8 ollama[30444]: runtime.goexit({}) ollama[30444]: /usr/lib/go/src/runtime/asm_amd64.s:1693 +0x1 fp=0xc000049fe8 sp=0xc000049fe0 pc=0x55e5d0359681 ollama[30444]: created by net/http.(*Server).Serve in goroutine 1 ollama[30444]: /usr/lib/go/src/net/http/server.go:3493 +0x485 ollama[30444]: rax 0x72094118db60 ollama[30444]: rbx 0x1 ollama[30444]: rcx 0x39b974a0 ollama[30444]: rdx 0x72093a9b4ac0 ollama[30444]: rdi 0x720939fbbe20 ollama[30444]: rsi 0x0 ollama[30444]: rbp 0x720aa1fffac0 ollama[30444]: rsp 0x720aa1fffa78 ollama[30444]: r8 0x2 ollama[30444]: r9 0x720940000030 ollama[30444]: r10 0x2 ollama[30444]: r11 0x0 ollama[30444]: r12 0x720939c94000 ollama[30444]: r13 0x720939c94000 ollama[30444]: r14 0x0 ollama[30444]: r15 0x720939ee1cd0 ollama[30444]: rip 0x720a99783e6a ollama[30444]: rflags 0x10206 ollama[30444]: cs 0x33 ollama[30444]: fs 0x0 ollama[30444]: gs 0x0 ollama[30444]: time=2025-11-10T13:30:30.771+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server not responding" ollama[30444]: time=2025-11-10T13:30:31.022+01:00 level=INFO source=sched.go:453 msg="Load failed" model=/mnt/Tilo4TB/var-lib-ollama/blobs/sha256-f729aa7a690b5128571dd9660124f888f1380496e1d06a71814319a1a03a2414 error="llama runner process has terminated: exit status 2" ollama[30444]: [GIN] 2025/11/10 - 13:30:31 | 500 | 18.026586451s | ::1 | POST "/api/chat" ```

@dhiltgen commented on GitHub (Nov 12, 2025):

I have a [PR](https://github.com/ollama/ollama/pull/12871) that may improve this. Can you check on your system whether the following reports more accurate VRAM data than what Ollama currently reports?

```
cat /sys/class/drm/card*/device/mem_info_vram_used
```
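For readability, the raw byte counts from those sysfs nodes can be converted to GiB — a small sketch, assuming the amdgpu nodes are present on your system:

```shell
# Print each GPU's used VRAM in GiB (amdgpu sysfs reports raw bytes)
for f in /sys/class/drm/card*/device/mem_info_vram_used; do
  awk -v f="$f" '{printf "%s: %.2f GiB\n", f, $1/1024/1024/1024}' "$f"
done
```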

@binarynoise commented on GitHub (Nov 12, 2025):

Yes, the reported memory now matches the sysfs node.

However, it still allocates more memory than calculated: with the 3 GB overhead, it allocates 14.8 GiB instead of the 11.1 GiB shown in `memory.required.partial`.

@binarynoise commented on GitHub (Nov 19, 2025):

I think I should rename this to "more memory allocated than available/calculated".
My computer crashes unless I manually add a memory overhead; without it, Ollama uses about 6 GB more than it should, which becomes a problem when that memory isn't there.

Is there a general issue with the underlying memory calculation?

@dhiltgen commented on GitHub (Nov 21, 2025):

> My computer gets crashed unless I manually add a RAM overhead so it can use 6GB more than it should, which turns out to be a problem when that memory isn't there.

What model were you trying to load? Our new engine should back off and try to load as many layers as it can, but it does rely on ROCm giving us an error when we ask for more memory than is available. If you're seeing a full system crash, that may be a hardware fault or a driver bug. Can you share logs showing the crash?

@binarynoise commented on GitHub (Nov 21, 2025):

It doesn't really crash; the PC just hangs. The last thing I see in `journalctl` before it freezes is that the model gets loaded (this may take a while).
It doesn't really matter which model; Command A 111B was especially bad, though, and I've since gotten rid of it.

@dhiltgen commented on GitHub (Nov 22, 2025):

`Command A 111B` runs on the old engine, which is less capable at finding the optimal memory layout. Can you try running a newer model that runs on the new engine, like gpt-oss:20b, qwen3, or gemma3?

Running `sudo dmesg -w` while trying to load a model might shed some light on potential kernel/driver issues.
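To narrow that down while reproducing the hang, the kernel log can also be filtered for GPU-related lines; the filter terms below are only illustrative suggestions:

```shell
# Follow kernel messages and keep only lines likely related to the GPU or
# memory pressure (adjust the pattern for your setup)
sudo dmesg -w | grep -iE 'amdgpu|vram|page fault|oom'
```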

@Muxelmann commented on GitHub (Dec 1, 2025):

I'm using an NVIDIA Blackwell (RTX 50xx) series GPU with 96 GB and also get this compute buffer allocation error. When I ran llama3.3:70b on two RTX 30xx cards, it used to work (slowly, but it ran). Since I've hit several compatibility problems with the new GPUs, maybe it's something similar here too?

Setup

I'm running Unraid and communicating with Ollama via open-webui. Granted, I haven't upgraded my mainboard yet, so the GPU is still running at PCIe 3.0 x16, but that should only affect loading times, not whether the model can load (right?).
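One sanity check on the numbers in the log below: with an f16 KV cache, the 128K context alone accounts for most of the memory. A back-of-the-envelope calculation from the logged model parameters (n_layer=80, n_embd_k_gqa=1024, KvSize=128000), assuming a 2-byte cache element:

```python
# Estimated KV cache size for a 70B llama model at a 128K context,
# assuming an f16 (2-byte) cache; parameters taken from the log below
n_layer = 80          # llama.block_count
n_embd_gqa = 1024     # n_embd_k_gqa == n_embd_v_gqa
n_ctx = 128_000       # KvSize from the load request
bytes_per_elem = 2    # f16

per_layer = 2 * n_embd_gqa * n_ctx * bytes_per_elem  # K and V tensors
total_gib = n_layer * per_layer / 2**30
print(f"{total_gib:.2f} GiB")  # → 39.06 GiB
```

That lines up with the logged split of 38.1 GiB KV cache on CUDA0 plus 1000 MiB on the CPU, so the failure is in fitting the compute buffers on top of this, not in the KV estimate itself.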

Error log

time=2025-12-01T09:04:50.624Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37605"
time=2025-12-01T09:04:51.046Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
llama_model_loader: loaded meta data with 36 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.1 70B Instruct 2024 12
llama_model_loader: - kv   3:                            general.version str              = 2024-12
llama_model_loader: - kv   4:                           general.finetune str              = Instruct
llama_model_loader: - kv   5:                           general.basename str              = Llama-3.1
llama_model_loader: - kv   6:                         general.size_label str              = 70B
llama_model_loader: - kv   7:                            general.license str              = llama3.1
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Llama 3.1 70B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  12:                               general.tags arr[str,5]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv  13:                          general.languages arr[str,7]       = ["fr", "it", "pt", "hi", "es", "th", ...
llama_model_loader: - kv  14:                          llama.block_count u32              = 80
llama_model_loader: - kv  15:                       llama.context_length u32              = 131072
llama_model_loader: - kv  16:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv  17:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv  18:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  19:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  21:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  22:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  23:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  24:                          general.file_type u32              = 15
llama_model_loader: - kv  25:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  26:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  27:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  28:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  29:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  30:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  31:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  32:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  33:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  35:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q4_K:  441 tensors
llama_model_loader: - type q5_K:   40 tensors
llama_model_loader: - type q6_K:   81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 39.59 GiB (4.82 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 70.55 B
print_info: general.name     = Llama 3.1 70B Instruct 2024 12
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-12-01T09:04:51.718Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d --port 44623"
time=2025-12-01T09:04:51.719Z level=INFO source=sched.go:443 msg="system memory" total="125.7 GiB" free="124.4 GiB" free_swap="0 B"
time=2025-12-01T09:04:51.719Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975 library=CUDA available="93.2 GiB" free="93.7 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-01T09:04:51.719Z level=INFO source=server.go:459 msg="loading model" "model layers"=81 requested=-1
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="37.2 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:245 msg="model weights" device=CPU size="1.8 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="38.1 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="1000.0 MiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="16.4 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:272 msg="total memory" size="94.5 GiB"
time=2025-12-01T09:04:51.745Z level=INFO source=runner.go:963 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition, compute capability 12.0, VMM: yes, ID: GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-12-01T09:04:51.831Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-12-01T09:04:51.842Z level=INFO source=runner.go:999 msg="Server listening on 127.0.0.1:44623"
time=2025-12-01T09:04:51.853Z level=INFO source=runner.go:893 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:128000 KvCacheType: NumThreads:16 GPULayers:78[ID:GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975 Layers:78(2..79)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-12-01T09:04:51.854Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-01T09:04:51.854Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975 utilizing NVML memory reporting free: 100603068416 total: 102641958912
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition) (0000:81:00.0) - 95942 MiB free
llama_model_loader: loaded meta data with 36 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.1 70B Instruct 2024 12
llama_model_loader: - kv   3:                            general.version str              = 2024-12
llama_model_loader: - kv   4:                           general.finetune str              = Instruct
llama_model_loader: - kv   5:                           general.basename str              = Llama-3.1
llama_model_loader: - kv   6:                         general.size_label str              = 70B
llama_model_loader: - kv   7:                            general.license str              = llama3.1
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Llama 3.1 70B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Meta Llama
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv  12:                               general.tags arr[str,5]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv  13:                          general.languages arr[str,7]       = ["fr", "it", "pt", "hi", "es", "th", ...
llama_model_loader: - kv  14:                          llama.block_count u32              = 80
llama_model_loader: - kv  15:                       llama.context_length u32              = 131072
llama_model_loader: - kv  16:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv  17:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv  18:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  19:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  21:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  22:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  23:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  24:                          general.file_type u32              = 15
llama_model_loader: - kv  25:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  26:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  27:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  28:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  29:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  30:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  31:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  32:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  33:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  35:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q4_K:  441 tensors
llama_model_loader: - type q5_K:   40 tensors
llama_model_loader: - type q6_K:   81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 39.59 GiB (4.82 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 8192
print_info: n_layer          = 80
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 28672
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: model type       = 70B
print_info: model params     = 70.55 B
print_info: general.name     = Llama 3.1 70B Instruct 2024 12
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
[GIN] 2025/12/01 - 09:05:52 | 200 |   37.098476ms |      172.20.0.2 | GET      "/api/tags"
[GIN] 2025/12/01 - 09:05:52 | 200 |      47.193µs |      172.20.0.2 | GET      "/api/ps"
[GIN] 2025/12/01 - 09:05:52 | 200 |      69.503µs |      172.20.0.2 | GET      "/api/version"
load_tensors: offloading 78 repeating layers to GPU
load_tensors: offloaded 78/81 layers to GPU
load_tensors:        CUDA0 model buffer size = 38119.75 MiB
load_tensors:   CPU_Mapped model buffer size = 40543.11 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 128000
llama_context: n_ctx_per_seq = 128000
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (128000) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.52 MiB
llama_kv_cache:      CUDA0 KV buffer size = 39000.00 MiB
llama_kv_cache:        CPU KV buffer size =  1000.00 MiB
llama_kv_cache: size = 40000.00 MiB (128000 cells,  80 layers,  1/1 seqs), K (f16): 20000.00 MiB, V (f16): 20000.00 MiB
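
For context on the allocation failure that follows: the KV buffer sizes logged above are exactly what the printed hyperparameters predict. A minimal sketch of that arithmetic (values taken only from the `print_info` and `load_tensors` lines above: `n_layer = 80`, `n_embd_k_gqa = n_embd_v_gqa = 1024`, `n_ctx = 128000`, f16 cache entries, 78 of the 80 transformer layers offloaded to GPU; the variable names are ours, not ollama's):

```python
BYTES_F16 = 2                       # f16 K/V entries, per the log
n_layer, n_embd_kv, n_ctx = 80, 1024, 128_000
gpu_layers = 78                     # "offloading 78 repeating layers to GPU"

# K and V are each n_layer * n_embd_kv * n_ctx f16 values.
k_mib = n_layer * n_embd_kv * n_ctx * BYTES_F16 / 2**20
v_mib = k_mib
total_mib = k_mib + v_mib                      # 40000.00 MiB, as logged
gpu_mib = total_mib * gpu_layers / n_layer     # 39000.00 MiB -> CUDA0 KV buffer
cpu_mib = total_mib * (n_layer - gpu_layers) / n_layer  # 1000.00 MiB -> CPU KV buffer

print(f"K: {k_mib:.2f} MiB, V: {v_mib:.2f} MiB, total: {total_mib:.2f} MiB")
print(f"GPU: {gpu_mib:.2f} MiB, CPU: {cpu_mib:.2f} MiB")
```

In other words, at the full 128k context the KV cache alone wants ~39 GiB of VRAM on top of the ~37 GiB of model weights already placed on CUDA0, so the subsequent `graph_reserve` compute-buffer allocation has nothing left to claim — consistent with the over-optimistic "available" estimate described in this issue.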
graph_reserve: failed to allocate compute buffers
SIGSEGV: segmentation violation
PC=0x14dc16320b22 m=4 sigcode=2 addr=0x14db7e4d4e98
signal arrived during cgo execution

goroutine 60 gp=0xc000583180 m=4 mp=0xc0000b1808 [syscall]:
runtime.cgocall(0x5653c54f4b50, 0xc00032fc00)
        runtime/cgocall.go:167 +0x4b fp=0xc00032fbd8 sp=0xc00032fba0 pc=0x5653c47d7b0b
github.com/ollama/ollama/llama._Cfunc_llama_init_from_model(0x14dc88000da0, {0x1f400, 0x200, 0x200, 0x1, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...})
        _cgo_gotypes.go:762 +0x4e fp=0xc00032fc00 sp=0xc00032fbd8 pc=0x5653c4b90aae
github.com/ollama/ollama/llama.NewContextWithModel.func1(...)
        github.com/ollama/ollama/llama/llama.go:317
github.com/ollama/ollama/llama.NewContextWithModel(0xc0003e0020, {{0x1f400, 0x200, 0x200, 0x1, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...}})
        github.com/ollama/ollama/llama/llama.go:317 +0x158 fp=0xc00032fda0 sp=0xc00032fc00 pc=0x5653c4b94cd8
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004f39a0, {{0xc0003b0fc0, 0x1, 0x1}, 0x4e, 0x0, 0x1, {0xc0003b0fb8, 0x1, 0x2}, ...}, ...)
        github.com/ollama/ollama/runner/llamarunner/runner.go:845 +0x178 fp=0xc00032fee8 sp=0xc00032fda0 pc=0x5653c4c4d418
github.com/ollama/ollama/runner/llamarunner.(*Server).load.gowrap2()
        github.com/ollama/ollama/runner/llamarunner/runner.go:932 +0x115 fp=0xc00032ffe0 sp=0xc00032fee8 pc=0x5653c4c4e635
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00032ffe8 sp=0xc00032ffe0 pc=0x5653c47e2e21
created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 57
        github.com/ollama/ollama/runner/llamarunner/runner.go:932 +0x88a

goroutine 1 gp=0xc000002380 m=nil [IO wait, 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000529790 sp=0xc000529770 pc=0x5653c47daf8e
runtime.netpollblock(0xc00050d7e0?, 0xc47746c6?, 0x53?)
        runtime/netpoll.go:575 +0xf7 fp=0xc0005297c8 sp=0xc000529790 pc=0x5653c47a02b7
internal/poll.runtime_pollWait(0x14dce4a56eb0, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc0005297e8 sp=0xc0005297c8 pc=0x5653c47da1a5
internal/poll.(*pollDesc).wait(0xc0001f7e00?, 0x900000036?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000529810 sp=0xc0005297e8 pc=0x5653c48620e7
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001f7e00)
        internal/poll/fd_unix.go:620 +0x295 fp=0xc0005298b8 sp=0xc000529810 pc=0x5653c48674b5
net.(*netFD).accept(0xc0001f7e00)
        net/fd_unix.go:172 +0x29 fp=0xc000529970 sp=0xc0005298b8 pc=0x5653c48da389
net.(*TCPListener).accept(0xc00044d4c0)
        net/tcpsock_posix.go:159 +0x1b fp=0xc0005299c0 sp=0xc000529970 pc=0x5653c48efd3b
net.(*TCPListener).Accept(0xc00044d4c0)
        net/tcpsock.go:380 +0x30 fp=0xc0005299f0 sp=0xc0005299c0 pc=0x5653c48eebf0
net/http.(*onceCloseListener).Accept(0xc00015c3f0?)
        <autogenerated>:1 +0x24 fp=0xc000529a08 sp=0xc0005299f0 pc=0x5653c4b063c4
net/http.(*Server).Serve(0xc000051800, {0x5653c5d02f60, 0xc00044d4c0})
        net/http/server.go:3424 +0x30c fp=0xc000529b38 sp=0xc000529a08 pc=0x5653c4addc8c
github.com/ollama/ollama/runner/llamarunner.Execute({0xc000034260, 0x4, 0x4})
        github.com/ollama/ollama/runner/llamarunner/runner.go:1000 +0x8f5 fp=0xc000529d08 sp=0xc000529b38 pc=0x5653c4c4eff5
github.com/ollama/ollama/runner.Execute({0xc000034250?, 0x0?, 0x0?})
        github.com/ollama/ollama/runner/runner.go:22 +0xd4 fp=0xc000529d30 sp=0xc000529d08 pc=0x5653c4cf5454
github.com/ollama/ollama/cmd.NewCLI.func2(0xc000051500?, {0x5653c58040ab?, 0x4?, 0x5653c58040af?})
        github.com/ollama/ollama/cmd/cmd.go:1841 +0x45 fp=0xc000529d58 sp=0xc000529d30 pc=0x5653c5484d85
github.com/spf13/cobra.(*Command).execute(0xc00015f508, {0xc00044d2c0, 0x4, 0x4})
        github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc000529e78 sp=0xc000529d58 pc=0x5653c49539dc
github.com/spf13/cobra.(*Command).ExecuteC(0xc00014af08)
        github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc000529f30 sp=0xc000529e78 pc=0x5653c4954225
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
        github.com/ollama/ollama/main.go:12 +0x4d fp=0xc000529f50 sp=0xc000529f30 pc=0x5653c548586d
runtime.main()
        runtime/proc.go:283 +0x29d fp=0xc000529fe0 sp=0xc000529f50 pc=0x5653c47a793d
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000529fe8 sp=0xc000529fe0 pc=0x5653c47e2e21

goroutine 2 gp=0xc000002e00 m=nil [force gc (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000aafa8 sp=0xc0000aaf88 pc=0x5653c47daf8e
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.forcegchelper()
        runtime/proc.go:348 +0xb8 fp=0xc0000aafe0 sp=0xc0000aafa8 pc=0x5653c47a7c78
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aafe8 sp=0xc0000aafe0 pc=0x5653c47e2e21
created by runtime.init.7 in goroutine 1
        runtime/proc.go:336 +0x1a

goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ab780 sp=0xc0000ab760 pc=0x5653c47daf8e
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.bgsweep(0xc0000d6000)
        runtime/mgcsweep.go:316 +0xdf fp=0xc0000ab7c8 sp=0xc0000ab780 pc=0x5653c479241f
runtime.gcenable.gowrap1()
        runtime/mgc.go:204 +0x25 fp=0xc0000ab7e0 sp=0xc0000ab7c8 pc=0x5653c4786805
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ab7e8 sp=0xc0000ab7e0 pc=0x5653c47e2e21
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x5653c59ce148?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000abf78 sp=0xc0000abf58 pc=0x5653c47daf8e
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.(*scavengerState).park(0x5653c65c5100)
        runtime/mgcscavenge.go:425 +0x49 fp=0xc0000abfa8 sp=0xc0000abf78 pc=0x5653c478fe69
runtime.bgscavenge(0xc0000d6000)
        runtime/mgcscavenge.go:658 +0x59 fp=0xc0000abfc8 sp=0xc0000abfa8 pc=0x5653c47903f9
runtime.gcenable.gowrap2()
        runtime/mgc.go:205 +0x25 fp=0xc0000abfe0 sp=0xc0000abfc8 pc=0x5653c47867a5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000abfe8 sp=0xc0000abfe0 pc=0x5653c47e2e21
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait, 1 minutes]:
runtime.gopark(0x1b8?, 0x5653c5ce15c0?, 0x1?, 0x23?, 0x5653c47e0e34?)
        runtime/proc.go:435 +0xce fp=0xc0000aa630 sp=0xc0000aa610 pc=0x5653c47daf8e
runtime.runfinq()
        runtime/mfinal.go:196 +0x107 fp=0xc0000aa7e0 sp=0xc0000aa630 pc=0x5653c47857c7
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aa7e8 sp=0xc0000aa7e0 pc=0x5653c47e2e21
created by runtime.createfing in goroutine 1
        runtime/mfinal.go:166 +0x3d

goroutine 6 gp=0xc0001fa8c0 m=nil [chan receive]:
runtime.gopark(0xc00025b680?, 0xc00050e060?, 0x60?, 0xc7?, 0x5653c48c0fc8?)
        runtime/proc.go:435 +0xce fp=0xc0000ac718 sp=0xc0000ac6f8 pc=0x5653c47daf8e
runtime.chanrecv(0xc00003e380, 0x0, 0x1)
        runtime/chan.go:664 +0x445 fp=0xc0000ac790 sp=0xc0000ac718 pc=0x5653c47772a5
runtime.chanrecv1(0x0?, 0x0?)
        runtime/chan.go:506 +0x12 fp=0xc0000ac7b8 sp=0xc0000ac790 pc=0x5653c4776e32
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
        runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
        runtime/mgc.go:1799 +0x2f fp=0xc0000ac7e0 sp=0xc0000ac7b8 pc=0x5653c47899af
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ac7e8 sp=0xc0000ac7e0 pc=0x5653c47e2e21
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
        runtime/mgc.go:1794 +0x85

goroutine 7 gp=0xc0001fac40 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000acf38 sp=0xc0000acf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000acfc8 sp=0xc0000acf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000acfe0 sp=0xc0000acfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000acfe8 sp=0xc0000acfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 8 gp=0xc0001fae00 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ad738 sp=0xc0000ad718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000ad7c8 sp=0xc0000ad738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000ad7e0 sp=0xc0000ad7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ad7e8 sp=0xc0000ad7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 18 gp=0xc000504000 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6738 sp=0xc0000a6718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a67c8 sp=0xc0000a6738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a67e0 sp=0xc0000a67c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a67e8 sp=0xc0000a67e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 34 gp=0xc000102380 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 35 gp=0xc000102540 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 9 gp=0xc0001fafc0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000adf38 sp=0xc0000adf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000adfc8 sp=0xc0000adf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000adfe0 sp=0xc0000adfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 10 gp=0xc0001fb180 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000116738 sp=0xc000116718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001167c8 sp=0xc000116738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001167e0 sp=0xc0001167c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001167e8 sp=0xc0001167e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6f38 sp=0xc0000a6f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a6fc8 sp=0xc0000a6f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a6fe0 sp=0xc0000a6fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a6fe8 sp=0xc0000a6fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 20 gp=0xc000504380 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a7738 sp=0xc0000a7718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a77c8 sp=0xc0000a7738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a77e0 sp=0xc0000a77c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a77e8 sp=0xc0000a77e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 36 gp=0xc000102700 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011b738 sp=0xc00011b718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011b7c8 sp=0xc00011b738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011b7e0 sp=0xc00011b7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011b7e8 sp=0xc00011b7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 37 gp=0xc0001028c0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011bf38 sp=0xc00011bf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011bfc8 sp=0xc00011bf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011bfe0 sp=0xc00011bfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 11 gp=0xc0001fb340 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000116f38 sp=0xc000116f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000116fc8 sp=0xc000116f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000116fe0 sp=0xc000116fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000116fe8 sp=0xc000116fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 12 gp=0xc0001fb500 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000117738 sp=0xc000117718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001177c8 sp=0xc000117738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001177e0 sp=0xc0001177c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001177e8 sp=0xc0001177e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 21 gp=0xc000504540 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a7f38 sp=0xc0000a7f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a7fc8 sp=0xc0000a7f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a7fe0 sp=0xc0000a7fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a7fe8 sp=0xc0000a7fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 38 gp=0xc000102a80 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011c738 sp=0xc00011c718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011c7c8 sp=0xc00011c738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011c7e0 sp=0xc00011c7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011c7e8 sp=0xc00011c7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 13 gp=0xc0001fb6c0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000117f38 sp=0xc000117f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000117fc8 sp=0xc000117f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000117fe0 sp=0xc000117fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000117fe8 sp=0xc000117fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 14 gp=0xc0001fb880 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000118738 sp=0xc000118718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001187c8 sp=0xc000118738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001187e0 sp=0xc0001187c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 15 gp=0xc0001fba40 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000118f38 sp=0xc000118f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000118fc8 sp=0xc000118f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000118fe0 sp=0xc000118fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000118fe8 sp=0xc000118fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 16 gp=0xc0001fbc00 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000119738 sp=0xc000119718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001197c8 sp=0xc000119738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001197e0 sp=0xc0001197c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001197e8 sp=0xc0001197e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 22 gp=0xc000504700 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a8738 sp=0xc0000a8718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a87c8 sp=0xc0000a8738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a87e0 sp=0xc0000a87c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a87e8 sp=0xc0000a87e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 23 gp=0xc0005048c0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a8f38 sp=0xc0000a8f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a8fc8 sp=0xc0000a8f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a8fe0 sp=0xc0000a8fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a8fe8 sp=0xc0000a8fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 39 gp=0xc000102c40 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011cf38 sp=0xc00011cf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011cfc8 sp=0xc00011cf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011cfe0 sp=0xc00011cfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011cfe8 sp=0xc00011cfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 50 gp=0xc0001fbdc0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000119f38 sp=0xc000119f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000119fc8 sp=0xc000119f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000119fe0 sp=0xc000119fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000119fe8 sp=0xc000119fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 40 gp=0xc000102e00 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011d738 sp=0xc00011d718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011d7c8 sp=0xc00011d738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011d7e0 sp=0xc00011d7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011d7e8 sp=0xc00011d7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 51 gp=0xc0004a6000 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0004ac738 sp=0xc0004ac718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0004ac7c8 sp=0xc0004ac738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0004ac7e0 sp=0xc0004ac7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004ac7e8 sp=0xc0004ac7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 52 gp=0xc0004a61c0 m=nil [GC worker (idle)]:
runtime.gopark(0xf34034205f61?, 0x1?, 0x9a?, 0xb8?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0004acf38 sp=0xc0004acf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0004acfc8 sp=0xc0004acf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0004acfe0 sp=0xc0004acfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004acfe8 sp=0xc0004acfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 53 gp=0xc0004a6380 m=nil [GC worker (idle)]:
runtime.gopark(0xf34034206817?, 0x1?, 0xeb?, 0x93?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0004ad738 sp=0xc0004ad718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0004ad7c8 sp=0xc0004ad738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0004ad7e0 sp=0xc0004ad7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004ad7e8 sp=0xc0004ad7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 54 gp=0xc0004a6540 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x5653c6674ea0?, 0x1?, 0x45?, 0x8c?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0004adf38 sp=0xc0004adf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0004adfc8 sp=0xc0004adf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0004adfe0 sp=0xc0004adfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004adfe8 sp=0xc0004adfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 41 gp=0xc000102fc0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x5653c6674ea0?, 0x1?, 0xfb?, 0x27?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011df38 sp=0xc00011df18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011dfc8 sp=0xc00011df38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011dfe0 sp=0xc00011dfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011dfe8 sp=0xc00011dfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 42 gp=0xc000103180 m=nil [GC worker (idle)]:
runtime.gopark(0x5653c6674ea0?, 0x1?, 0x6?, 0x11?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0004a8738 sp=0xc0004a8718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0004a87c8 sp=0xc0004a8738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0004a87e0 sp=0xc0004a87c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a87e8 sp=0xc0004a87e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 24 gp=0xc000504a80 m=nil [GC worker (idle)]:
runtime.gopark(0x5653c6674ea0?, 0x1?, 0x40?, 0x14?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a9738 sp=0xc0000a9718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a97c8 sp=0xc0000a9738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a97e0 sp=0xc0000a97c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a97e8 sp=0xc0000a97e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 55 gp=0xc0004a6700 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x5653c6674ea0?, 0x1?, 0x21?, 0xdf?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0004ae738 sp=0xc0004ae718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0004ae7c8 sp=0xc0004ae738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0004ae7e0 sp=0xc0004ae7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004ae7e8 sp=0xc0004ae7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 56 gp=0xc000582c40 m=nil [sync.WaitGroup.Wait, 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x80?, 0x81?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0004aa620 sp=0xc0004aa600 pc=0x5653c47daf8e
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.semacquire1(0xc0004f39c0, 0x0, 0x1, 0x0, 0x18)
        runtime/sema.go:188 +0x229 fp=0xc0004aa688 sp=0xc0004aa620 pc=0x5653c47baf09
sync.runtime_SemacquireWaitGroup(0x0?)
        runtime/sema.go:110 +0x25 fp=0xc0004aa6c0 sp=0xc0004aa688 pc=0x5653c47dc8c5
sync.(*WaitGroup).Wait(0x0?)
        sync/waitgroup.go:118 +0x48 fp=0xc0004aa6e8 sp=0xc0004aa6c0 pc=0x5653c47ee768
github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0004f39a0, {0x5653c5d05580, 0xc0003a49b0})
        github.com/ollama/ollama/runner/llamarunner/runner.go:359 +0x4b fp=0xc0004aa7b8 sp=0xc0004aa6e8 pc=0x5653c4c49dcb
github.com/ollama/ollama/runner/llamarunner.Execute.gowrap1()
        github.com/ollama/ollama/runner/llamarunner/runner.go:979 +0x28 fp=0xc0004aa7e0 sp=0xc0004aa7b8 pc=0x5653c4c4f268
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004aa7e8 sp=0xc0004aa7e0 pc=0x5653c47e2e21
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
        github.com/ollama/ollama/runner/llamarunner/runner.go:979 +0x4c5

goroutine 57 gp=0xc000582e00 m=nil [IO wait]:
runtime.gopark(0x14dc9d698118?, 0xc0001f7e80?, 0x70?, 0x99?, 0xb?)
        runtime/proc.go:435 +0xce fp=0xc0001a9948 sp=0xc0001a9928 pc=0x5653c47daf8e
runtime.netpollblock(0x5653c47fe638?, 0xc47746c6?, 0x53?)
        runtime/netpoll.go:575 +0xf7 fp=0xc0001a9980 sp=0xc0001a9948 pc=0x5653c47a02b7
internal/poll.runtime_pollWait(0x14dce4a56d98, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc0001a99a0 sp=0xc0001a9980 pc=0x5653c47da1a5
internal/poll.(*pollDesc).wait(0xc0001f7e80?, 0xc000178000?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0001a99c8 sp=0xc0001a99a0 pc=0x5653c48620e7
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0001f7e80, {0xc000178000, 0x1000, 0x1000})
        internal/poll/fd_unix.go:165 +0x27a fp=0xc0001a9a60 sp=0xc0001a99c8 pc=0x5653c48633da
net.(*netFD).Read(0xc0001f7e80, {0xc000178000?, 0xc0001a9ad0?, 0x5653c48625a5?})
        net/fd_posix.go:55 +0x25 fp=0xc0001a9aa8 sp=0xc0001a9a60 pc=0x5653c48d83e5
net.(*conn).Read(0xc000128580, {0xc000178000?, 0x0?, 0x0?})
        net/net.go:194 +0x45 fp=0xc0001a9af0 sp=0xc0001a9aa8 pc=0x5653c48e67a5
net/http.(*connReader).Read(0xc000158bd0, {0xc000178000, 0x1000, 0x1000})
        net/http/server.go:798 +0x159 fp=0xc0001a9b40 sp=0xc0001a9af0 pc=0x5653c4ad2b39
bufio.(*Reader).fill(0xc000110720)
        bufio/bufio.go:113 +0x103 fp=0xc0001a9b78 sp=0xc0001a9b40 pc=0x5653c48fdf43
bufio.(*Reader).Peek(0xc000110720, 0x4)
        bufio/bufio.go:152 +0x53 fp=0xc0001a9b98 sp=0xc0001a9b78 pc=0x5653c48fe073
net/http.(*conn).serve(0xc00015c3f0, {0x5653c5d05548, 0xc000158ae0})
        net/http/server.go:2137 +0x785 fp=0xc0001a9fb8 sp=0xc0001a9b98 pc=0x5653c4ad8925
net/http.(*Server).Serve.gowrap3()
        net/http/server.go:3454 +0x28 fp=0xc0001a9fe0 sp=0xc0001a9fb8 pc=0x5653c4ade088
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001a9fe8 sp=0xc0001a9fe0 pc=0x5653c47e2e21
created by net/http.(*Server).Serve in goroutine 1
        net/http/server.go:3454 +0x485

rax    0x14db7e4d4e98
rbx    0x1
rcx    0x14cfdc8f4250
rdx    0xffffffffdec8a450
rdi    0x14dc880862d0
rsi    0x0
rbp    0x14dc88025ce0
rsp    0x14dc97ffeaa8
r8     0x14cfdd00b
r9     0x7
r10    0x14cfdd00b7c0
r11    0xecb5761be6ed27e3
r12    0x14dc88025ce0
r13    0x0
r14    0x0
r15    0x14dc88000da0
rip    0x14dc16320b22
rflags 0x10202
cs     0x33
fs     0x0
gs     0x0
time=2025-12-01T09:06:14.730Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server not responding"
time=2025-12-01T09:06:24.015Z level=INFO source=sched.go:470 msg="Load failed" model=/root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d error="llama runner process has terminated: exit status 2"
<!-- gh-comment-id:3595428821 -->
@Muxelmann commented on GitHub (Dec 1, 2025):

I'm using an Nvidia Blackwell (RTX 50xx) series GPU with 96GB, and also get this compute buffer allocation error. When I ran `llama3.3:70b` on two RTX30xx cards, it used to work (it ran slow, but ran...). Since I've experienced several compatibility problems with the new GPUs, maybe it's similar here, too?

## Setup

I'm running Unraid and communicating with `ollama` via `open-webui`. Granted, I haven't upgraded my mainboard yet, so the GPU is still running on PCIe 3@16x, but that should only impact loading times, not whether the model can load or not (right?).

## Error log

```
time=2025-12-01T09:04:50.624Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37605"
time=2025-12-01T09:04:51.046Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
llama_model_loader: loaded meta data with 36 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.1 70B Instruct 2024 12
llama_model_loader: - kv 3: general.version str = 2024-12
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = Llama-3.1
llama_model_loader: - kv 6: general.size_label str = 70B
llama_model_loader: - kv 7: general.license str = llama3.1
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Llama 3.1 70B
llama_model_loader: - kv 10: general.base_model.0.organization str = Meta Llama
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv 12: general.tags arr[str,5] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 13: general.languages arr[str,7] = ["fr", "it", "pt", "hi", "es", "th", ...
llama_model_loader: - kv 14: llama.block_count u32 = 80
llama_model_loader: - kv 15: llama.context_length u32 = 131072
llama_model_loader: - kv 16: llama.embedding_length u32 = 8192
llama_model_loader: - kv 17: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 18: llama.attention.head_count u32 = 64
llama_model_loader: - kv 19: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 20: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 21: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 22: llama.attention.key_length u32 = 128
llama_model_loader: - kv 23: llama.attention.value_length u32 = 128
llama_model_loader: - kv 24: general.file_type u32 = 15
llama_model_loader: - kv 25: llama.vocab_size u32 = 128256
llama_model_loader: - kv 26: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 27: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 28: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 29: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 30: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 31: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 32: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 33: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 34: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 35: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 39.59 GiB (4.82 BPW)
load: printing all EOG tokens:
load: - 128001 ('<|end_of_text|>')
load: - 128008 ('<|eom_id|>')
load: - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 70.55 B
print_info: general.name = Llama 3.1 70B Instruct 2024 12
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin_of_text|>'
print_info: EOS token = 128009 '<|eot_id|>'
print_info: EOT token = 128009 '<|eot_id|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end_of_text|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-12-01T09:04:51.718Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d --port 44623"
time=2025-12-01T09:04:51.719Z level=INFO source=sched.go:443 msg="system memory" total="125.7 GiB" free="124.4 GiB" free_swap="0 B"
time=2025-12-01T09:04:51.719Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975 library=CUDA available="93.2 GiB" free="93.7 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-01T09:04:51.719Z level=INFO source=server.go:459 msg="loading model" "model layers"=81 requested=-1
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="37.2 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:245 msg="model weights" device=CPU size="1.8 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="38.1 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="1000.0 MiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="16.4 GiB"
time=2025-12-01T09:04:51.722Z level=INFO source=device.go:272 msg="total memory" size="94.5 GiB"
time=2025-12-01T09:04:51.745Z level=INFO source=runner.go:963 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition, compute capability 12.0, VMM: yes, ID: GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-12-01T09:04:51.831Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-12-01T09:04:51.842Z level=INFO source=runner.go:999 msg="Server listening on 127.0.0.1:44623"
time=2025-12-01T09:04:51.853Z level=INFO source=runner.go:893 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:128000 KvCacheType: NumThreads:16 GPULayers:78[ID:GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975 Layers:78(2..79)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-12-01T09:04:51.854Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-01T09:04:51.854Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-1ffd96f2-58ea-681b-af9d-af4d065a9975 utilizing NVML memory reporting free: 100603068416 total: 102641958912
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition) (0000:81:00.0) - 95942 MiB free
llama_model_loader: loaded meta data with 36 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.1 70B Instruct 2024 12
llama_model_loader: - kv 3: general.version str = 2024-12
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = Llama-3.1
llama_model_loader: - kv 6: general.size_label str = 70B
llama_model_loader: - kv 7: general.license str = llama3.1
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Llama 3.1 70B
llama_model_loader: - kv 10: general.base_model.0.organization str = Meta Llama
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Lla...
llama_model_loader: - kv 12: general.tags arr[str,5] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 13: general.languages arr[str,7] = ["fr", "it", "pt", "hi", "es", "th", ...
llama_model_loader: - kv 14: llama.block_count u32 = 80
llama_model_loader: - kv 15: llama.context_length u32 = 131072
llama_model_loader: - kv 16: llama.embedding_length u32 = 8192
llama_model_loader: - kv 17: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 18: llama.attention.head_count u32 = 64
llama_model_loader: - kv 19: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 20: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 21: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 22: llama.attention.key_length u32 = 128
llama_model_loader: - kv 23: llama.attention.value_length u32 = 128
llama_model_loader: - kv 24: general.file_type u32 = 15
llama_model_loader: - kv 25: llama.vocab_size u32 = 128256
llama_model_loader: - kv 26: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 27: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 28: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 29: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 30: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 31: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 32: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 33: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 34: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 35: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 39.59 GiB (4.82 BPW)
load: printing all EOG tokens:
load: - 128001 ('<|end_of_text|>')
load: - 128008 ('<|eom_id|>')
load: - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 8192
print_info: n_layer = 80
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 28672
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: model type = 70B
print_info: model params = 70.55 B
print_info: general.name = Llama 3.1 70B Instruct 2024 12
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin_of_text|>'
print_info: EOS token = 128009 '<|eot_id|>'
print_info: EOT token = 128009 '<|eot_id|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end_of_text|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
[GIN] 2025/12/01 - 09:05:52 | 200 | 37.098476ms | 172.20.0.2 | GET "/api/tags"
[GIN] 2025/12/01 - 09:05:52 | 200 | 47.193µs | 172.20.0.2 | GET "/api/ps"
[GIN] 2025/12/01 - 09:05:52 | 200 | 69.503µs | 172.20.0.2 | GET "/api/version"
load_tensors: offloading 78 repeating layers to GPU
load_tensors: offloaded 78/81 layers to GPU
load_tensors: CUDA0 model buffer size = 38119.75 MiB
load_tensors: CPU_Mapped model buffer size = 40543.11 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 128000
llama_context: n_ctx_per_seq = 128000
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = disabled
llama_context: kv_unified = false
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (128000) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.52 MiB
llama_kv_cache: CUDA0 KV buffer size = 39000.00 MiB
llama_kv_cache: CPU KV buffer size = 1000.00 MiB
llama_kv_cache: size = 40000.00 MiB (128000 cells, 80 layers, 1/1 seqs), K (f16): 20000.00 MiB, V (f16): 20000.00 MiB
graph_reserve: failed to allocate compute buffers
SIGSEGV: segmentation violation
PC=0x14dc16320b22 m=4 sigcode=2 addr=0x14db7e4d4e98
signal arrived during cgo execution

goroutine 60 gp=0xc000583180 m=4 mp=0xc0000b1808 [syscall]:
runtime.cgocall(0x5653c54f4b50, 0xc00032fc00)
        runtime/cgocall.go:167 +0x4b fp=0xc00032fbd8 sp=0xc00032fba0 pc=0x5653c47d7b0b
github.com/ollama/ollama/llama._Cfunc_llama_init_from_model(0x14dc88000da0, {0x1f400, 0x200, 0x200, 0x1, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...})
        _cgo_gotypes.go:762 +0x4e fp=0xc00032fc00 sp=0xc00032fbd8 pc=0x5653c4b90aae
github.com/ollama/ollama/llama.NewContextWithModel.func1(...)
        github.com/ollama/ollama/llama/llama.go:317
github.com/ollama/ollama/llama.NewContextWithModel(0xc0003e0020, {{0x1f400, 0x200, 0x200, 0x1, 0x10, 0x10, 0xffffffff, 0xffffffff, 0xffffffff, ...}})
        github.com/ollama/ollama/llama/llama.go:317 +0x158 fp=0xc00032fda0 sp=0xc00032fc00 pc=0x5653c4b94cd8
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004f39a0, {{0xc0003b0fc0, 0x1, 0x1}, 0x4e, 0x0, 0x1, {0xc0003b0fb8, 0x1, 0x2}, ...}, ...)
        github.com/ollama/ollama/runner/llamarunner/runner.go:845 +0x178 fp=0xc00032fee8 sp=0xc00032fda0 pc=0x5653c4c4d418
github.com/ollama/ollama/runner/llamarunner.(*Server).load.gowrap2()
        github.com/ollama/ollama/runner/llamarunner/runner.go:932 +0x115 fp=0xc00032ffe0 sp=0xc00032fee8 pc=0x5653c4c4e635
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00032ffe8 sp=0xc00032ffe0 pc=0x5653c47e2e21
created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 57
        github.com/ollama/ollama/runner/llamarunner/runner.go:932 +0x88a

goroutine 1 gp=0xc000002380 m=nil [IO wait, 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000529790 sp=0xc000529770 pc=0x5653c47daf8e
runtime.netpollblock(0xc00050d7e0?, 0xc47746c6?, 0x53?)
        runtime/netpoll.go:575 +0xf7 fp=0xc0005297c8 sp=0xc000529790 pc=0x5653c47a02b7
internal/poll.runtime_pollWait(0x14dce4a56eb0, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc0005297e8 sp=0xc0005297c8 pc=0x5653c47da1a5
internal/poll.(*pollDesc).wait(0xc0001f7e00?, 0x900000036?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000529810 sp=0xc0005297e8 pc=0x5653c48620e7
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001f7e00)
        internal/poll/fd_unix.go:620 +0x295 fp=0xc0005298b8 sp=0xc000529810 pc=0x5653c48674b5
net.(*netFD).accept(0xc0001f7e00)
        net/fd_unix.go:172 +0x29 fp=0xc000529970 sp=0xc0005298b8 pc=0x5653c48da389
net.(*TCPListener).accept(0xc00044d4c0)
        net/tcpsock_posix.go:159 +0x1b fp=0xc0005299c0 sp=0xc000529970 pc=0x5653c48efd3b
net.(*TCPListener).Accept(0xc00044d4c0)
        net/tcpsock.go:380 +0x30 fp=0xc0005299f0 sp=0xc0005299c0 pc=0x5653c48eebf0
net/http.(*onceCloseListener).Accept(0xc00015c3f0?)
        <autogenerated>:1 +0x24 fp=0xc000529a08 sp=0xc0005299f0 pc=0x5653c4b063c4
net/http.(*Server).Serve(0xc000051800, {0x5653c5d02f60, 0xc00044d4c0})
        net/http/server.go:3424 +0x30c fp=0xc000529b38 sp=0xc000529a08 pc=0x5653c4addc8c
github.com/ollama/ollama/runner/llamarunner.Execute({0xc000034260, 0x4, 0x4})
        github.com/ollama/ollama/runner/llamarunner/runner.go:1000 +0x8f5 fp=0xc000529d08 sp=0xc000529b38 pc=0x5653c4c4eff5
github.com/ollama/ollama/runner.Execute({0xc000034250?, 0x0?, 0x0?})
        github.com/ollama/ollama/runner/runner.go:22 +0xd4 fp=0xc000529d30 sp=0xc000529d08 pc=0x5653c4cf5454
github.com/ollama/ollama/cmd.NewCLI.func2(0xc000051500?, {0x5653c58040ab?, 0x4?, 0x5653c58040af?})
        github.com/ollama/ollama/cmd/cmd.go:1841 +0x45 fp=0xc000529d58 sp=0xc000529d30 pc=0x5653c5484d85
github.com/spf13/cobra.(*Command).execute(0xc00015f508, {0xc00044d2c0, 0x4, 0x4})
        github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc000529e78 sp=0xc000529d58 pc=0x5653c49539dc
github.com/spf13/cobra.(*Command).ExecuteC(0xc00014af08)
        github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc000529f30 sp=0xc000529e78 pc=0x5653c4954225
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
        github.com/ollama/ollama/main.go:12 +0x4d fp=0xc000529f50 sp=0xc000529f30 pc=0x5653c548586d
runtime.main()
        runtime/proc.go:283 +0x29d fp=0xc000529fe0 sp=0xc000529f50 pc=0x5653c47a793d
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000529fe8 sp=0xc000529fe0 pc=0x5653c47e2e21

goroutine 2 gp=0xc000002e00 m=nil [force gc (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000aafa8 sp=0xc0000aaf88 pc=0x5653c47daf8e
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.forcegchelper()
        runtime/proc.go:348 +0xb8 fp=0xc0000aafe0 sp=0xc0000aafa8 pc=0x5653c47a7c78
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aafe8 sp=0xc0000aafe0 pc=0x5653c47e2e21
created by runtime.init.7 in goroutine 1
        runtime/proc.go:336 +0x1a

goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ab780 sp=0xc0000ab760 pc=0x5653c47daf8e
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.bgsweep(0xc0000d6000)
        runtime/mgcsweep.go:316 +0xdf fp=0xc0000ab7c8 sp=0xc0000ab780 pc=0x5653c479241f
runtime.gcenable.gowrap1()
        runtime/mgc.go:204 +0x25 fp=0xc0000ab7e0 sp=0xc0000ab7c8 pc=0x5653c4786805
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ab7e8 sp=0xc0000ab7e0 pc=0x5653c47e2e21
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x5653c59ce148?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000abf78 sp=0xc0000abf58 pc=0x5653c47daf8e
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.(*scavengerState).park(0x5653c65c5100)
        runtime/mgcscavenge.go:425 +0x49 fp=0xc0000abfa8 sp=0xc0000abf78 pc=0x5653c478fe69
runtime.bgscavenge(0xc0000d6000)
        runtime/mgcscavenge.go:658 +0x59 fp=0xc0000abfc8 sp=0xc0000abfa8 pc=0x5653c47903f9
runtime.gcenable.gowrap2()
        runtime/mgc.go:205 +0x25 fp=0xc0000abfe0 sp=0xc0000abfc8 pc=0x5653c47867a5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000abfe8 sp=0xc0000abfe0 pc=0x5653c47e2e21
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait, 1 minutes]:
runtime.gopark(0x1b8?, 0x5653c5ce15c0?, 0x1?, 0x23?, 0x5653c47e0e34?)
        runtime/proc.go:435 +0xce fp=0xc0000aa630 sp=0xc0000aa610 pc=0x5653c47daf8e
runtime.runfinq()
        runtime/mfinal.go:196 +0x107 fp=0xc0000aa7e0 sp=0xc0000aa630 pc=0x5653c47857c7
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aa7e8 sp=0xc0000aa7e0 pc=0x5653c47e2e21
created by runtime.createfing in goroutine 1
        runtime/mfinal.go:166 +0x3d

goroutine 6 gp=0xc0001fa8c0 m=nil [chan receive]:
runtime.gopark(0xc00025b680?, 0xc00050e060?, 0x60?, 0xc7?, 0x5653c48c0fc8?)
        runtime/proc.go:435 +0xce fp=0xc0000ac718 sp=0xc0000ac6f8 pc=0x5653c47daf8e
runtime.chanrecv(0xc00003e380, 0x0, 0x1)
        runtime/chan.go:664 +0x445 fp=0xc0000ac790 sp=0xc0000ac718 pc=0x5653c47772a5
runtime.chanrecv1(0x0?, 0x0?)
        runtime/chan.go:506 +0x12 fp=0xc0000ac7b8 sp=0xc0000ac790 pc=0x5653c4776e32
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
        runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
        runtime/mgc.go:1799 +0x2f fp=0xc0000ac7e0 sp=0xc0000ac7b8 pc=0x5653c47899af
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ac7e8 sp=0xc0000ac7e0 pc=0x5653c47e2e21
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
        runtime/mgc.go:1794 +0x85

goroutine 7 gp=0xc0001fac40 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000acf38 sp=0xc0000acf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000acfc8 sp=0xc0000acf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000acfe0 sp=0xc0000acfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000acfe8 sp=0xc0000acfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 8 gp=0xc0001fae00 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ad738 sp=0xc0000ad718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000ad7c8 sp=0xc0000ad738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000ad7e0 sp=0xc0000ad7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ad7e8 sp=0xc0000ad7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 18 gp=0xc000504000 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6738 sp=0xc0000a6718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a67c8 sp=0xc0000a6738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a67e0 sp=0xc0000a67c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a67e8 sp=0xc0000a67e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 34 gp=0xc000102380 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 35 gp=0xc000102540 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 9 gp=0xc0001fafc0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000adf38 sp=0xc0000adf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000adfc8 sp=0xc0000adf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000adfe0 sp=0xc0000adfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 10 gp=0xc0001fb180 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000116738 sp=0xc000116718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001167c8 sp=0xc000116738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001167e0 sp=0xc0001167c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001167e8 sp=0xc0001167e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6f38 sp=0xc0000a6f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a6fc8 sp=0xc0000a6f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a6fe0 sp=0xc0000a6fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a6fe8 sp=0xc0000a6fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 20 gp=0xc000504380 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a7738 sp=0xc0000a7718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a77c8 sp=0xc0000a7738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a77e0 sp=0xc0000a77c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a77e8 sp=0xc0000a77e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 36 gp=0xc000102700 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011b738 sp=0xc00011b718 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011b7c8 sp=0xc00011b738 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011b7e0 sp=0xc00011b7c8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011b7e8 sp=0xc00011b7e0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 37 gp=0xc0001028c0 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011bf38 sp=0xc00011bf18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011bfc8 sp=0xc00011bf38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011bfe0 sp=0xc00011bfc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 11 gp=0xc0001fb340 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000116f38 sp=0xc000116f18 pc=0x5653c47daf8e
runtime.gcBgMarkWorker(0xc00003f7a0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000116fc8 sp=0xc000116f38 pc=0x5653c4788cc9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000116fe0 sp=0xc000116fc8 pc=0x5653c4788ba5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000116fe8 sp=0xc000116fe0 pc=0x5653c47e2e21
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 12 gp=0xc0001fb500 m=nil [GC worker (idle), 1 minutes]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000117738 sp=0xc000117718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0001177c8 sp=0xc000117738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0001177e0 sp=0xc0001177c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001177e8 sp=0xc0001177e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 21 gp=0xc000504540 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0000a7f38 sp=0xc0000a7f18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0000a7fc8 sp=0xc0000a7f38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0000a7fe0 sp=0xc0000a7fc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a7fe8 sp=0xc0000a7fe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 38 gp=0xc000102a80 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011c738 sp=0xc00011c718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc00011c7c8 sp=0xc00011c738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011c7e0 sp=0xc00011c7c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011c7e8 sp=0xc00011c7e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 13 gp=0xc0001fb6c0 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc000117f38 sp=0xc000117f18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc000117fc8 sp=0xc000117f38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000117fe0 sp=0xc000117fc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000117fe8 sp=0xc000117fe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 14 gp=0xc0001fb880 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000118738 sp=0xc000118718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0001187c8 sp=0xc000118738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0001187e0 sp=0xc0001187c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 15 gp=0xc0001fba40 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000118f38 sp=0xc000118f18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc000118fc8 sp=0xc000118f38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000118fe0 sp=0xc000118fc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000118fe8 sp=0xc000118fe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 16 gp=0xc0001fbc00 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc000119738 sp=0xc000119718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0001197c8 sp=0xc000119738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0001197e0 sp=0xc0001197c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001197e8 sp=0xc0001197e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 22 gp=0xc000504700 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0000a8738 sp=0xc0000a8718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0000a87c8 sp=0xc0000a8738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0000a87e0 sp=0xc0000a87c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a87e8 sp=0xc0000a87e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 23 gp=0xc0005048c0 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0000a8f38 sp=0xc0000a8f18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0000a8fc8 sp=0xc0000a8f38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0000a8fe0 sp=0xc0000a8fc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a8fe8 sp=0xc0000a8fe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 39 gp=0xc000102c40 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc00011cf38 sp=0xc00011cf18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc00011cfc8 sp=0xc00011cf38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011cfe0 sp=0xc00011cfc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011cfe8 sp=0xc00011cfe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 50 gp=0xc0001fbdc0 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000119f38 sp=0xc000119f18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc000119fc8 sp=0xc000119f38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000119fe0 sp=0xc000119fc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000119fe8 sp=0xc000119fe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 40 gp=0xc000102e00 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011d738 sp=0xc00011d718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc00011d7c8 sp=0xc00011d738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011d7e0 sp=0xc00011d7c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011d7e8 sp=0xc00011d7e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 51 gp=0xc0004a6000 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc0004ac738 sp=0xc0004ac718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0004ac7c8 sp=0xc0004ac738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0004ac7e0 sp=0xc0004ac7c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0004ac7e8 sp=0xc0004ac7e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 52 gp=0xc0004a61c0 m=nil [GC worker (idle)]: runtime.gopark(0xf34034205f61?, 0x1?, 0x9a?, 0xb8?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0004acf38 sp=0xc0004acf18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0004acfc8 sp=0xc0004acf38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0004acfe0 sp=0xc0004acfc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0004acfe8 sp=0xc0004acfe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 53 gp=0xc0004a6380 m=nil [GC worker (idle)]: runtime.gopark(0xf34034206817?, 0x1?, 0xeb?, 0x93?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0004ad738 sp=0xc0004ad718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0004ad7c8 sp=0xc0004ad738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0004ad7e0 sp=0xc0004ad7c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0004ad7e8 sp=0xc0004ad7e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 54 gp=0xc0004a6540 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x5653c6674ea0?, 0x1?, 0x45?, 0x8c?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc0004adf38 sp=0xc0004adf18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0004adfc8 sp=0xc0004adf38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0004adfe0 sp=0xc0004adfc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0004adfe8 sp=0xc0004adfe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 41 gp=0xc000102fc0 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x5653c6674ea0?, 0x1?, 0xfb?, 0x27?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011df38 sp=0xc00011df18 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc00011dfc8 sp=0xc00011df38 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011dfe0 sp=0xc00011dfc8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011dfe8 sp=0xc00011dfe0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 42 gp=0xc000103180 m=nil [GC worker (idle)]: runtime.gopark(0x5653c6674ea0?, 0x1?, 0x6?, 0x11?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0004a8738 sp=0xc0004a8718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0004a87c8 sp=0xc0004a8738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0004a87e0 sp=0xc0004a87c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a87e8 sp=0xc0004a87e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 24 gp=0xc000504a80 m=nil [GC worker (idle)]: runtime.gopark(0x5653c6674ea0?, 0x1?, 0x40?, 0x14?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc0000a9738 sp=0xc0000a9718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0000a97c8 sp=0xc0000a9738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0000a97e0 sp=0xc0000a97c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a97e8 sp=0xc0000a97e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 55 gp=0xc0004a6700 m=nil [GC worker (idle), 1 minutes]: runtime.gopark(0x5653c6674ea0?, 0x1?, 0x21?, 0xdf?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0004ae738 sp=0xc0004ae718 pc=0x5653c47daf8e runtime.gcBgMarkWorker(0xc00003f7a0) runtime/mgc.go:1423 +0xe9 fp=0xc0004ae7c8 sp=0xc0004ae738 pc=0x5653c4788cc9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0004ae7e0 sp=0xc0004ae7c8 pc=0x5653c4788ba5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0004ae7e8 sp=0xc0004ae7e0 pc=0x5653c47e2e21 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 56 gp=0xc000582c40 m=nil [sync.WaitGroup.Wait, 1 minutes]: runtime.gopark(0x0?, 0x0?, 0x80?, 0x81?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0004aa620 sp=0xc0004aa600 pc=0x5653c47daf8e runtime.goparkunlock(...) runtime/proc.go:441 runtime.semacquire1(0xc0004f39c0, 0x0, 0x1, 0x0, 0x18) runtime/sema.go:188 +0x229 fp=0xc0004aa688 sp=0xc0004aa620 pc=0x5653c47baf09 sync.runtime_SemacquireWaitGroup(0x0?) runtime/sema.go:110 +0x25 fp=0xc0004aa6c0 sp=0xc0004aa688 pc=0x5653c47dc8c5 sync.(*WaitGroup).Wait(0x0?) 
sync/waitgroup.go:118 +0x48 fp=0xc0004aa6e8 sp=0xc0004aa6c0 pc=0x5653c47ee768 github.com/ollama/ollama/runner/llamarunner.(*Server).run(0xc0004f39a0, {0x5653c5d05580, 0xc0003a49b0}) github.com/ollama/ollama/runner/llamarunner/runner.go:359 +0x4b fp=0xc0004aa7b8 sp=0xc0004aa6e8 pc=0x5653c4c49dcb github.com/ollama/ollama/runner/llamarunner.Execute.gowrap1() github.com/ollama/ollama/runner/llamarunner/runner.go:979 +0x28 fp=0xc0004aa7e0 sp=0xc0004aa7b8 pc=0x5653c4c4f268 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0004aa7e8 sp=0xc0004aa7e0 pc=0x5653c47e2e21 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 github.com/ollama/ollama/runner/llamarunner/runner.go:979 +0x4c5 goroutine 57 gp=0xc000582e00 m=nil [IO wait]: runtime.gopark(0x14dc9d698118?, 0xc0001f7e80?, 0x70?, 0x99?, 0xb?) runtime/proc.go:435 +0xce fp=0xc0001a9948 sp=0xc0001a9928 pc=0x5653c47daf8e runtime.netpollblock(0x5653c47fe638?, 0xc47746c6?, 0x53?) runtime/netpoll.go:575 +0xf7 fp=0xc0001a9980 sp=0xc0001a9948 pc=0x5653c47a02b7 internal/poll.runtime_pollWait(0x14dce4a56d98, 0x72) runtime/netpoll.go:351 +0x85 fp=0xc0001a99a0 sp=0xc0001a9980 pc=0x5653c47da1a5 internal/poll.(*pollDesc).wait(0xc0001f7e80?, 0xc000178000?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0001a99c8 sp=0xc0001a99a0 pc=0x5653c48620e7 internal/poll.(*pollDesc).waitRead(...) 
internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc0001f7e80, {0xc000178000, 0x1000, 0x1000}) internal/poll/fd_unix.go:165 +0x27a fp=0xc0001a9a60 sp=0xc0001a99c8 pc=0x5653c48633da net.(*netFD).Read(0xc0001f7e80, {0xc000178000?, 0xc0001a9ad0?, 0x5653c48625a5?}) net/fd_posix.go:55 +0x25 fp=0xc0001a9aa8 sp=0xc0001a9a60 pc=0x5653c48d83e5 net.(*conn).Read(0xc000128580, {0xc000178000?, 0x0?, 0x0?}) net/net.go:194 +0x45 fp=0xc0001a9af0 sp=0xc0001a9aa8 pc=0x5653c48e67a5 net/http.(*connReader).Read(0xc000158bd0, {0xc000178000, 0x1000, 0x1000}) net/http/server.go:798 +0x159 fp=0xc0001a9b40 sp=0xc0001a9af0 pc=0x5653c4ad2b39 bufio.(*Reader).fill(0xc000110720) bufio/bufio.go:113 +0x103 fp=0xc0001a9b78 sp=0xc0001a9b40 pc=0x5653c48fdf43 bufio.(*Reader).Peek(0xc000110720, 0x4) bufio/bufio.go:152 +0x53 fp=0xc0001a9b98 sp=0xc0001a9b78 pc=0x5653c48fe073 net/http.(*conn).serve(0xc00015c3f0, {0x5653c5d05548, 0xc000158ae0}) net/http/server.go:2137 +0x785 fp=0xc0001a9fb8 sp=0xc0001a9b98 pc=0x5653c4ad8925 net/http.(*Server).Serve.gowrap3() net/http/server.go:3454 +0x28 fp=0xc0001a9fe0 sp=0xc0001a9fb8 pc=0x5653c4ade088 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001a9fe8 sp=0xc0001a9fe0 pc=0x5653c47e2e21 created by net/http.(*Server).Serve in goroutine 1 net/http/server.go:3454 +0x485 rax 0x14db7e4d4e98 rbx 0x1 rcx 0x14cfdc8f4250 rdx 0xffffffffdec8a450 rdi 0x14dc880862d0 rsi 0x0 rbp 0x14dc88025ce0 rsp 0x14dc97ffeaa8 r8 0x14cfdd00b r9 0x7 r10 0x14cfdd00b7c0 r11 0xecb5761be6ed27e3 r12 0x14dc88025ce0 r13 0x0 r14 0x0 r15 0x14dc88000da0 rip 0x14dc16320b22 rflags 0x10202 cs 0x33 fs 0x0 gs 0x0 time=2025-12-01T09:06:14.730Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server not responding" time=2025-12-01T09:06:24.015Z level=INFO source=sched.go:470 msg="Load failed" model=/root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d error="llama runner process has terminated: exit status 2" 
```
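
For reference, the "3GB of overhead" workaround described in the report corresponds to Ollama's `OLLAMA_GPU_OVERHEAD` environment variable, which reserves a fixed number of bytes of VRAM per GPU that the scheduler treats as unavailable. A minimal sketch of computing and exporting that value (the systemd override shown in the comments is illustrative; service names and paths may differ on your system):

```shell
# Reserve ~3 GiB of VRAM so the scheduler offloads fewer layers.
# OLLAMA_GPU_OVERHEAD is specified in bytes.
OVERHEAD_BYTES=$((3 * 1024 * 1024 * 1024))

# For a systemd-managed install this would typically go in an override:
#   systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_GPU_OVERHEAD=3221225472"
echo "OLLAMA_GPU_OVERHEAD=${OVERHEAD_BYTES}"
```

This only papers over the underlying problem: the free-VRAM probe still reports memory that other applications are using, so the reserved amount has to be guessed.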
Reference: github-starred/ollama#70679