[GH-ISSUE #9863] "ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed" using gemma3 #52971

Closed
opened 2026-04-29 01:31:11 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @leokeba on GitHub (Mar 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9863

Originally assigned to: @jmorganca on GitHub.

What is the issue?

I get this error while trying to analyse tweets using `gemma3:latest`.

It seems almost, but not completely, deterministic: it nearly always fails on the same requests, yet if I retry one of those requests it occasionally goes through, which is strange.

It looks like something related to the tokenizer, but that's about as far as my expertise takes me.

This happens both on my Windows 11 system with a 3090 Ti and on my Mac Studio running macOS 13.5.

I tried both Ollama 0.6.1 stable and 0.6.2; same issue.
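
For reference, here's roughly the shape of the requests that trigger it. This is a minimal sketch rather than my exact client code: the prompt text is tweet content that varies per request, and I'm assuming Ollama's default port here.

```python
# Hypothetical reproduction sketch (prompt text and host are placeholders).
# Sends a non-streaming chat request to a local Ollama instance; when the bug
# hits, the server answers with HTTP 500 and the runner logs the GGML_ASSERT
# failure shown in the logs below.
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/chat"  # adjust host/port for your setup

payload = {
    "model": "gemma3:latest",
    "stream": False,
    "messages": [
        {"role": "system", "content": "You analyse tweets."},
        {"role": "user", "content": "Analyse this tweet: <tweet text goes here>"},
    ],
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```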

Relevant log output

------------------------
WINDOWS 11

ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed
[GIN] 2025/03/18 - 18:52:46 | 500 |    171.5499ms |    192.168.1.60 | POST     "/api/chat"
time=2025-03-18T18:52:46.930+01:00 level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 0xc0000409"
time=2025-03-18T18:52:51.899+01:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0180919 model=C:\Users\Ascidiacea\.ollama\models\blobs\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada
time=2025-03-18T18:52:52.036+01:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\Ascidiacea\.ollama\models\blobs\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada gpu=GPU-d744590e-2a3b-8e2e-f4bc-988c67d6c902 parallel=4 available=24125415424 required="6.2 GiB"
time=2025-03-18T18:52:52.058+01:00 level=INFO source=server.go:105 msg="system memory" total="31.9 GiB" free="24.2 GiB" free_swap="23.4 GiB"
time=2025-03-18T18:52:52.059+01:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[22.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.1 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="1.8 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-03-18T18:52:52.144+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T18:52:52.144+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T18:52:52.149+01:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2677817000000005 model=C:\Users\Ascidiacea\.ollama\models\blobs\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada
time=2025-03-18T18:52:52.149+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T18:52:52.152+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-18T18:52:52.152+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T18:52:52.154+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T18:52:52.154+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T18:52:52.154+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T18:52:52.158+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\\Users\\Ascidiacea\\Downloads\\ollama-windows-amd64\\ollama.exe runner --ollama-engine --model C:\\Users\\Ascidiacea\\.ollama\\models\\blobs\\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --threads 10 --no-mmap --parallel 4 --port 64179"
time=2025-03-18T18:52:52.161+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-18T18:52:52.161+01:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-03-18T18:52:52.162+01:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-03-18T18:52:52.183+01:00 level=INFO source=runner.go:763 msg="starting ollama engine"
time=2025-03-18T18:52:52.184+01:00 level=INFO source=runner.go:823 msg="Server listening on 127.0.0.1:64179"
time=2025-03-18T18:52:52.265+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-18T18:52:52.265+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-18T18:52:52.265+01:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=35
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\Ascidiacea\Downloads\ollama-windows-amd64\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\Ascidiacea\Downloads\ollama-windows-amd64\lib\ollama\ggml-cpu-haswell.dll
time=2025-03-18T18:52:52.368+01:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-03-18T18:52:52.399+01:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5178834 model=C:\Users\Ascidiacea\.ollama\models\blobs\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada
time=2025-03-18T18:52:52.413+01:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
time=2025-03-18T18:52:52.455+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CUDA0 size="3.1 GiB"
time=2025-03-18T18:52:52.455+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="525.0 MiB"
time=2025-03-18T18:52:54.001+01:00 level=INFO source=ggml.go:358 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
time=2025-03-18T18:52:54.001+01:00 level=INFO source=ggml.go:358 msg="compute graph" backend=CPU buffer_type=CUDA_Host
time=2025-03-18T18:52:54.008+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T18:52:54.009+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T18:52:54.012+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T18:52:54.014+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-18T18:52:54.014+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T18:52:54.014+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T18:52:54.014+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T18:52:54.014+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T18:52:54.167+01:00 level=INFO source=server.go:619 msg="llama runner started in 2.01 seconds"
llama_model_loader: loaded meta data with 34 key-value pairs and 883 tensors from C:\Users\Ascidiacea\.ollama\models\blobs\sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                gemma3.attention.head_count u32              = 8
llama_model_loader: - kv   1:             gemma3.attention.head_count_kv u32              = 4
llama_model_loader: - kv   2:                gemma3.attention.key_length u32              = 256
llama_model_loader: - kv   3:            gemma3.attention.sliding_window u32              = 1024
llama_model_loader: - kv   4:              gemma3.attention.value_length u32              = 256
llama_model_loader: - kv   5:                         gemma3.block_count u32              = 34
llama_model_loader: - kv   6:                      gemma3.context_length u32              = 8192
llama_model_loader: - kv   7:                    gemma3.embedding_length u32              = 2560
llama_model_loader: - kv   8:                 gemma3.feed_forward_length u32              = 10240
llama_model_loader: - kv   9:         gemma3.vision.attention.head_count u32              = 16
llama_model_loader: - kv  10: gemma3.vision.attention.layer_norm_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                  gemma3.vision.block_count u32              = 27
llama_model_loader: - kv  12:             gemma3.vision.embedding_length u32              = 1152
llama_model_loader: - kv  13:          gemma3.vision.feed_forward_length u32              = 4304
llama_model_loader: - kv  14:                   gemma3.vision.image_size u32              = 896
llama_model_loader: - kv  15:                 gemma3.vision.num_channels u32              = 3
llama_model_loader: - kv  16:                   gemma3.vision.patch_size u32              = 14
llama_model_loader: - kv  17:                       general.architecture str              = gemma3
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  21:           tokenizer.ggml.add_unknown_token bool             = false
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,514906]  = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ...
llama_model_loader: - kv  25:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  28:                      tokenizer.ggml.scores arr[f32,262145]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,262145]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.tokens arr[str,262145]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  31:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  32:               general.quantization_version u32              = 2
llama_model_loader: - kv  33:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  479 tensors
llama_model_loader: - type  f16:  165 tensors
llama_model_loader: - type q4_K:  205 tensors
llama_model_loader: - type q6_K:   34 tensors
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 7
load: token to piece cache size = 1.9446 MB

------------------------
MACOS 13.5

ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed
SIGABRT: abort
PC=0x198784764 m=70 sigcode=0
signal arrived during cgo execution

goroutine 12 gp=0x14000103340 m=70 mp=0x1400301c008 [syscall]:
runtime.cgocall(0x103056a84, 0x1400342fad8)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/cgocall.go:167 +0x44 fp=0x1400342fa90 sp=0x1400342fa50 pc=0x1023ad4f4
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x131821400, 0x41b9822a0)
        _cgo_gotypes.go:483 +0x34 fp=0x1400342fad0 sp=0x1400342fa90 pc=0x10273dc64
github.com/ollama/ollama/ml/backend/ggml.Context.Compute.func1(...)
        /Users/runner/work/ollama/ollama/ml/backend/ggml/ggml.go:497
github.com/ollama/ollama/ml/backend/ggml.Context.Compute({0x14003044280, 0x1575deb60, 0x41b9822a0, 0x0, 0x2000}, {0x1400388bee0, 0x1, 0x41b9822a0?})
        /Users/runner/work/ollama/ollama/ml/backend/ggml/ggml.go:497 +0x9c fp=0x1400342fb60 sp=0x1400342fad0 pc=0x102743d0c
github.com/ollama/ollama/ml/backend/ggml.(*Context).Compute(0x14003339f20?, {0x1400388bee0?, 0x200?, 0x0?})
        <autogenerated>:1 +0x70 fp=0x1400342fbe0 sp=0x1400342fb60 pc=0x102748870
github.com/ollama/ollama/model.Forward({0x1034ca120, 0x14003339f20}, {0x1034c1770, 0x140002e61c0}, {{0x1400333c000, 0x200, 0x200}, {0x0, 0x0, 0x0}, ...})
        /Users/runner/work/ollama/ollama/model/model.go:305 +0x194 fp=0x1400342fcd0 sp=0x1400342fbe0 pc=0x10276cd34
github.com/ollama/ollama/runner/ollamarunner.(*Server).processBatch(0x14000159440)
        /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:395 +0x344 fp=0x1400342ff80 sp=0x1400342fcd0 pc=0x1027c2854
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0x14000159440, {0x1034c2aa0, 0x14000139680})
        /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:321 +0x54 fp=0x1400342ffa0 sp=0x1400342ff80 pc=0x1027c24d4
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap2()
        /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:860 +0x30 fp=0x1400342ffd0 sp=0x1400342ffa0 pc=0x1027c5c30
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400342ffd0 sp=0x1400342ffd0 pc=0x1023b8604
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:860 +0x8cc

goroutine 1 gp=0x140000021c0 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400016d5d0 sp=0x1400016d5b0 pc=0x1023b08a8
runtime.netpollblock(0x1400016d668?, 0x24345e0?, 0x1?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go:575 +0x158 fp=0x1400016d610 sp=0x1400016d5d0 pc=0x102376138
internal/poll.runtime_pollWait(0x12abdbe90, 0x72)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go:351 +0xa0 fp=0x1400016d640 sp=0x1400016d610 pc=0x1023afa60
internal/poll.(*pollDesc).wait(0x14000626980?, 0x10235850c?, 0x0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x1400016d670 sp=0x1400016d640 pc=0x10242fdf8
internal/poll.(*pollDesc).waitRead(...)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x14000626980)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_unix.go:620 +0x24c fp=0x1400016d720 sp=0x1400016d670 pc=0x1024346cc
net.(*netFD).accept(0x14000626980)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/fd_unix.go:172 +0x28 fp=0x1400016d7e0 sp=0x1400016d720 pc=0x1024a4388
net.(*TCPListener).accept(0x14000137e40)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/tcpsock_posix.go:159 +0x24 fp=0x1400016d830 sp=0x1400016d7e0 pc=0x1024b85e4
net.(*TCPListener).Accept(0x14000137e40)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/tcpsock.go:380 +0x2c fp=0x1400016d870 sp=0x1400016d830 pc=0x1024b75cc
net/http.(*onceCloseListener).Accept(0x140000eddd0?)
        <autogenerated>:1 +0x30 fp=0x1400016d890 sp=0x1400016d870 pc=0x1026927b0
net/http.(*Server).Serve(0x1400050ef00, {0x1034c07d8, 0x14000137e40})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3424 +0x290 fp=0x1400016d9c0 sp=0x1400016d890 pc=0x10266bef0
github.com/ollama/ollama/runner/ollamarunner.Execute({0x14000000270, 0xe, 0xf})
        /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:884 +0xbac fp=0x1400016dce0 sp=0x1400016d9c0 pc=0x1027c59cc
github.com/ollama/ollama/runner.Execute({0x14000000250?, 0x0?, 0x0?})
        /Users/runner/work/ollama/ollama/runner/runner.go:20 +0x120 fp=0x1400016dd10 sp=0x1400016dce0 pc=0x1027c6470
github.com/ollama/ollama/cmd.NewCLI.func2(0x1400050ed00?, {0x10306ed68?, 0x4?, 0x10306ed6c?})
        /Users/runner/work/ollama/ollama/cmd/cmd.go:1327 +0x54 fp=0x1400016dd40 sp=0x1400016dd10 pc=0x102e1fa34
github.com/spf13/cobra.(*Command).execute(0x140004def08, {0x14000154690, 0xf, 0xf})
        /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x648 fp=0x1400016de60 sp=0x1400016dd40 pc=0x102512928
github.com/spf13/cobra.(*Command).ExecuteC(0x140004aef08)
        /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x320 fp=0x1400016df20 sp=0x1400016de60 pc=0x102513070
github.com/spf13/cobra.(*Command).Execute(...)
        /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
        /Users/runner/work/ollama/ollama/main.go:12 +0x54 fp=0x1400016df40 sp=0x1400016df20 pc=0x102e1fd84
runtime.main()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:283 +0x284 fp=0x1400016dfd0 sp=0x1400016df40 pc=0x10237cc14
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400016dfd0 sp=0x1400016dfd0 pc=0x1023b8604

goroutine 2 gp=0x14000002c40 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006cf90 sp=0x1400006cf70 pc=0x1023b08a8
runtime.goparkunlock(...)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:441
runtime.forcegchelper()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:348 +0xb8 fp=0x1400006cfd0 sp=0x1400006cf90 pc=0x10237cf68
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006cfd0 sp=0x1400006cfd0 pc=0x1023b8604
created by runtime.init.7 in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:336 +0x24

goroutine 3 gp=0x14000003180 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006d760 sp=0x1400006d740 pc=0x1023b08a8
runtime.goparkunlock(...)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:441
runtime.bgsweep(0x14000098000)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgcsweep.go:316 +0x108 fp=0x1400006d7b0 sp=0x1400006d760 pc=0x1023680d8
runtime.gcenable.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:204 +0x28 fp=0x1400006d7d0 sp=0x1400006d7b0 pc=0x10235bed8
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006d7d0 sp=0x1400006d7d0 pc=0x1023b8604
created by runtime.gcenable in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:204 +0x6c

goroutine 4 gp=0x14000003340 m=nil [GC scavenge wait]:
runtime.gopark(0xe1a8e1?, 0xdf5ef6?, 0x0?, 0x0?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006df60 sp=0x1400006df40 pc=0x1023b08a8
runtime.goparkunlock(...)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:441
runtime.(*scavengerState).park(0x103d44680)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgcscavenge.go:425 +0x5c fp=0x1400006df90 sp=0x1400006df60 pc=0x102365b6c
runtime.bgscavenge(0x14000098000)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgcscavenge.go:658 +0xac fp=0x1400006dfb0 sp=0x1400006df90 pc=0x10236610c
runtime.gcenable.gowrap2()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:205 +0x28 fp=0x1400006dfd0 sp=0x1400006dfb0 pc=0x10235be78
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006dfd0 sp=0x1400006dfd0 pc=0x1023b8604
created by runtime.gcenable in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:205 +0xac

goroutine 5 gp=0x14000003c00 m=nil [finalizer wait]:
runtime.gopark(0x0?, 0x1034ae418?, 0x10?, 0x20?, 0x1000000010?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006c590 sp=0x1400006c570 pc=0x1023b08a8
runtime.runfinq()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mfinal.go:196 +0x108 fp=0x1400006c7d0 sp=0x1400006c590 pc=0x10235aed8
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006c7d0 sp=0x1400006c7d0 pc=0x1023b8604
created by runtime.createfing in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mfinal.go:166 +0x80

goroutine 6 gp=0x140001dc700 m=nil [chan receive]:
runtime.gopark(0x140002294a0?, 0x1400332a018?, 0x48?, 0xe7?, 0x102478658?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006e6f0 sp=0x1400006e6d0 pc=0x1023b08a8
runtime.chanrecv(0x140000a6310, 0x0, 0x1)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/chan.go:664 +0x42c fp=0x1400006e770 sp=0x1400006e6f0 pc=0x10234d98c
runtime.chanrecv1(0x0?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/chan.go:506 +0x14 fp=0x1400006e7a0 sp=0x1400006e770 pc=0x10234d524
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1799 +0x3c fp=0x1400006e7d0 sp=0x1400006e7a0 pc=0x10235f0fc
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006e7d0 sp=0x1400006e7d0 pc=0x1023b8604
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1794 +0x78

goroutine 7 gp=0x140001dca80 m=nil [GC worker (idle)]:
runtime.gopark(0x103db4e40?, 0x3?, 0x12?, 0xe6?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006ef10 sp=0x1400006eef0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400006efb0 sp=0x1400006ef10 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400006efd0 sp=0x1400006efb0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006efd0 sp=0x1400006efd0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 18 gp=0x14000102380 m=nil [GC worker (idle)]:
runtime.gopark(0x103db4e40?, 0x1?, 0xfc?, 0x3b?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000068710 sp=0x140000686f0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x140000687b0 sp=0x14000068710 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x140000687d0 sp=0x140000687b0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x140000687d0 sp=0x140000687d0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 34 gp=0x14000504000 m=nil [GC worker (idle)]:
runtime.gopark(0x19c2dcf5ad9ff?, 0x3?, 0x10?, 0x27?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400050a710 sp=0x1400050a6f0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400050a7b0 sp=0x1400050a710 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400050a7d0 sp=0x1400050a7b0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400050a7d0 sp=0x1400050a7d0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 8 gp=0x140001dcc40 m=nil [GC worker (idle)]:
runtime.gopark(0x103db4e40?, 0x3?, 0x40?, 0x97?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006f710 sp=0x1400006f6f0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400006f7b0 sp=0x1400006f710 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400006f7d0 sp=0x1400006f7b0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006f7d0 sp=0x1400006f7d0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 19 gp=0x14000102540 m=nil [GC worker (idle)]:
runtime.gopark(0x19c2dcf530eb8?, 0x1?, 0x63?, 0xf4?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000068f10 sp=0x14000068ef0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x14000068fb0 sp=0x14000068f10 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x14000068fd0 sp=0x14000068fb0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x14000068fd0 sp=0x14000068fd0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 35 gp=0x140005041c0 m=nil [GC worker (idle)]:
runtime.gopark(0x19c2dcf5be952?, 0x3?, 0x1?, 0xd1?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400050af10 sp=0x1400050aef0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400050afb0 sp=0x1400050af10 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400050afd0 sp=0x1400050afb0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400050afd0 sp=0x1400050afd0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 9 gp=0x140001dce00 m=nil [GC worker (idle)]:
runtime.gopark(0x19c2dcf596ee1?, 0x1?, 0x10?, 0x3d?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006ff10 sp=0x1400006fef0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400006ffb0 sp=0x1400006ff10 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400006ffd0 sp=0x1400006ffb0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006ffd0 sp=0x1400006ffd0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 20 gp=0x14000102700 m=nil [GC worker (idle)]:
runtime.gopark(0x103db4e40?, 0x1?, 0xd?, 0x73?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000080f10 sp=0x14000080ef0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x14000080fb0 sp=0x14000080f10 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x14000080fd0 sp=0x14000080fb0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x14000080fd0 sp=0x14000080fd0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 10 gp=0x140001dcfc0 m=nil [GC worker (idle)]:
runtime.gopark(0x19c2dcf59a926?, 0x1?, 0x26?, 0x46?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000694f10 sp=0x14000694ef0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x14000694fb0 sp=0x14000694f10 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x14000694fd0 sp=0x14000694fb0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x14000694fd0 sp=0x14000694fd0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 36 gp=0x14000504380 m=nil [GC worker (idle)]:
runtime.gopark(0x19c2dcf532310?, 0x3?, 0x28?, 0x9f?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400050b710 sp=0x1400050b6f0 pc=0x1023b08a8
runtime.gcBgMarkWorker(0x140000a78f0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400050b7b0 sp=0x1400050b710 pc=0x10235e36c
runtime.gcBgMarkStartWorkers.gowrap1()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400050b7d0 sp=0x1400050b7b0 pc=0x10235e258
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400050b7d0 sp=0x1400050b7d0 pc=0x1023b8604
created by runtime.gcBgMarkStartWorkers in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140

goroutine 946 gp=0x1400373d340 m=nil [select]:
runtime.gopark(0x1400016fa50?, 0x2?, 0x28?, 0xf7?, 0x1400016f7ec?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400016f600 sp=0x1400016f5e0 pc=0x1023b08a8
runtime.selectgo(0x1400016fa50, 0x1400016f7e8, 0x244?, 0x0, 0x1?, 0x1)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/select.go:351 +0x6c4 fp=0x1400016f730 sp=0x1400016f600 pc=0x102390284
github.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0x14000159440, {0x1034c09b8, 0x140031920e0}, 0x1400013a140)
        /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:649 +0x914 fp=0x1400016faa0 sp=0x1400016f730 pc=0x1027c4314
github.com/ollama/ollama/runner/ollamarunner.(*Server).completion-fm({0x1034c09b8?, 0x140031920e0?}, 0x1400016fb28?)
        <autogenerated>:1 +0x40 fp=0x1400016fad0 sp=0x1400016faa0 pc=0x1027c5f90
net/http.HandlerFunc.ServeHTTP(0x14000164b40?, {0x1034c09b8?, 0x140031920e0?}, 0x1400016fb10?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:2294 +0x38 fp=0x1400016fb00 sp=0x1400016fad0 pc=0x102668918
net/http.(*ServeMux).ServeHTTP(0x10?, {0x1034c09b8, 0x140031920e0}, 0x1400013a140)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:2822 +0x1b4 fp=0x1400016fb50 sp=0x1400016fb00 pc=0x10266a4a4
net/http.serverHandler.ServeHTTP({0x1034bd070?}, {0x1034c09b8?, 0x140031920e0?}, 0x1?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3301 +0xbc fp=0x1400016fb80 sp=0x1400016fb50 pc=0x10268618c
net/http.(*conn).serve(0x140000eddd0, {0x1034c2a68, 0x14000612600})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:2102 +0x52c fp=0x1400016ffa0 sp=0x1400016fb80 pc=0x1026670bc
net/http.(*Server).Serve.gowrap3()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3454 +0x30 fp=0x1400016ffd0 sp=0x1400016ffa0 pc=0x10266c280
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400016ffd0 sp=0x1400016ffd0 pc=0x1023b8604
created by net/http.(*Server).Serve in goroutine 1
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3454 +0x3d8

goroutine 1035 gp=0x14024763880 m=nil [IO wait]:
runtime.gopark(0xffffffffffffffff?, 0xffffffffffffffff?, 0x23?, 0x0?, 0x1023d4200?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1402476ad80 sp=0x1402476ad60 pc=0x1023b08a8
runtime.netpollblock(0x0?, 0x0?, 0x0?)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go:575 +0x158 fp=0x1402476adc0 sp=0x1402476ad80 pc=0x102376138
internal/poll.runtime_pollWait(0x12abdbd78, 0x72)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go:351 +0xa0 fp=0x1402476adf0 sp=0x1402476adc0 pc=0x1023afa60
internal/poll.(*pollDesc).wait(0x14000626080?, 0x14003408131?, 0x0)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x1402476ae20 sp=0x1402476adf0 pc=0x10242fdf8
internal/poll.(*pollDesc).waitRead(...)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x14000626080, {0x14003408131, 0x1, 0x1})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_unix.go:165 +0x1fc fp=0x1402476aec0 sp=0x1402476ae20 pc=0x1024310ac
net.(*netFD).Read(0x14000626080, {0x14003408131?, 0x1402476af58?, 0x102661b34?})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/fd_posix.go:55 +0x28 fp=0x1402476af10 sp=0x1402476aec0 pc=0x1024a2958
net.(*conn).Read(0x14000132008, {0x14003408131?, 0x0?, 0x0?})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/net.go:194 +0x34 fp=0x1402476af60 sp=0x1402476af10 pc=0x1024af824
net/http.(*connReader).backgroundRead(0x14003408120)
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:690 +0x40 fp=0x1402476afb0 sp=0x1402476af60 pc=0x102661a30
net/http.(*connReader).startBackgroundRead.gowrap2()
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:686 +0x28 fp=0x1402476afd0 sp=0x1402476afb0 pc=0x102661918
runtime.goexit({})
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1402476afd0 sp=0x1402476afd0 pc=0x1023b8604
created by net/http.(*connReader).startBackgroundRead in goroutine 946
        /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:686 +0xc4

r0      0x0
r1      0x0
r2      0x0
r3      0x0
r4      0x103234e1b
r5      0x377792d40
r6      0x64656c6961662029
r7      0x1318215a8
r8      0x616db9f7d34d95ff
r9      0x616db9f4a434a5ff
r10     0x2
r11     0xfffffffd
r12     0x10000000000
r13     0x0
r14     0x0
r15     0x0
r16     0x148
r17     0x1f83633a0
r18     0x0
r19     0x6
r20     0x377793000
r21     0x7003
r22     0x3777930e0
r23     0x606b8
r24     0x160008000
r25     0x103db3408
r26     0x1034adac0
r27     0x818
r28     0x14003011180
r29     0x377792ca0
lr      0x1987bbc28
sp      0x377792c80
pc      0x198784764
fault   0x198784764
[GIN] 2025/03/18 - 19:05:48 | 500 |    119.9725ms |       127.0.0.1 | POST     "/api/chat"
time=2025-03-18T19:05:48.818+01:00 level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 2"
time=2025-03-18T19:05:48.911+01:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/leo/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada gpu=0 parallel=4 available=22906503168 required="6.3 GiB"
time=2025-03-18T19:05:48.911+01:00 level=INFO source=server.go:105 msg="system memory" total="32.0 GiB" free="14.4 GiB" free_swap="0 B"
time=2025-03-18T19:05:48.912+01:00 level=INFO source=server.go:138 msg=offload library=metal layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[21.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.3 GiB" memory.required.partial="6.3 GiB" memory.required.kv="1.1 GiB" memory.required.allocations="[6.3 GiB]" memory.weights.total="1.8 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="517.0 MiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-03-18T19:05:48.974+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T19:05:48.975+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T19:05:48.977+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T19:05:48.983+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/leo/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --threads 8 --parallel 4 --port 63790"
time=2025-03-18T19:05:48.984+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-18T19:05:48.984+01:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-18T19:05:48.984+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-18T19:05:49.001+01:00 level=INFO source=runner.go:823 msg="starting ollama engine"
time=2025-03-18T19:05:49.001+01:00 level=INFO source=runner.go:883 msg="Server listening on 127.0.0.1:63790"
time=2025-03-18T19:05:49.060+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-18T19:05:49.060+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-18T19:05:49.060+01:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=35
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-icelake.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-haswell.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-alderlake.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-sandybridge.so
ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-skylakex.so
time=2025-03-18T19:05:49.062+01:00 level=INFO source=ggml.go:109 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-03-18T19:05:49.169+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=Metal size="3.1 GiB"
time=2025-03-18T19:05:49.169+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="525.0 MiB"
time=2025-03-18T19:05:49.269+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Max
ggml_metal_init: picking default device: Apple M1 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M1 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 22906.50 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
time=2025-03-18T19:05:49.624+01:00 level=INFO source=ggml.go:356 msg="compute graph" backend=Metal buffer_type=Metal
time=2025-03-18T19:05:49.624+01:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CPU buffer_type=CPU
time=2025-03-18T19:05:49.625+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T19:05:49.626+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-18T19:05:49.628+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-18T19:05:49.630+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-18T19:05:49.772+01:00 level=INFO source=server.go:624 msg="llama runner started in 0.79 seconds"
llama_model_loader: loaded meta data with 34 key-value pairs and 883 tensors from /Users/leo/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                gemma3.attention.head_count u32              = 8
llama_model_loader: - kv   1:             gemma3.attention.head_count_kv u32              = 4
llama_model_loader: - kv   2:                gemma3.attention.key_length u32              = 256
llama_model_loader: - kv   3:            gemma3.attention.sliding_window u32              = 1024
llama_model_loader: - kv   4:              gemma3.attention.value_length u32              = 256
llama_model_loader: - kv   5:                         gemma3.block_count u32              = 34
llama_model_loader: - kv   6:                      gemma3.context_length u32              = 8192
llama_model_loader: - kv   7:                    gemma3.embedding_length u32              = 2560
llama_model_loader: - kv   8:                 gemma3.feed_forward_length u32              = 10240
llama_model_loader: - kv   9:         gemma3.vision.attention.head_count u32              = 16
llama_model_loader: - kv  10: gemma3.vision.attention.layer_norm_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                  gemma3.vision.block_count u32              = 27
llama_model_loader: - kv  12:             gemma3.vision.embedding_length u32              = 1152
llama_model_loader: - kv  13:          gemma3.vision.feed_forward_length u32              = 4304
llama_model_loader: - kv  14:                   gemma3.vision.image_size u32              = 896
llama_model_loader: - kv  15:                 gemma3.vision.num_channels u32              = 3
llama_model_loader: - kv  16:                   gemma3.vision.patch_size u32              = 14
llama_model_loader: - kv  17:                       general.architecture str              = gemma3
llama_model_loader: - kv  18:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  20:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  21:           tokenizer.ggml.add_unknown_token bool             = false
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,514906]  = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ...
llama_model_loader: - kv  25:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  27:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  28:                      tokenizer.ggml.scores arr[f32,262145]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,262145]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  30:                      tokenizer.ggml.tokens arr[str,262145]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  31:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  32:               general.quantization_version u32              = 2
llama_model_loader: - kv  33:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  479 tensors
llama_model_loader: - type  f16:  165 tensors
llama_model_loader: - type q4_K:  205 tensors
llama_model_loader: - type q6_K:   34 tensors
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 7
load: token to piece cache size = 1.9446 MB

OS

No response

GPU

No response

CPU

No response

Ollama version

0.6.1

/Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400016d5d0 sp=0x1400016d5b0 pc=0x1023b08a8 runtime.netpollblock(0x1400016d668?, 0x24345e0?, 0x1?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go:575 +0x158 fp=0x1400016d610 sp=0x1400016d5d0 pc=0x102376138 internal/poll.runtime_pollWait(0x12abdbe90, 0x72) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go :351 +0xa0 fp=0x1400016d640 sp=0x1400016d610 pc=0x1023afa60 internal/poll.(*pollDesc).wait(0x14000626980?, 0x10235850c?, 0x0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x1400016d670 sp=0x1400016d640 pc=0x10242fdf8 internal/poll.(*pollDesc).waitRead(...) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Accept(0x14000626980) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_unix.go:620 +0x24c fp=0x1400016d720 sp=0x1400016d670 pc=0x1024346cc net.(*netFD).accept(0x14000626980) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/fd_unix.go:172 +0x28 fp=0x1400016d7e0 sp=0x1400016d720 pc=0x1024a4388 net.(*TCPListener).accept(0x14000137e40) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/tcpsock_posix.go:159 +0x24 fp=0x1400016d830 sp=0x1400016d7e0 pc=0x1024b85e4 net.(*TCPListener).Accept(0x14000137e40) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/tcpsock.go:380 +0x2c fp=0x1400016d870 sp=0x1400016d830 pc=0x1024b75cc net/http.(*onceCloseListener).Accept(0x140000eddd0?) <autogenerated>:1 +0x30 fp=0x1400016d890 sp=0x1400016d870 pc=0x1026927b0 net/http.(*Server).Serve(0x1400050ef00, {0x1034c07d8, 0x14000137e40}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3424 +0x290 fp=0x1400016d9c0 sp=0x1400016d890 pc=0x10266bef0 github.com/ollama/ollama/runner/ollamarunner.Execute({0x14000000270, 0xe, 0xf}) /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:884 +0xbac fp=0x1400016dce0 sp=0x1400016d9c0 pc=0x1027c59cc github.com/ollama/ollama/runner.Execute({0x14000000250?, 0x0?, 0x0?}) /Users/runner/work/ollama/ollama/runner/runner.go:20 +0x120 fp=0x1400016dd10 sp=0x1400016dce0 pc=0x1027c6470 github.com/ollama/ollama/cmd.NewCLI.func2(0x1400050ed00?, {0x10306ed68?, 0x4?, 0x10306ed6c?}) /Users/runner/work/ollama/ollama/cmd/cmd.go:1327 +0x54 fp=0x1400016dd40 sp=0x1400016dd10 pc=0x102e1fa34 github.com/spf13/cobra.(*Command).execute(0x140004def08, {0x14000154690, 0xf, 0xf}) /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x648 fp=0x1400016de60 sp=0x1400016dd40 pc=0x102512928 github.com/spf13/cobra.(*Command).ExecuteC(0x140004aef08) /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x320 fp=0x1400016df20 sp=0x1400016de60 pc=0x102513070 github.com/spf13/cobra.(*Command).Execute(...) /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992 github.com/spf13/cobra.(*Command).ExecuteContext(...) /Users/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985 main.main() /Users/runner/work/ollama/ollama/main.go:12 +0x54 fp=0x1400016df40 sp=0x1400016df20 pc=0x102e1fd84 runtime.main() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:283 +0x284 fp=0x1400016dfd0 sp=0x1400016df40 pc=0x10237cc14 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400016dfd0 sp=0x1400016dfd0 pc=0x1023b8604 goroutine 2 gp=0x14000002c40 m=nil [force gc (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
/Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006cf90 sp=0x1400006cf70 pc=0x1023b08a8 runtime.goparkunlock(...) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:441 runtime.forcegchelper() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:348 +0xb8 fp=0x1400006cfd0 sp=0x1400006cf90 pc=0x10237cf68 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006cfd0 sp=0x1400006cfd0 pc=0x1023b8604 created by runtime.init.7 in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:336 +0x24 goroutine 3 gp=0x14000003180 m=nil [GC sweep wait]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006d760 sp=0x1400006d740 pc=0x1023b08a8 runtime.goparkunlock(...) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:441 runtime.bgsweep(0x14000098000) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgcsweep.go:316 +0x108 fp=0x1400006d7b0 sp=0x1400006d760 pc=0x1023680d8 runtime.gcenable.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:204 +0x28 fp=0x1400006d7d0 sp=0x1400006d7b0 pc=0x10235bed8 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006d7d0 sp=0x1400006d7d0 pc=0x1023b8604 created by runtime.gcenable in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:204 +0x6c goroutine 4 gp=0x14000003340 m=nil [GC scavenge wait]: runtime.gopark(0xe1a8e1?, 0xdf5ef6?, 0x0?, 0x0?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006df60 sp=0x1400006df40 pc=0x1023b08a8 runtime.goparkunlock(...) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:441 runtime.(*scavengerState).park(0x103d44680) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgcscavenge.go:425 +0x5c fp=0x1400006df90 sp=0x1400006df60 pc=0x102365b6c runtime.bgscavenge(0x14000098000) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgcscavenge.go:658 +0xac fp=0x1400006dfb0 sp=0x1400006df90 pc=0x10236610c runtime.gcenable.gowrap2() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:205 +0x28 fp=0x1400006dfd0 sp=0x1400006dfb0 pc=0x10235be78 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006dfd0 sp=0x1400006dfd0 pc=0x1023b8604 created by runtime.gcenable in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:205 +0xac goroutine 5 gp=0x14000003c00 m=nil [finalizer wait]: runtime.gopark(0x0?, 0x1034ae418?, 0x10?, 0x20?, 0x1000000010?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006c590 sp=0x1400006c570 pc=0x1023b08a8 runtime.runfinq() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mfinal.go:196 +0x108 fp=0x1400006c7d0 sp=0x1400006c590 pc=0x10235aed8 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006c7d0 sp=0x1400006c7d0 pc=0x1023b8604 created by runtime.createfing in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mfinal.go:166 +0x80 goroutine 6 gp=0x140001dc700 m=nil [chan receive]: runtime.gopark(0x140002294a0?, 0x1400332a018?, 0x48?, 0xe7?, 0x102478658?) 
/Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006e6f0 sp=0x1400006e6d0 pc=0x1023b08a8 runtime.chanrecv(0x140000a6310, 0x0, 0x1) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/chan.go:664 +0x42c fp=0x1400006e770 sp=0x1400006e6f0 pc=0x10234d98c runtime.chanrecv1(0x0?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/chan.go:506 +0x14 fp=0x1400006e7a0 sp=0x1400006e770 pc=0x10234d524 runtime.unique_runtime_registerUniqueMapCleanup.func2(...) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1796 runtime.unique_runtime_registerUniqueMapCleanup.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1799 +0x3c fp=0x1400006e7d0 sp=0x1400006e7a0 pc=0x10235f0fc runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006e7d0 sp=0x1400006e7d0 pc=0x1023b8604 created by unique.runtime_registerUniqueMapCleanup in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1794 +0x78 goroutine 7 gp=0x140001dca80 m=nil [GC worker (idle)]: runtime.gopark(0x103db4e40?, 0x3?, 0x12?, 0xe6?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006ef10 sp=0x1400006eef0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400006efb0 sp=0x1400006ef10 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400006efd0 sp=0x1400006efb0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006efd0 sp=0x1400006efd0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 18 gp=0x14000102380 m=nil [GC worker (idle)]: runtime.gopark(0x103db4e40?, 0x1?, 0xfc?, 0x3b?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000068710 sp=0x140000686f0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x140000687b0 sp=0x14000068710 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x140000687d0 sp=0x140000687b0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x140000687d0 sp=0x140000687d0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 34 gp=0x14000504000 m=nil [GC worker (idle)]: runtime.gopark(0x19c2dcf5ad9ff?, 0x3?, 0x10?, 0x27?, 0x0?) 
/Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400050a710 sp=0x1400050a6f0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400050a7b0 sp=0x1400050a710 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400050a7d0 sp=0x1400050a7b0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400050a7d0 sp=0x1400050a7d0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 8 gp=0x140001dcc40 m=nil [GC worker (idle)]: runtime.gopark(0x103db4e40?, 0x3?, 0x40?, 0x97?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006f710 sp=0x1400006f6f0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400006f7b0 sp=0x1400006f710 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400006f7d0 sp=0x1400006f7b0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006f7d0 sp=0x1400006f7d0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 19 gp=0x14000102540 m=nil [GC worker (idle)]: runtime.gopark(0x19c2dcf530eb8?, 0x1?, 0x63?, 0xf4?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000068f10 sp=0x14000068ef0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x14000068fb0 sp=0x14000068f10 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x14000068fd0 sp=0x14000068fb0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x14000068fd0 sp=0x14000068fd0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 35 gp=0x140005041c0 m=nil [GC worker (idle)]: runtime.gopark(0x19c2dcf5be952?, 0x3?, 0x1?, 0xd1?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400050af10 sp=0x1400050aef0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400050afb0 sp=0x1400050af10 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400050afd0 sp=0x1400050afb0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400050afd0 sp=0x1400050afd0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 9 gp=0x140001dce00 m=nil [GC worker (idle)]: runtime.gopark(0x19c2dcf596ee1?, 0x1?, 0x10?, 0x3d?, 0x0?) 
/Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400006ff10 sp=0x1400006fef0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400006ffb0 sp=0x1400006ff10 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400006ffd0 sp=0x1400006ffb0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400006ffd0 sp=0x1400006ffd0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 20 gp=0x14000102700 m=nil [GC worker (idle)]: runtime.gopark(0x103db4e40?, 0x1?, 0xd?, 0x73?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000080f10 sp=0x14000080ef0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x14000080fb0 sp=0x14000080f10 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x14000080fd0 sp=0x14000080fb0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x14000080fd0 sp=0x14000080fd0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 10 gp=0x140001dcfc0 m=nil [GC worker (idle)]: runtime.gopark(0x19c2dcf59a926?, 0x1?, 0x26?, 0x46?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x14000694f10 sp=0x14000694ef0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x14000694fb0 sp=0x14000694f10 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x14000694fd0 sp=0x14000694fb0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x14000694fd0 sp=0x14000694fd0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 36 gp=0x14000504380 m=nil [GC worker (idle)]: runtime.gopark(0x19c2dcf532310?, 0x3?, 0x28?, 0x9f?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400050b710 sp=0x1400050b6f0 pc=0x1023b08a8 runtime.gcBgMarkWorker(0x140000a78f0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1423 +0xdc fp=0x1400050b7b0 sp=0x1400050b710 pc=0x10235e36c runtime.gcBgMarkStartWorkers.gowrap1() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x28 fp=0x1400050b7d0 sp=0x1400050b7b0 pc=0x10235e258 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400050b7d0 sp=0x1400050b7d0 pc=0x1023b8604 created by runtime.gcBgMarkStartWorkers in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/mgc.go:1339 +0x140 goroutine 946 gp=0x1400373d340 m=nil [select]: runtime.gopark(0x1400016fa50?, 0x2?, 0x28?, 0xf7?, 0x1400016f7ec?) 
/Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1400016f600 sp=0x1400016f5e0 pc=0x1023b08a8 runtime.selectgo(0x1400016fa50, 0x1400016f7e8, 0x244?, 0x0, 0x1?, 0x1) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/run time/select.go:351 +0x6c4 fp=0x1400016f730 sp=0x1400016f600 pc=0x102390284 github.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0x14000159440, {0x1034c09b8, 0x140031920e0}, 0x1400013a140) /Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:649 +0x914 fp=0x1400016faa0 sp=0x1400016f730 pc=0x1027c4314 github.com/ollama/ollama/runner/ollamarunner.(*Server).completion-fm({0x1034c09b8?, 0x140031920e0?}, 0x1400016fb28?) <autogenerated>:1 +0x40 fp=0x1400016fad0 sp=0x1400016faa0 pc=0x1027c5f90 net/http.HandlerFunc.ServeHTTP(0x14000164b40?, {0x1034c09b8?, 0x140031920e0?}, 0x1400016fb10?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:2294 +0x38 fp=0x1400016fb00 sp=0x1400016fad0 pc=0x102668918 net/http.(*ServeMux).ServeHTTP(0x10?, {0x1034c09b8, 0x140031920e0}, 0x1400013a140) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:2822 +0x1b4 fp=0x1400016fb50 sp=0x1400016fb00 pc=0x10266a4a4 net/http.serverHandler.ServeHTTP({0x1034bd070?}, {0x1034c09b8?, 0x140031920e0?}, 0x1?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3301 +0xbc fp=0x1400016fb80 sp=0x1400016fb50 pc=0x10268618c net/http.(*conn).serve(0x140000eddd0, {0x1034c2a68, 0x14000612600}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:2102 +0x52c fp=0x1400016ffa0 sp=0x1400016fb80 pc=0x1026670bc net/http.(*Server).Serve.gowrap3() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3454 +0x30 fp=0x1400016ffd0 sp=0x1400016ffa0 pc=0x10266c280 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400016ffd0 sp=0x1400016ffd0 pc=0x1023b8604 created by net/http.(*Server).Serve in goroutine 1 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:3454 +0x3d8 goroutine 1035 gp=0x14024763880 m=nil [IO wait]: runtime.gopark(0xffffffffffffffff?, 0xffffffffffffffff?, 0x23?, 0x0?, 0x1023d4200?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/proc.go:435 +0xc8 fp=0x1402476ad80 sp=0x1402476ad60 pc=0x1023b08a8 runtime.netpollblock(0x0?, 0x0?, 0x0?) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go:575 +0x158 fp=0x1402476adc0 sp=0x1402476ad80 pc=0x102376138 internal/poll.runtime_pollWait(0x12abdbd78, 0x72) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/netpoll.go:351 +0xa0 fp=0x1402476adf0 sp=0x1402476adc0 pc=0x1023afa60 internal/poll.(*pollDesc).wait(0x14000626080?, 0x14003408131?, 0x0) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x1402476ae20 sp=0x1402476adf0 pc=0x10242fdf8 internal/poll.(*pollDesc).waitRead(...) 
/Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0x14000626080, {0x14003408131, 0x1, 0x1}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/internal/poll/fd_unix.go:165 +0x1fc fp=0x1402476aec0 sp=0x1402476ae20 pc=0x1024310ac net.(*netFD).Read(0x14000626080, {0x14003408131?, 0x1402476af58?, 0x102661b34?}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/fd_posix.go:55 +0x28 fp=0x1402476af10 sp=0x1402476aec0 pc=0x1024a2958 net.(*conn).Read(0x14000132008, {0x14003408131?, 0x0?, 0x0?}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/net.go:194 +0x34 fp=0x1402476af60 sp=0x1402476af10 pc=0x1024af824 net/http.(*connReader).backgroundRead(0x14003408120) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:690 +0x40 fp=0x1402476afb0 sp=0x1402476af60 pc=0x102661a30 net/http.(*connReader).startBackgroundRead.gowrap2() /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:686 +0x28 fp=0x1402476afd0 sp=0x1402476afb0 pc=0x102661918 runtime.goexit({}) /Users/runner/hostedtoolcache/go/1.24.0/x64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1402476afd0 sp=0x1402476afd0 pc=0x1023b8604 created by net/http.(*connReader).startBackgroundRead in goroutine 946 /Users/runner/hostedtoolcache/go/1.24.0/x64/src/net/http/server.go:686 +0xc4 r0 0x0 r1 0x0 r2 0x0 r3 0x0 r4 0x103234e1b r5 0x377792d40 r6 0x64656c6961662029 r7 0x1318215a8 r8 0x616db9f7d34d95ff r9 0x616db9f4a434a5ff r10 0x2 r11 0xfffffffd r12 0x10000000000 r13 0x0 r14 0x0 r15 0x0 r16 0x148 r17 0x1f83633a0 r18 0x0 r19 0x6 r20 0x377793000 r21 0x7003 r22 0x3777930e0 r23 0x606b8 r24 0x160008000 r25 0x103db3408 r26 0x1034adac0 r27 0x818 r28 0x14003011180 r29 0x377792ca0 lr 0x1987bbc28 sp 0x377792c80 pc 0x198784764 fault 0x198784764 [GIN] 2025/03/18 - 19:05:48 | 500 | 119.9725ms | 127.0.0.1 | POST "/api/chat" time=2025-03-18T19:05:48.818+01:00 level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 2" time=2025-03-18T19:05:48.911+01:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/leo/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada gpu=0 parallel=4 available=22906503168 required="6.3 GiB" time=2025-03-18T19:05:48.911+01:00 level=INFO source=server.go:105 msg="system memory" total="32.0 GiB" free="14.4 GiB" free_swap="0 B" time=2025-03-18T19:05:48.912+01:00 level=INFO source=server.go:138 msg=offload library=metal layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[21.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.3 GiB" memory.required.partial="6.3 GiB" memory.required.kv="1.1 GiB" memory.required.allocations="[6.3 GiB]" memory.weights.total="1.8 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="517.0 MiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB" time=2025-03-18T19:05:48.974+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-03-18T19:05:48.975+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false time=2025-03-18T19:05:48.977+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer 
default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07 time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000 time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1 time=2025-03-18T19:05:48.980+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256 time=2025-03-18T19:05:48.983+01:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/leo/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --threads 8 --parallel 4 --port 63790" time=2025-03-18T19:05:48.984+01:00 level=INFO source=sched.go:450 msg="loaded runners" count=1 time=2025-03-18T19:05:48.984+01:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding" time=2025-03-18T19:05:48.984+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error" time=2025-03-18T19:05:49.001+01:00 level=INFO source=runner.go:823 msg="starting ollama engine" time=2025-03-18T19:05:49.001+01:00 level=INFO source=runner.go:883 msg="Server listening on 127.0.0.1:63790" time=2025-03-18T19:05:49.060+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default="" time=2025-03-18T19:05:49.060+01:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default="" time=2025-03-18T19:05:49.060+01:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=35 ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-icelake.so ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-haswell.so ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-alderlake.so ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-sandybridge.so ggml_backend_load_best: failed to load /Applications/Ollama.app/Contents/Resources/libggml-cpu-skylakex.so time=2025-03-18T19:05:49.062+01:00 level=INFO source=ggml.go:109 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) time=2025-03-18T19:05:49.169+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=Metal size="3.1 GiB" time=2025-03-18T19:05:49.169+01:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="525.0 MiB" time=2025-03-18T19:05:49.269+01:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model" ggml_metal_init: allocating ggml_metal_init: found device: Apple M1 Max ggml_metal_init: picking default device: Apple M1 Max ggml_metal_init: using embedded metal library ggml_metal_init: GPU name: Apple M1 Max ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007) 
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction = true ggml_metal_init: simdgroup matrix mul. = true ggml_metal_init: has residency sets = false ggml_metal_init: has bfloat = true ggml_metal_init: use bfloat = false ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 22906.50 MB ggml_metal_init: skipping kernel_get_rows_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported) ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported) time=2025-03-18T19:05:49.624+01:00 level=INFO source=ggml.go:356 msg="compute graph" backend=Metal buffer_type=Metal time=2025-03-18T19:05:49.624+01:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CPU buffer_type=CPU time=2025-03-18T19:05:49.625+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-03-18T19:05:49.626+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false time=2025-03-18T19:05:49.628+01:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-03-18T19:05:49.630+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07 time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000 time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1 time=2025-03-18T19:05:49.631+01:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256 time=2025-03-18T19:05:49.772+01:00 level=INFO source=server.go:624 msg="llama runner started in 0.79 seconds" llama_model_loader: loaded meta data with 34 key-value pairs and 883 tensors from 
/Users/leo/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: gemma3.attention.head_count u32 = 8 llama_model_loader: - kv 1: gemma3.attention.head_count_kv u32 = 4 llama_model_loader: - kv 2: gemma3.attention.key_length u32 = 256 llama_model_loader: - kv 3: gemma3.attention.sliding_window u32 = 1024 llama_model_loader: - kv 4: gemma3.attention.value_length u32 = 256 llama_model_loader: - kv 5: gemma3.block_count u32 = 34 llama_model_loader: - kv 6: gemma3.context_length u32 = 8192 llama_model_loader: - kv 7: gemma3.embedding_length u32 = 2560 llama_model_loader: - kv 8: gemma3.feed_forward_length u32 = 10240 llama_model_loader: - kv 9: gemma3.vision.attention.head_count u32 = 16 llama_model_loader: - kv 10: gemma3.vision.attention.layer_norm_epsilon f32 = 0.000001 llama_model_loader: - kv 11: gemma3.vision.block_count u32 = 27 llama_model_loader: - kv 12: gemma3.vision.embedding_length u32 = 1152 llama_model_loader: - kv 13: gemma3.vision.feed_forward_length u32 = 4304 llama_model_loader: - kv 14: gemma3.vision.image_size u32 = 896 llama_model_loader: - kv 15: gemma3.vision.num_channels u32 = 3 llama_model_loader: - kv 16: gemma3.vision.patch_size u32 = 14 llama_model_loader: - kv 17: general.architecture str = gemma3 llama_model_loader: - kv 18: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 19: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 20: tokenizer.ggml.add_padding_token bool = false llama_model_loader: - kv 21: tokenizer.ggml.add_unknown_token bool = false llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 2 llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 1 llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,514906] = ["\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n", ... llama_model_loader: - kv 25: tokenizer.ggml.model str = llama llama_model_loader: - kv 26: tokenizer.ggml.padding_token_id u32 = 0 llama_model_loader: - kv 27: tokenizer.ggml.pre str = default llama_model_loader: - kv 28: tokenizer.ggml.scores arr[f32,262145] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,262145] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 30: tokenizer.ggml.tokens arr[str,262145] = ["<pad>", "<eos>", "<bos>", "<unk>", ... llama_model_loader: - kv 31: tokenizer.ggml.unknown_token_id u32 = 3 llama_model_loader: - kv 32: general.quantization_version u32 = 2 llama_model_loader: - kv 33: general.file_type u32 = 15 llama_model_loader: - type f32: 479 tensors llama_model_loader: - type f16: 165 tensors llama_model_loader: - type q4_K: 205 tensors llama_model_loader: - type q6_K: 34 tensors load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 7 load: token to piece cache size = 1.9446 MB ``` ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version 0.6.1
GiteaMirror added the bug label 2026-04-29 01:31:11 -05:00
Author
Owner

@jmorganca commented on GitHub (Mar 18, 2025):

Hi @leokeba, may I ask what requests you are making that are causing the failure? This will help us reproduce it on our side.

<!-- gh-comment-id:2734398584 -->
Author
Owner

@leokeba commented on GitHub (Mar 18, 2025):

Hi @jmorganca, thanks for answering. It is a bit hard to find a simple way to reproduce because I am using structured output with a custom pydantic model.

Now that I think of it, this may actually be relevant to the bug.

I will try to reproduce the issue using a single self-contained Python example and post it here when I am done.

<!-- gh-comment-id:2734421440 -->
Author
Owner

@leokeba commented on GitHub (Mar 18, 2025):

Here's a way to trigger the bug:

```python
from pydantic import BaseModel
from enum import StrEnum
from typing import List
import ollama

class TweetSubject(StrEnum):
    REGLES_DE_CRISE_ADAPTATIONS = "Règles de crise / adaptations"
    MEDECINE_PROTECTION_SANITAIRE = "Médecine / protection sanitaire"
    VACCIN = "Vaccin"
    PENURIE_GESTION_MATERIEL = "Pénurie / gestion matériel"
    PASS_SANITAIRE = "Pass sanitaire"
    AUTRES = "Autres"
    NSP = "NSP"

subjects_description = {
    TweetSubject.REGLES_DE_CRISE_ADAPTATIONS: "règles édictées pour répondre à la situation de pandémie de covid19 et façons de s’y adapter. Aspect légal, autorisations, interdictions, etc.",
    TweetSubject.MEDECINE_PROTECTION_SANITAIRE: "sujets médicaux, liés au covid19 ou à d’autres maladies, aux risques qu’elles entraînent et aux façons de s’en protéger.",
    TweetSubject.VACCIN: "vaccin contre le covid19. Ne se combine pas avec un autre sujet sauf si plusieurs questions bien distinctes.",
    TweetSubject.PENURIE_GESTION_MATERIEL: "gestion par l’état du matériel nécessaire pour lutter contre le covid19, y compris les moyens humains des hôpitaux, le prix des moyens de protection et leur accessibilité.",
    TweetSubject.PASS_SANITAIRE: "pass sanitaire, passeport vaccinal ou toute différenciation de droit entre personnes vaccinées et personnes non-vaccinées, ou toute autorisation ou interdiction liée au résultat d’un test. Si le sujet « pass sanitaire » est présent dans le message, pas de combinaison avec un autre. Si les sujets « pass sanitaire » et « vaccin » sont présents, indiquer « pass sanitaire », sauf si plusieurs questions bien distinctes.",
    TweetSubject.AUTRES: "sujet qui ne rentre dans aucune des autres catégories.",
    TweetSubject.NSP: "s’il n’y a pas d’élément susceptible d’éclairer sur le sujet du tweet"
}

class TweetEmotion(StrEnum):
    MECONTENTEMENT_COLERE = "Mécontentement / colère"
    GRATITUDE_VALIDATION = "Gratitude / validation"
    NEUTRE = "Neutre"
    AUTRE = "Autre"
    NSP = "NSP"

emotions_description = {
    TweetEmotion.MECONTENTEMENT_COLERE: "expression d’une émotion pouvant aller d’un léger mécontentement à la colère la plus violente.",
    TweetEmotion.GRATITUDE_VALIDATION: "expression d’une gratitude ou validation de propos ou d’actions.",
    TweetEmotion.NEUTRE: "pas d’émotion particulière détectable.",
    TweetEmotion.AUTRE: "expression d’une émotion qui ne rentre dans aucune des autres catégories.",
    TweetEmotion.NSP: "s’il n’y a pas d’élément susceptible d’éclairer sur l’émotion du tweet."
}

class TweetAnalysis(BaseModel):
    subject: List[TweetSubject]
    emotion: List[TweetEmotion]

subject_list = [subject.value for subject in TweetSubject]
emotion_list = [emotion.value for emotion in TweetEmotion]

def build_analysis_prompt(tweet):
    subject_list_string = '''
    - '''.join(f"'{subject}' : {subjects_description[subject]}" for subject in subject_list)
    emotion_list_string = '''
    - '''.join(f"'{emotion}' : {emotions_description[emotion]}" for emotion in emotion_list)

    prompt = f"""Analyze the following tweet and return the subjects and emotions that best characterizes its contents as a json object.
                Do not return more than 2 subjects and 2 emotions.
    
                Tweet: {tweet}

                Subjects: 
                - {subject_list_string}

                Emotions: 
                - {emotion_list_string}"""
    return prompt

def get_ollama_structured_response(prompt: str, model: BaseModel):
        """
        Sends a prompt to the Ollama model and returns the response content.
        """
        response = ollama.chat(
            model='gemma3:latest', 
            messages=[
                {
                    'role': 'user',
                    'content': prompt,
                },
            ],
            format=model.model_json_schema(),
        )
        return model.model_validate_json(response['message']['content'])

tweet1 = '''Il n'y a pas que la Chine et l'Italie dont s'inspirer         
Taiwan semble exemplaire dans son traitement du #Coronavirus  #Onvousrépond 
https://t.co/kr8LE9C0vg'''

tweet2 = '@la_muse88 Ça dépendra des résultats du 1er tour...si LR est en tête pas de confinement avant le 2e tour, sinon....#OnVousRepond #France2'

tweet3 = '''1- 50% des malades en réanimation on moins de 65ans
2- Est-ce que le fait d'être fumeur est un facteur à risque ?
3-Pourquoi les bureaux de tabac restent ouverts (non alimentaire)? 
 #michelcimes #France2 #OnVousRepond #COVIDー19'''

response = get_ollama_structured_response(build_analysis_prompt(tweet2), TweetAnalysis)
print(response)

response = get_ollama_structured_response(build_analysis_prompt(tweet1), TweetAnalysis)
print(response)

response = get_ollama_structured_response(build_analysis_prompt(tweet3), TweetAnalysis)
print(response)
```

This does trigger the bug every time on both my systems. It seems quite sensitive to the contents and order of the requests.
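
For anyone who wants to rule out the Python client itself, here is a minimal sketch of the same kind of structured-output request sent straight to the HTTP API. It assumes the default endpoint http://localhost:11434 and that `gemma3:latest` is already pulled; the schema and prompt below are simplified stand-ins for the pydantic model above, not the exact payload that crashes.

```python
# Hypothetical minimal sketch (simplified stand-in, not the exact failing payload):
# the JSON schema goes into the "format" field of /api/chat, which is the same
# field the ollama Python client fills in from format=model.model_json_schema().
import json

import requests

schema = {
    "type": "object",
    "properties": {
        "subject": {"type": "array", "items": {"type": "string"}},
        "emotion": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["subject", "emotion"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma3:latest",
        "messages": [{"role": "user", "content": "Analyze the following tweet ..."}],
        "format": schema,  # JSON schema for structured output
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
# The reply content is a JSON string constrained to the schema above.
print(json.loads(resp.json()["message"]["content"]))
```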

<!-- gh-comment-id:2734697982 -->
Author
Owner

@fspv commented on GitHub (Apr 20, 2025):

~~I'm still getting this with ollama 0.6.5 and 0.6.6. https://github.com/ollama/ollama/pull/9875 seems to be included in both~~

```
 2025/04/20 18:32:11 routes.go:1232: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_O
N: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:
://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLL
ELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:f
E:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.
0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri:/
//* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
 time=2025-04-20T18:32:11.437Z level=INFO source=images.go:458 msg="total blobs: 7"
 time=2025-04-20T18:32:11.437Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
 time=2025-04-20T18:32:11.437Z level=INFO source=routes.go:1299 msg="Listening on [::]:11434 (version 0.6.6)"
 time=2025-04-20T18:32:11.437Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
 time=2025-04-20T18:32:11.440Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
 time=2025-04-20T18:32:11.440Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0
62.6 GiB" available="39.3 GiB"
 time=2025-04-20T18:32:12.297Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
 time=2025-04-20T18:32:12.333Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
 time=2025-04-20T18:32:12.375Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
 time=2025-04-20T18:32:12.376Z level=INFO source=server.go:105 msg="system memory" total="62.6 GiB" free="39.3 GiB" free_swap="19.3 G

 time=2025-04-20T18:32:12.376Z level=INFO source=server.go:138 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.off
t="" memory.available="[39.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="0 B" memory.requ
" memory.required.allocations="[5.3 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeati
ory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
 time=2025-04-20T18:32:12.450Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
 time=2025-04-20T18:32:12.454Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
 time=2025-04-20T18:32:12.459Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=
-07
 time=2025-04-20T18:32:12.459Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
 time=2025-04-20T18:32:12.459Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
 time=2025-04-20T18:32:12.459Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
 time=2025-04-20T18:32:12.459Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
 time=2025-04-20T18:32:12.460Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engin
llama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --threads
allel 4 --port 42165"
 time=2025-04-20T18:32:12.460Z level=INFO source=sched.go:451 msg="loaded runners" count=1
 time=2025-04-20T18:32:12.460Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
 time=2025-04-20T18:32:12.460Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
 time=2025-04-20T18:32:12.468Z level=INFO source=runner.go:866 msg="starting ollama engine"
 time=2025-04-20T18:32:12.468Z level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:42165"
 time=2025-04-20T18:32:12.533Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
 time=2025-04-20T18:32:12.534Z level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
 time=2025-04-20T18:32:12.534Z level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
 time=2025-04-20T18:32:12.534Z level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_te
values=36
 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
 time=2025-04-20T18:32:12.537Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0
=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
 time=2025-04-20T18:32:12.539Z level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="3.6 GiB"
 time=2025-04-20T18:32:12.712Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loadin

 time=2025-04-20T18:32:13.419Z level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
 time=2025-04-20T18:32:13.426Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=
-07
 time=2025-04-20T18:32:13.426Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
 time=2025-04-20T18:32:13.426Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
 time=2025-04-20T18:32:13.426Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
 time=2025-04-20T18:32:13.426Z level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
 time=2025-04-20T18:32:13.649Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="137.0 MiB"
 time=2025-04-20T18:32:13.717Z level=INFO source=server.go:619 msg="llama runner started in 1.26 seconds"
 time=2025-04-20T18:32:13.754Z level=WARN source=runner.go:154 msg="truncating input prompt" limit=2048 prompt=3685 keep=4 new=2048
 ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed
 /usr/bin/ollama(+0x11021a8)[0x5efd83fa61a8]
 /usr/bin/ollama(+0x1102526)[0x5efd83fa6526]
 /usr/bin/ollama(+0x10ef8f5)[0x5efd83f938f5]
 /usr/bin/ollama(+0x10f101b)[0x5efd83f9501b]
 /usr/bin/ollama(+0x1116005)[0x5efd83fba005]
 /usr/bin/ollama(+0x111645b)[0x5efd83fba45b]
 /usr/bin/ollama(+0x117071b)[0x5efd8401471b]
 /usr/bin/ollama(+0x334801)[0x5efd831d8801]
 SIGABRT: abort
 PC=0x7b923470800b m=15 sigcode=18446744073709551610
 signal arrived during cgo execution

 goroutine 55 gp=0xc000602e00 m=15 mp=0xc000101808 [syscall]:
 runtime.cgocall(0x5efd84014700, 0xc000617af8)
      runtime/cgocall.go:167 +0x4b fp=0xc000617ad0 sp=0xc000617a98 pc=0x5efd831ce14b
 github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x7b91d0000e80, 0x7b9194002fa0)
      _cgo_gotypes.go:516 +0x4a fp=0xc000617af8 sp=0xc000617ad0 pc=0x5efd835cb6aa
 github.com/ollama/ollama/ml/backend/ggml.(*Context).Compute.func1(...)
      github.com/ollama/ollama/ml/backend/ggml/ggml.go:529
 github.com/ollama/ollama/ml/backend/ggml.(*Context).Compute(0xc003a279b0, {0xc002f7be70, 0x1, 0x0?})
      github.com/ollama/ollama/ml/backend/ggml/ggml.go:529 +0x96 fp=0xc000617b88 sp=0xc000617af8 pc=0x5efd835d4956
 github.com/ollama/ollama/model.Forward({0x5efd844d40b0, 0xc003a279b0}, {0x5efd844caa90, 0xc00310cae0}, {0xc005df1000, 0x200, 0x200},
 0xc003111698}, {0x0, ...}, ...})
      github.com/ollama/ollama/model/model.go:313 +0x2b8 fp=0xc000617c70 sp=0xc000617b88 pc=0x5efd836027d8
 github.com/ollama/ollama/runner/ollamarunner.(*Server).processBatch(0xc000129d40)
      github.com/ollama/ollama/runner/ollamarunner/runner.go:478 +0x476 fp=0xc000617f98 sp=0xc000617c70 pc=0x5efd83684ab6
 github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc000129d40, {0x5efd844cbdf0, 0xc0006aaaf0})
      github.com/ollama/ollama/runner/ollamarunner/runner.go:364 +0x4e fp=0xc000617fb8 sp=0xc000617f98 pc=0x5efd836845ee
 github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap2()
      github.com/ollama/ollama/runner/ollamarunner/runner.go:906 +0x28 fp=0xc000617fe0 sp=0xc000617fb8 pc=0x5efd836890e8
 runtime.goexit({})
      runtime/asm_amd64.s:1700 +0x1 fp=0xc000617fe8 sp=0xc000617fe0 pc=0x5efd831d8b81
 created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
      github.com/ollama/ollama/runner/ollamarunner/runner.go:906 +0xb37

(traceback is truncated, but I don't think it is relevant here)

Should I open a separate issue? The symptoms are exactly the same: gemma3 and random crashes.

Update: ignore this, it is okay now. Maybe I somehow had an old instance still running.


@robertmx commented on GitHub (Apr 25, 2025):

Still getting this with 0.6.6.
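
For anyone still hitting this: in the log above the assertion fires immediately after the runner truncates the prompt (`limit=2048 prompt=3685`), so a request whose prompt exceeds the default 2048-token context seems to be the easiest trigger. Below is a minimal reproduction sketch, not taken from the report: the endpoint and model name match the issue, but the prompt text, its length, and the host/port are assumptions.

```go
// Reproduction sketch (assumptions: Ollama listening on localhost:11434,
// gemma3:latest already pulled, default 2048-token context). Sends a prompt
// long enough to hit the "truncating input prompt" path seen in the log;
// the filler text is made up.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	payload, err := json.Marshal(map[string]any{
		"model": "gemma3:latest",
		"messages": []map[string]string{
			// Repeat a short phrase until the prompt is far past 2048 tokens.
			{"role": "user", "content": "Summarise: " + strings.Repeat("example tweet text ", 3000)},
		},
		"stream": false,
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	// A crash in the runner shows up here as a 500 response from the server.
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```

This only exercises the same truncation path seen in the log; it may or may not trip the assertion on any given run.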
