[GH-ISSUE #11783] Can't Import Finetuned GPT-OSS #54325

Open
opened 2026-04-29 05:45:05 -05:00 by GiteaMirror · 10 comments

Originally created by @chigkim on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11783

What is the issue?

When I try to import huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated, it just exits after "copying file" and "converting model".

ollama show gpt-oss --modelfile > gpt-oss.modelfile
ollama create gpt-oss-abliterated -f gpt-oss.modelfile

I pointed the Modelfile at the local directory with "FROM ./".
I can see ~/.ollama/models/ollama-safetensors###/fp, which was about 15.5GB, so it looks like it did copy the files and try to convert.
However, ollama list doesn't show the model, and there are no sha files in blobs.
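
For reference, when ollama create exits silently like this, the server log usually records the reason; on macOS it is typically found under ~/.ollama/logs (per the Ollama troubleshooting docs):

tail -n 100 ~/.ollama/logs/server.log   # last lines usually show the conversion error or an OOM kill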

Relevant log output

gathering model components
copying file
...
converting model

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.11.3

GiteaMirror added the bug label 2026-04-29 05:45:05 -05:00

@meow18838 commented on GitHub (Aug 8, 2025):

Same with my own finetune; please, someone fix this.


@albertjimenez commented on GitHub (Aug 13, 2025):

Same issue here: a finetuned gpt-oss 20b converted to GGUF with Unsloth fails to run in Ollama, but the model installed through Ollama itself works.


@jiachenguoNU commented on GitHub (Aug 16, 2025):

Same issue here. Cannot load the model.


@rick-github commented on GitHub (Sep 23, 2025):

If the ollama server is exiting while converting a model, it may be getting killed by an OOM condition. During the conversion process ollama starts multiple goroutines to do the conversion, and if the model is big, that can have a large memory footprint. This can be alleviated by setting GOMAXPROCS=1 in the server environment, as sketched below.

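For reference, a rough sketch of how GOMAXPROCS=1 can be set, depending on how the server is launched (service name and commands follow the standard Ollama install docs; adjust to your setup):

GOMAXPROCS=1 ollama serve             # when running the server in the foreground
launchctl setenv GOMAXPROCS 1         # macOS app install; restart Ollama afterwards
sudo systemctl edit ollama.service    # Linux systemd install: add Environment="GOMAXPROCS=1"
sudo systemctl restart ollama         #   under [Service], then restart the service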

@artemavrin commented on GitHub (Dec 8, 2025):

From the docs:

Ollama supports importing models for several different architectures including:
Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2);
Mistral (including Mistral 1, Mistral 2, and Mixtral);
Gemma (including Gemma 1 and Gemma 2); and
Phi3
This includes importing foundation models as well as any fine tuned models which have been fused with a foundation model.

So there's no gpt-oss support, I guess?
I created the model with GOMAXPROCS=1, but ollama run modelname crashes with a 500 server error.


@rick-github commented on GitHub (Dec 8, 2025):

The documentation is talking about importing from safetensors format. If you have a GGUF, that step is not required.

Crashing during model load is different from crashing during model conversion. The server log will have details about the crash.

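A minimal sketch of the direct GGUF path described above (the filename is hypothetical):

echo 'FROM ./my-finetuned-gpt-oss.gguf' > Modelfile   # point FROM at the GGUF file itself
ollama create my-gpt-oss -f Modelfile                 # no safetensors conversion step involved
ollama run my-gpt-oss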

@artemavrin commented on GitHub (Dec 9, 2025):

  1. mlx_lm.fuse --model openai/gpt-oss-20b --dequantize
  2. ollama create kamin-gpt:20b

Modelfile:

FROM ./fused_model

SYSTEM "You are super agent"

PARAMETER temperature 0.8

  3. ollama run kamin-gpt:20b
Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:54882/load": EOF

mlx_lm.generate works perfectly.
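
Given the "unsupported architecture" line in the create log below, it may be worth checking what architecture the fused checkpoint declares (the path is the fused_model directory from step 1; the expected value is an assumption):

python -c "import json; print(json.load(open('fused_model/config.json'))['architectures'])"
# gpt-oss checkpoints typically declare GptOssForCausalLM, which this converter appears to reject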

Server logs:
[GIN] 2025/12/07 - 01:30:09 | 200 |   38.535667ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2025/12/07 - 01:32:29 | 200 |      30.709µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 01:36:28 | 200 |      28.458µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 01:38:07 | 200 |      45.125µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 01:38:49 | 200 |      35.875µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 01:44:06 | 200 |      55.541µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/12/07 - 01:50:31 | 200 |      42.375µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 01:50:31 | 200 |    2.587334ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/12/07 - 01:52:35 | 200 |      44.042µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 16:34:34 | 200 |      27.417µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/07 - 16:34:34 | 200 |     100.209µs |       127.0.0.1 | POST     "/api/blobs/sha256:0c8171cc2d0e5302c70eec5798fbc4c378abee0691605795528b0ba33d671846"
[GIN] 2025/12/07 - 16:34:34 | 200 |      99.167µs |       127.0.0.1 | POST     "/api/blobs/sha256:685ec5b007e559eebfe38e5c9b349f04694cda92a35b4e9a662678e384a93d26"
time=2025-12-07T16:34:35.025+03:00 level=ERROR source=create.go:305 msg="error converting from safetensors" error="unsupported architecture"
[GIN] 2025/12/07 - 16:34:35 | 200 |    39.79675ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2025/12/08 - 08:42:03 | 200 |      46.667µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/08 - 08:42:03 | 200 |    5.701166ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/12/08 - 08:43:11 | 200 |      50.958µs |       127.0.0.1 | HEAD     "/"
time=2025-12-08T08:43:12.778+03:00 level=INFO source=download.go:177 msg="downloading 6e416d39200a in 21 1 GB part(s)"
time=2025-12-08T08:49:57.448+03:00 level=INFO source=download.go:177 msg="downloading 7339fa418c9a in 1 11 KB part(s)"
time=2025-12-08T08:49:58.854+03:00 level=INFO source=download.go:177 msg="downloading f6417cb1e269 in 1 42 B part(s)"
time=2025-12-08T08:50:00.250+03:00 level=INFO source=download.go:177 msg="downloading 50fcece1bf41 in 1 552 B part(s)"
[GIN] 2025/12/08 - 08:50:10 | 200 |         6m58s |       127.0.0.1 | POST     "/api/pull"
time=2025-12-08T08:56:01.925+03:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-08T08:56:01.925+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-6e416d39200aae1cec3ea197c5a5ebbaf214ccddc9561bcc0ec7157c83b2a99b --port 51043"
time=2025-12-08T08:56:01.927+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="232.1 GiB" free_swap="0 B"
time=2025-12-08T08:56:01.927+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="207.5 GiB" free="208.0 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T08:56:01.927+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=65 requested=-1
time=2025-12-08T08:56:01.948+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T08:56:01.948+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51043"
time=2025-12-08T08:56:01.951+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:65[ID:0 Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T08:56:01.966+03:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=1166 num_key_values=40
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.023 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T08:56:01.968+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T08:56:02.980+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:65[ID:0 Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:65[ID:0 Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=ggml.go:482 msg="offloading 64 repeating layers to GPU"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=ggml.go:494 msg="offloaded 65/65 layers to GPU"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="19.1 GiB"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="64.0 GiB"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="4.2 GiB"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="79.1 MiB"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=device.go:272 msg="total memory" size="87.7 GiB"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-08T08:56:07.876+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T08:56:07.876+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T08:56:09.387+03:00 level=INFO source=server.go:1332 msg="llama runner started in 7.46 seconds"
[GIN] 2025/12/08 - 08:57:49 | 200 |         1m47s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 08:59:11 | 200 |         1m21s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:00:21 | 200 |          1m9s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:01:20 | 200 | 58.960074542s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:01:55 | 200 |      16.125µs |       127.0.0.1 | HEAD     "/"
time=2025-12-08T09:01:57.059+03:00 level=INFO source=download.go:177 msg="downloading ed12a4674d72 in 16 383 MB part(s)"
[GIN] 2025/12/08 - 09:02:16 | 200 | 56.355158667s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:03:23 | 401 |  511.143375ms | 192.168.180.215 | POST     "/api/chat"
time=2025-12-08T09:03:57.882+03:00 level=INFO source=download.go:177 msg="downloading 17e666fbe4f4 in 1 551 B part(s)"
[GIN] 2025/12/08 - 09:04:01 | 200 |          2m5s |       127.0.0.1 | POST     "/api/pull"
time=2025-12-08T09:04:23.692+03:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=0 library=Metal total="208.0 GiB" available="120.7 GiB"
time=2025-12-08T09:04:23.724+03:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-08T09:04:23.725+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 --port 51076"
time=2025-12-08T09:04:23.727+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="163.2 GiB" free_swap="0 B"
time=2025-12-08T09:04:23.727+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="120.2 GiB" free="120.7 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T09:04:23.727+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
time=2025-12-08T09:04:23.745+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T09:04:23.745+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51076"
time=2025-12-08T09:04:23.751+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:04:23.766+03:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=40
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.023 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T09:04:23.767+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T09:04:24.850+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="5.4 GiB"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="333.8 MiB"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="36.0 GiB"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="4.2 GiB"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="63.3 MiB"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=device.go:272 msg="total memory" size="46.0 GiB"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=2
time=2025-12-08T09:04:28.241+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:04:28.241+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T09:04:28.746+03:00 level=INFO source=server.go:1332 msg="llama runner started in 5.02 seconds"
[GIN] 2025/12/08 - 09:06:45 | 200 |         2m22s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:08:38 | 200 |          1m5s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:13:00 | 200 |      29.458µs |       127.0.0.1 | HEAD     "/"
time=2025-12-08T09:13:01.391+03:00 level=INFO source=download.go:177 msg="downloading c8f369ebea62 in 16 835 MB part(s)"
time=2025-12-08T09:17:26.017+03:00 level=INFO source=download.go:177 msg="downloading 01f91cf3c09b in 1 488 B part(s)"
[GIN] 2025/12/08 - 09:17:32 | 200 |         4m32s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/12/08 - 09:18:56 | 400 |  126.224958ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:18:57 | 400 |  109.088542ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:19:04 | 400 |  109.675167ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:19:29 | 200 |      45.417µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/08 - 09:19:29 | 200 |   92.047667ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/08 - 09:19:29 | 200 |     881.292µs |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/12/08 - 09:19:29 | 200 |  162.662042ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2025/12/08 - 09:19:37 | 200 |      62.458µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/08 - 09:19:37 | 200 |    7.101208ms |       127.0.0.1 | GET      "/api/tags"
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.006 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) (unknown id) - 212991 MiB free
llama_model_loader: loaded meta data with 34 key-value pairs and 363 tensors from /Users/kamin/.ollama/models/blobs/sha256-641615e9986bc8687f936cd87c586bdd92d338172c4180963080e48b8e84ec36 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                           general.basename str              = Magistral
llama_model_loader: - kv   2:                          general.file_type u32              = 15
llama_model_loader: - kv   3:                            general.license str              = apache-2.0
llama_model_loader: - kv   4:                               general.name str              = Magistral Small 2506
llama_model_loader: - kv   5:                    general.parameter_count u64              = 23572403200
llama_model_loader: - kv   6:               general.quantization_version u32              = 2
llama_model_loader: - kv   7:                         general.size_label str              = Small
llama_model_loader: - kv   8:                               general.type str              = model
llama_model_loader: - kv   9:                            general.version str              = 2506
llama_model_loader: - kv  10:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  11:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  13:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  15:                          llama.block_count u32              = 40
llama_model_loader: - kv  16:                       llama.context_length u32              = 40000
llama_model_loader: - kv  17:                     llama.embedding_length u32              = 5120
llama_model_loader: - kv  18:                  llama.feed_forward_length u32              = 32768
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 1000000000.000000
llama_model_loader: - kv  21:                           llama.vocab_size u32              = 131072
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ �...
llama_model_loader: - kv  28:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 11
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  31:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  32:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  33:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - type  f32:   81 tensors
llama_model_loader: - type q4_K:  241 tensors
llama_model_loader: - type q6_K:   41 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 13.34 GiB (4.86 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 2 ('</s>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8498 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 23.57 B
print_info: general.name     = Magistral Small 2506
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 11 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 150
llama_model_load: vocab only - skipping tensors
time=2025-12-08T09:20:07.277+03:00 level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=262144 n_ctx_train=40000
time=2025-12-08T09:20:07.278+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/kamin/.ollama/models/blobs/sha256-641615e9986bc8687f936cd87c586bdd92d338172c4180963080e48b8e84ec36 --port 51115"
time=2025-12-08T09:20:07.285+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="232.3 GiB" free_swap="0 B"
time=2025-12-08T09:20:07.285+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="207.5 GiB" free="208.0 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T09:20:07.285+03:00 level=INFO source=server.go:459 msg="loading model" "model layers"=41 requested=-1
time=2025-12-08T09:20:07.286+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="13.0 GiB"
time=2025-12-08T09:20:07.286+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="6.1 GiB"
time=2025-12-08T09:20:07.286+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="2.6 GiB"
time=2025-12-08T09:20:07.286+03:00 level=INFO source=device.go:272 msg="total memory" size="21.7 GiB"
time=2025-12-08T09:20:07.303+03:00 level=INFO source=runner.go:963 msg="starting go runner"
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.007 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T09:20:07.304+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-12-08T09:20:07.385+03:00 level=INFO source=runner.go:999 msg="Server listening on 127.0.0.1:51115"
time=2025-12-08T09:20:07.393+03:00 level=INFO source=runner.go:893 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:40000 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) (unknown id) - 212991 MiB free
time=2025-12-08T09:20:07.393+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:20:07.393+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 34 key-value pairs and 363 tensors from /Users/kamin/.ollama/models/blobs/sha256-641615e9986bc8687f936cd87c586bdd92d338172c4180963080e48b8e84ec36 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                           general.basename str              = Magistral
llama_model_loader: - kv   2:                          general.file_type u32              = 15
llama_model_loader: - kv   3:                            general.license str              = apache-2.0
llama_model_loader: - kv   4:                               general.name str              = Magistral Small 2506
llama_model_loader: - kv   5:                    general.parameter_count u64              = 23572403200
llama_model_loader: - kv   6:               general.quantization_version u32              = 2
llama_model_loader: - kv   7:                         general.size_label str              = Small
llama_model_loader: - kv   8:                               general.type str              = model
llama_model_loader: - kv   9:                            general.version str              = 2506
llama_model_loader: - kv  10:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  11:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  13:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  15:                          llama.block_count u32              = 40
llama_model_loader: - kv  16:                       llama.context_length u32              = 40000
llama_model_loader: - kv  17:                     llama.embedding_length u32              = 5120
llama_model_loader: - kv  18:                  llama.feed_forward_length u32              = 32768
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       llama.rope.freq_base f32              = 1000000000.000000
llama_model_loader: - kv  21:                           llama.vocab_size u32              = 131072
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ �...
llama_model_loader: - kv  28:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 11
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  31:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  32:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  33:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - type  f32:   81 tensors
llama_model_loader: - type q4_K:  241 tensors
llama_model_loader: - type q6_K:   41 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 13.34 GiB (4.86 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 2 ('</s>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8498 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40000
print_info: n_embd           = 5120
print_info: n_layer          = 40
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 32768
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40000
print_info: rope_finetuned   = unknown
print_info: model type       = 13B
print_info: model params     = 23.57 B
print_info: general.name     = Magistral Small 2506
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 11 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 150
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 40 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 41/41 layers to GPU
load_tensors:   CPU_Mapped model buffer size =   360.00 MiB
load_tensors: Metal_Mapped model buffer size = 13302.36 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 40000
llama_context: n_ctx_per_seq = 40000
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = disabled
llama_context: kv_unified    = false
llama_context: freq_base     = 1000000000.0
llama_context: freq_scale    = 1
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
llama_context:        CPU  output buffer size =     0.52 MiB
llama_kv_cache:      Metal KV buffer size =  6250.00 MiB
llama_kv_cache: size = 6250.00 MiB ( 40000 cells,  40 layers,  1/1 seqs), K (f16): 3125.00 MiB, V (f16): 3125.00 MiB
llama_context:      Metal compute buffer size =  2600.13 MiB
llama_context:        CPU compute buffer size =    92.13 MiB
llama_context: graph nodes  = 1446
llama_context: graph splits = 2
time=2025-12-08T09:20:14.459+03:00 level=INFO source=server.go:1332 msg="llama runner started in 7.17 seconds"
time=2025-12-08T09:20:14.459+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-08T09:20:14.459+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:20:14.459+03:00 level=INFO source=server.go:1332 msg="llama runner started in 7.17 seconds"
time=2025-12-08T09:20:20.844+03:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=0 library=Metal total="208.0 GiB" available="186.3 GiB"
time=2025-12-08T09:20:20.880+03:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-08T09:20:20.880+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 --port 51127"
time=2025-12-08T09:20:20.884+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="210.0 GiB" free_swap="0 B"
time=2025-12-08T09:20:20.884+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="185.8 GiB" free="186.3 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T09:20:20.884+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
time=2025-12-08T09:20:20.903+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T09:20:20.903+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51127"
time=2025-12-08T09:20:20.907+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:20:20.923+03:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=40
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.024 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T09:20:20.924+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T09:20:21.953+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:24 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="5.4 GiB"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="333.8 MiB"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="36.0 GiB"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="4.2 GiB"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="63.3 MiB"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=device.go:272 msg="total memory" size="46.0 GiB"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=2
time=2025-12-08T09:20:25.359+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:20:25.359+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T09:20:25.863+03:00 level=INFO source=server.go:1332 msg="llama runner started in 4.98 seconds"
[GIN] 2025/12/08 - 09:21:08 | 200 |          1m1s | 192.168.180.215 | POST     "/api/chat"
time=2025-12-08T09:21:41.350+03:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=0 library=Metal total="208.0 GiB" available="140.8 GiB"
time=2025-12-08T09:21:41.389+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 --port 51133"
time=2025-12-08T09:21:41.393+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="159.4 GiB" free_swap="0 B"
time=2025-12-08T09:21:41.393+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="140.3 GiB" free="140.8 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T09:21:41.393+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
time=2025-12-08T09:21:41.413+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T09:21:41.413+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51133"
time=2025-12-08T09:21:41.417+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:21:41.436+03:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=45
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 6.145 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T09:21:41.437+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T09:21:48.001+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=ggml.go:482 msg="offloading 40 repeating layers to GPU"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=ggml.go:494 msg="offloaded 41/41 layers to GPU"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="8.1 GiB"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="360.0 MiB"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="40.0 GiB"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="16.5 GiB"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=device.go:272 msg="total memory" size="65.0 GiB"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=3
time=2025-12-08T09:21:51.085+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:21:51.085+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T09:21:52.840+03:00 level=INFO source=server.go:1332 msg="llama runner started in 11.45 seconds"
[GIN] 2025/12/08 - 09:22:02 | 200 | 20.758205875s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:22:18 | 500 | 15.643943833s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:22:35 | 500 | 15.668591041s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:22:46 | 200 |      24.167µs |       127.0.0.1 | HEAD     "/"
time=2025-12-08T09:22:50.312+03:00 level=INFO source=download.go:177 msg="downloading 6150cb382311 in 20 1 GB part(s)"
[GIN] 2025/12/08 - 09:22:55 | 500 |    15.974345s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:23:18 | 200 |         2m57s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:23:19 | 500 | 15.702620083s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:23:47 | 500 | 12.213247459s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:24:31 | 500 | 11.885607333s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:25:47 | 500 | 11.733735583s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:28:06 | 500 | 11.308895458s | 192.168.180.215 | POST     "/api/chat"
time=2025-12-08T09:29:16.950+03:00 level=INFO source=download.go:177 msg="downloading c5ad996bda6e in 1 556 B part(s)"
time=2025-12-08T09:29:18.349+03:00 level=INFO source=download.go:177 msg="downloading 6e4c38e1172f in 1 1.1 KB part(s)"
time=2025-12-08T09:29:19.761+03:00 level=INFO source=download.go:177 msg="downloading f4d24e9138dd in 1 148 B part(s)"
time=2025-12-08T09:29:21.177+03:00 level=INFO source=download.go:177 msg="downloading c7f3ea903b50 in 1 488 B part(s)"
[GIN] 2025/12/08 - 09:29:30 | 200 |         6m44s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/12/08 - 09:31:52 | 400 |   57.368333ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:32:16 | 200 |      48.833µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/08 - 09:32:16 | 200 |   41.397417ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/08 - 09:32:16 | 200 |    1.101292ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/12/08 - 09:32:16 | 200 |     283.127ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2025/12/08 - 09:32:33 | 500 | 11.000564084s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:33:00 | 200 |      62.375µs |       127.0.0.1 | HEAD     "/"
time=2025-12-08T09:33:01.843+03:00 level=INFO source=download.go:177 msg="downloading 6150cb382311 in 20 1 GB part(s)"
time=2025-12-08T09:39:25.559+03:00 level=INFO source=download.go:177 msg="downloading c5ad996bda6e in 1 556 B part(s)"
time=2025-12-08T09:39:26.970+03:00 level=INFO source=download.go:177 msg="downloading 6e4c38e1172f in 1 1.1 KB part(s)"
time=2025-12-08T09:39:28.385+03:00 level=INFO source=download.go:177 msg="downloading f4d24e9138dd in 1 148 B part(s)"
time=2025-12-08T09:39:29.776+03:00 level=INFO source=download.go:177 msg="downloading c7f3ea903b50 in 1 488 B part(s)"
[GIN] 2025/12/08 - 09:39:38 | 200 |         6m38s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/12/08 - 09:40:12 | 400 |   59.758334ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:40:25 | 200 |      44.417µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/08 - 09:40:25 | 200 |   45.258458ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/08 - 09:40:25 | 200 |    1.262875ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/12/08 - 09:40:25 | 200 |  303.550333ms |       127.0.0.1 | DELETE   "/api/delete"
time=2025-12-08T09:40:43.217+03:00 level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=262144 n_ctx_train=131072
time=2025-12-08T09:40:43.218+03:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-08T09:40:43.218+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 51207"
time=2025-12-08T09:40:43.221+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="227.2 GiB" free_swap="0 B"
time=2025-12-08T09:40:43.221+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="207.5 GiB" free="208.0 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T09:40:43.221+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=25 requested=-1
time=2025-12-08T09:40:43.240+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T09:40:43.240+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51207"
time=2025-12-08T09:40:43.244+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:40:43.270+03:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.006 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T09:40:43.271+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T09:40:43.454+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="11.8 GiB"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="3.1 GiB"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="419.1 MiB"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=device.go:272 msg="total memory" size="16.4 GiB"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-08T09:40:43.795+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:40:43.795+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T09:40:46.556+03:00 level=INFO source=server.go:1332 msg="llama runner started in 3.33 seconds"
[GIN] 2025/12/08 - 09:41:02 | 200 | 19.768327833s | 192.168.180.215 | POST     "/api/chat"
time=2025-12-08T09:41:05.797+03:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=0 library=Metal total="208.0 GiB" available="192.7 GiB"
time=2025-12-08T09:41:05.838+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 --port 51211"
time=2025-12-08T09:41:05.842+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="220.6 GiB" free_swap="0 B"
time=2025-12-08T09:41:05.842+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="192.2 GiB" free="192.7 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T09:41:05.842+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
time=2025-12-08T09:41:05.861+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T09:41:05.861+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51211"
time=2025-12-08T09:41:05.865+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:41:05.883+03:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=45
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.007 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T09:41:05.884+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T09:41:06.308+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:41:09.272+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=ggml.go:482 msg="offloading 40 repeating layers to GPU"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=ggml.go:494 msg="offloaded 41/41 layers to GPU"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="8.1 GiB"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="360.0 MiB"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="40.0 GiB"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="16.5 GiB"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:272 msg="total memory" size="65.0 GiB"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=2
time=2025-12-08T09:41:09.273+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:41:09.273+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T09:41:10.028+03:00 level=INFO source=server.go:1332 msg="llama runner started in 4.19 seconds"
[GIN] 2025/12/08 - 09:41:24 | 200 | 20.877683875s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:41:25 | 200 |  1.064411792s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:41:27 | 500 |  22.04396225s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:41:52 | 404 |     4.24125ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:43:49 | 200 | 11.106395333s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:43:57 | 200 |  8.509058333s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:43:58 | 200 |   598.85325ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:44:18 | 200 |  7.509208583s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:44:19 | 200 |  1.490641291s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:44:20 | 200 |  356.163208ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:45:37 | 200 |  7.437303833s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:45:37 | 200 |  349.671875ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:46:58 | 200 |  3.916729625s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:47:02 | 200 |  3.589947917s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:47:03 | 200 |  690.066208ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:49:32 | 200 | 24.090042166s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:49:39 | 200 |  5.869877459s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:49:39 | 200 |  260.484292ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:50:24 | 200 | 31.287326917s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:50:32 | 200 |  7.740722625s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:50:33 | 200 |  694.195833ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:51:45 | 200 | 50.424538791s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:51:53 | 200 |  7.342045584s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:52:02 | 200 |   8.81576975s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:52:03 | 200 |  503.335708ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:52:43 | 200 | 23.419410875s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:52:43 | 200 |  525.769375ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:53:05 | 200 | 14.242236375s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:53:13 | 200 |  7.632865958s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:53:19 | 200 |  4.965888041s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:53:28 | 200 |     9.301018s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:53:29 | 200 |  719.284125ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:54:54 | 200 | 16.094301833s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:55:30 | 200 | 35.942114458s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:55:38 | 200 |  7.370639959s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:55:38 | 200 |  500.174458ms | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:58:03 | 200 |         1m15s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:58:08 | 200 |  4.926181208s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:58:10 | 200 |   1.72818925s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:58:12 | 200 |  1.815152459s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:58:13 | 200 |  1.237368791s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:58:17 | 200 |    3.4756815s | 192.168.180.215 | POST     "/api/chat"
time=2025-12-08T09:58:31.818+03:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=0 library=Metal total="208.0 GiB" available="192.7 GiB"
time=2025-12-08T09:58:31.859+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 --port 51230"
time=2025-12-08T09:58:31.863+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="212.4 GiB" free_swap="0 B"
time=2025-12-08T09:58:31.863+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="192.2 GiB" free="192.7 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T09:58:31.863+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
time=2025-12-08T09:58:31.883+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T09:58:31.883+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51230"
time=2025-12-08T09:58:31.885+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:58:31.904+03:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=45
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.007 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T09:58:31.905+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T09:58:32.339+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:58:35.388+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=ggml.go:482 msg="offloading 40 repeating layers to GPU"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=ggml.go:494 msg="offloaded 41/41 layers to GPU"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="8.1 GiB"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="360.0 MiB"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="40.0 GiB"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="16.5 GiB"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:272 msg="total memory" size="65.0 GiB"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=2
time=2025-12-08T09:58:35.389+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T09:58:35.389+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T09:58:36.144+03:00 level=INFO source=server.go:1332 msg="llama runner started in 4.28 seconds"
[GIN] 2025/12/08 - 09:58:54 | 500 | 23.084038458s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 09:59:27 | 200 |          1m9s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 10:00:01 | 200 |    33.381663s | 192.168.180.215 | POST     "/api/chat"
time=2025-12-08T10:45:14.201+03:00 level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=262144 n_ctx_train=131072
time=2025-12-08T10:45:14.201+03:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-08T10:45:14.202+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 51287"
time=2025-12-08T10:45:14.206+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="229.7 GiB" free_swap="0 B"
time=2025-12-08T10:45:14.206+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="207.5 GiB" free="208.0 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-08T10:45:14.206+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=25 requested=-1
time=2025-12-08T10:45:14.224+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-08T10:45:14.224+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51287"
time=2025-12-08T10:45:14.230+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T10:45:14.256+03:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.007 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-08T10:45:14.257+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-08T10:45:14.457+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T10:45:14.805+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="11.8 GiB"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="3.1 GiB"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="419.1 MiB"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:272 msg="total memory" size="16.4 GiB"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-08T10:45:14.806+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-08T10:45:14.806+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-08T10:45:15.814+03:00 level=INFO source=server.go:1332 msg="llama runner started in 1.61 seconds"
[GIN] 2025/12/08 - 10:45:35 | 200 | 21.529829958s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 10:45:46 | 200 | 10.210298792s | 192.168.180.215 | POST     "/api/chat"
[GIN] 2025/12/08 - 10:45:46 | 200 |  589.462625ms | 192.168.180.215 | POST     "/api/chat"
```

</details>
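
Two details in the log above are worth calling out. First, the scheduler warns `requested context size too large for model` (num_ctx=262144 vs n_ctx_train=131072) and then starts the gpt-oss runner with a 131072-token KV size. Second, the 40.0 GiB KV cache reported for the mistral3 runner follows directly from the requested context: assuming the usual Mistral-family GQA layout (8 KV heads of dimension 128, i.e. 1024 values per token for K and again for V), an f16 cache costs 262144 × 1024 × 2 bytes = 512 MiB per layer for K, 1 GiB per layer for K plus V, and 40 GiB across the 40 repeating layers, exactly the figure logged. (The "total memory" lines are likewise just the sum of the components above them, e.g. 11.8 + 1.1 + 3.1 + 0.4 GiB ≈ 16.4 GiB for the gpt-oss runner.) Capping num_ctx shrinks the KV cost proportionally; a minimal Modelfile sketch, with a placeholder model tag:

```
# Hypothetical tweak, not a fix for the import failure itself:
# cap the context window so the KV cache shrinks proportionally.
# 131072 matches the n_ctx_train reported in the log; at that size
# the same mistral3 runner would need ~20 GiB of KV instead of 40 GiB.
FROM my-finetune:latest
PARAMETER num_ctx 131072
```

This does not address the conversion failure itself, but it keeps several runners resident without the KV cache dominating unified memory.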
memory" id=0 library=Metal available="192.2 GiB" free="192.7 GiB" minimum="512.0 MiB" overhead="0 B" time=2025-12-08T09:41:05.842+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1 time=2025-12-08T09:41:05.861+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-12-08T09:41:05.861+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51211" time=2025-12-08T09:41:05.865+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T09:41:05.883+03:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=45 ggml_metal_library_init: using embedded metal library ggml_metal_library_init: loaded in 0.007 sec ggml_metal_device_init: GPU name: Apple M3 Ultra ggml_metal_device_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_device_init: simdgroup reduction = true ggml_metal_device_init: simdgroup matrix mul. = true ggml_metal_device_init: has unified memory = true ggml_metal_device_init: has bfloat = true ggml_metal_device_init: use residency sets = true ggml_metal_device_init: use shared buffers = true ggml_metal_device_init: recommendedMaxWorkingSetSize = 223338.30 MB time=2025-12-08T09:41:05.884+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) ggml_metal_init: allocating ggml_metal_init: picking default device: Apple M3 Ultra ggml_metal_init: use bfloat = true ggml_metal_init: use fusion = true ggml_metal_init: use concurrency = true ggml_metal_init: use graph optimize = true time=2025-12-08T09:41:06.308+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T09:41:09.272+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T09:41:09.273+03:00 level=INFO source=ggml.go:482 msg="offloading 40 repeating layers to GPU" time=2025-12-08T09:41:09.273+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU" time=2025-12-08T09:41:09.273+03:00 level=INFO source=ggml.go:494 msg="offloaded 41/41 layers to GPU" time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="8.1 GiB" time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="360.0 MiB" time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="40.0 GiB" time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="16.5 GiB" time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB" 
time=2025-12-08T09:41:09.273+03:00 level=INFO source=device.go:272 msg="total memory" size="65.0 GiB" time=2025-12-08T09:41:09.273+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=2 time=2025-12-08T09:41:09.273+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" time=2025-12-08T09:41:09.273+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" time=2025-12-08T09:41:10.028+03:00 level=INFO source=server.go:1332 msg="llama runner started in 4.19 seconds" [GIN] 2025/12/08 - 09:41:24 | 200 | 20.877683875s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:41:25 | 200 | 1.064411792s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:41:27 | 500 | 22.04396225s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:41:52 | 404 | 4.24125ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:43:49 | 200 | 11.106395333s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:43:57 | 200 | 8.509058333s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:43:58 | 200 | 598.85325ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:44:18 | 200 | 7.509208583s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:44:19 | 200 | 1.490641291s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:44:20 | 200 | 356.163208ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:45:37 | 200 | 7.437303833s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:45:37 | 200 | 349.671875ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:46:58 | 200 | 3.916729625s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:47:02 | 200 | 3.589947917s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:47:03 | 200 | 690.066208ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:49:32 | 200 | 24.090042166s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:49:39 | 200 | 5.869877459s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:49:39 | 200 | 260.484292ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:50:24 | 200 | 31.287326917s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:50:32 | 200 | 7.740722625s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:50:33 | 200 | 694.195833ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:51:45 | 200 | 50.424538791s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:51:53 | 200 | 7.342045584s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:52:02 | 200 | 8.81576975s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:52:03 | 200 | 503.335708ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:52:43 | 200 | 23.419410875s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:52:43 | 200 | 525.769375ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:53:05 | 200 | 14.242236375s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:53:13 | 200 | 7.632865958s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:53:19 | 200 | 4.965888041s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:53:28 | 200 | 9.301018s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:53:29 | 200 | 719.284125ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:54:54 | 200 | 16.094301833s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:55:30 | 200 | 35.942114458s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:55:38 | 200 | 7.370639959s | 
192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:55:38 | 200 | 500.174458ms | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:58:03 | 200 | 1m15s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:58:08 | 200 | 4.926181208s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:58:10 | 200 | 1.72818925s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:58:12 | 200 | 1.815152459s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:58:13 | 200 | 1.237368791s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:58:17 | 200 | 3.4756815s | 192.168.180.215 | POST "/api/chat" time=2025-12-08T09:58:31.818+03:00 level=INFO source=sched.go:583 msg="updated VRAM based on existing loaded models" gpu=0 library=Metal total="208.0 GiB" available="192.7 GiB" time=2025-12-08T09:58:31.859+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-9026d5ef829c7a9259de75070282233aa1d96e27b29553a89b35ef34485403f5 --port 51230" time=2025-12-08T09:58:31.863+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="212.4 GiB" free_swap="0 B" time=2025-12-08T09:58:31.863+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="192.2 GiB" free="192.7 GiB" minimum="512.0 MiB" overhead="0 B" time=2025-12-08T09:58:31.863+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1 time=2025-12-08T09:58:31.883+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-12-08T09:58:31.883+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51230" time=2025-12-08T09:58:31.885+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T09:58:31.904+03:00 level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=585 num_key_values=45 ggml_metal_library_init: using embedded metal library ggml_metal_library_init: loaded in 0.007 sec ggml_metal_device_init: GPU name: Apple M3 Ultra ggml_metal_device_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_device_init: simdgroup reduction = true ggml_metal_device_init: simdgroup matrix mul. 
= true ggml_metal_device_init: has unified memory = true ggml_metal_device_init: has bfloat = true ggml_metal_device_init: use residency sets = true ggml_metal_device_init: use shared buffers = true ggml_metal_device_init: recommendedMaxWorkingSetSize = 223338.30 MB time=2025-12-08T09:58:31.905+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) ggml_metal_init: allocating ggml_metal_init: picking default device: Apple M3 Ultra ggml_metal_init: use bfloat = true ggml_metal_init: use fusion = true ggml_metal_init: use concurrency = true ggml_metal_init: use graph optimize = true time=2025-12-08T09:58:32.339+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T09:58:35.388+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:262144 KvCacheType: NumThreads:24 GPULayers:41[ID:0 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T09:58:35.389+03:00 level=INFO source=ggml.go:482 msg="offloading 40 repeating layers to GPU" time=2025-12-08T09:58:35.389+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU" time=2025-12-08T09:58:35.389+03:00 level=INFO source=ggml.go:494 msg="offloaded 41/41 layers to GPU" time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="8.1 GiB" time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="360.0 MiB" time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:251 msg="kv cache" device=Metal size="40.0 GiB" time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="16.5 GiB" time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB" time=2025-12-08T09:58:35.389+03:00 level=INFO source=device.go:272 msg="total memory" size="65.0 GiB" time=2025-12-08T09:58:35.389+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=2 time=2025-12-08T09:58:35.389+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" time=2025-12-08T09:58:35.389+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" time=2025-12-08T09:58:36.144+03:00 level=INFO source=server.go:1332 msg="llama runner started in 4.28 seconds" [GIN] 2025/12/08 - 09:58:54 | 500 | 23.084038458s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 09:59:27 | 200 | 1m9s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 10:00:01 | 200 | 33.381663s | 192.168.180.215 | POST "/api/chat" time=2025-12-08T10:45:14.201+03:00 level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=262144 n_ctx_train=131072 time=2025-12-08T10:45:14.201+03:00 level=INFO source=server.go:209 msg="enabling flash attention" time=2025-12-08T10:45:14.202+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model 
/Users/kamin/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 51287" time=2025-12-08T10:45:14.206+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="229.7 GiB" free_swap="0 B" time=2025-12-08T10:45:14.206+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="207.5 GiB" free="208.0 GiB" minimum="512.0 MiB" overhead="0 B" time=2025-12-08T10:45:14.206+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=25 requested=-1 time=2025-12-08T10:45:14.224+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine" time=2025-12-08T10:45:14.224+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:51287" time=2025-12-08T10:45:14.230+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T10:45:14.256+03:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32 ggml_metal_library_init: using embedded metal library ggml_metal_library_init: loaded in 0.007 sec ggml_metal_device_init: GPU name: Apple M3 Ultra ggml_metal_device_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_device_init: simdgroup reduction = true ggml_metal_device_init: simdgroup matrix mul. = true ggml_metal_device_init: has unified memory = true ggml_metal_device_init: has bfloat = true ggml_metal_device_init: use residency sets = true ggml_metal_device_init: use shared buffers = true ggml_metal_device_init: recommendedMaxWorkingSetSize = 223338.30 MB time=2025-12-08T10:45:14.257+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) ggml_metal_init: allocating ggml_metal_init: picking default device: Apple M3 Ultra ggml_metal_init: use bfloat = true ggml_metal_init: use fusion = true ggml_metal_init: use concurrency = true ggml_metal_init: use graph optimize = true time=2025-12-08T10:45:14.457+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T10:45:14.805+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-12-08T10:45:14.806+03:00 level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU" time=2025-12-08T10:45:14.806+03:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU" time=2025-12-08T10:45:14.806+03:00 level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU" time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:240 msg="model weights" device=Metal size="11.8 GiB" time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" time=2025-12-08T10:45:14.806+03:00 
level=INFO source=device.go:251 msg="kv cache" device=Metal size="3.1 GiB" time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:262 msg="compute graph" device=Metal size="419.1 MiB" time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB" time=2025-12-08T10:45:14.806+03:00 level=INFO source=device.go:272 msg="total memory" size="16.4 GiB" time=2025-12-08T10:45:14.806+03:00 level=INFO source=sched.go:517 msg="loaded runners" count=1 time=2025-12-08T10:45:14.806+03:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" time=2025-12-08T10:45:14.806+03:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" time=2025-12-08T10:45:15.814+03:00 level=INFO source=server.go:1332 msg="llama runner started in 1.61 seconds" [GIN] 2025/12/08 - 10:45:35 | 200 | 21.529829958s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 10:45:46 | 200 | 10.210298792s | 192.168.180.215 | POST "/api/chat" [GIN] 2025/12/08 - 10:45:46 | 200 | 589.462625ms | 192.168.180.215 | POST "/api/chat" ``` </details>
Author
Owner

@artemavrin commented on GitHub (Dec 9, 2025):

Logs from the terminal:

[GIN] 2025/12/09 - 11:24:58 | 200 |      29.667µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/09 - 11:24:58 | 200 |   80.783709ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/09 - 11:24:58 | 200 |   63.701541ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-09T11:24:58.744+03:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-09T11:24:58.744+03:00 level=INFO source=server.go:392 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/kamin/.ollama/models/blobs/sha256-2c988a86b9b1ccfda0c74ec91e2f880ad5c6f5d9da7611f552b65fe945aeb4e4 --port 54921"
time=2025-12-09T11:24:58.746+03:00 level=INFO source=sched.go:443 msg="system memory" total="256.0 GiB" free="219.7 GiB" free_swap="0 B"
time=2025-12-09T11:24:58.746+03:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=Metal available="207.5 GiB" free="208.0 GiB" minimum="512.0 MiB" overhead="0 B"
time=2025-12-09T11:24:58.746+03:00 level=INFO source=server.go:702 msg="loading model" "model layers"=25 requested=-1
time=2025-12-09T11:24:58.766+03:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-09T11:24:58.766+03:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:54921"
time=2025-12-09T11:24:58.776+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T11:24:58.808+03:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.006 sec
ggml_metal_device_init: GPU name:   Apple M3 Ultra
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 223338.30 MB
time=2025-12-09T11:24:58.809+03:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
time=2025-12-09T11:24:58.853+03:00 level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:54923: runtime error: invalid memory address or nil pointer dereference\ngoroutine 9 [running]:\nnet/http.(*conn).serve.func1()\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:1947 +0xb0\npanic({0x101ab2420?, 0x1023dea40?})\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/panic.go:792 +0x124\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1186 +0x124\npanic({0x101ab2420?, 0x1023dea40?})\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/panic.go:792 +0x124\ngithub.com/ollama/ollama/ml/nn.(*LinearBatch).Forward(0x0, {0x101c28d70, 0x140019f2d80}, {0x101c33300?, 0x140019fc558?}, {0x101c33300, 0x140019fc498})\n\t/Users/runner/work/ollama/ollama/ml/nn/linear.go:25 +0x34\ngithub.com/ollama/ollama/model/models/gptoss.(*MLPBlock).Forward(0x140019a4630, {0x101c28d70, 0x140019f2d80}, {0x101c33300, 0x140019fc408}, 0x140003f6070)\n\t/Users/runner/work/ollama/ollama/model/models/gptoss/model.go:180 +0x404\ngithub.com/ollama/ollama/model/models/gptoss.(*TransformerBlock).Forward(0x14000045328, {0x101c28d70, 0x140019f2d80}, {0x101c33300?, 0x140019fc060?}, {0x101c33300?, 0x140019fc078?}, {0x0, 0x0}, {0x101c25ca0?, ...}, ...)\n\t/Users/runner/work/ollama/ollama/model/models/gptoss/model.go:98 +0xa4\ngithub.com/ollama/ollama/model/models/gptoss.(*Transformer).Forward(0x140003f6000, {0x101c28d70, 0x140019f2d80}, {{0x101c33300, 0x140019fcc60}, {0x101c33300, 0x140019fcc78}, {0x140004cd000, 0x200, 0x200}, ...})\n\t/Users/runner/work/ollama/ollama/model/models/gptoss/model.go:47 +0x13c\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0x140001910e0, 0x1)\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1156 +0x754\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0x140001910e0, {0x16f88f8da?, 0x0?}, {0x0, 0x18, {0x1400045fcc0, 0x1, 0x1}, 0x1}, {0x0?, ...}, ...)\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1219 +0x230\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).load(0x140001910e0, {0x101c1be40, 0x1400044f5e0}, 0x14000487cc0)\n\t/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1298 +0x460\nnet/http.HandlerFunc.ServeHTTP(0x140003f75c0?, {0x101c1be40?, 0x1400044f5e0?}, 0x140004c9b10?)\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:2294 +0x38\nnet/http.(*ServeMux).ServeHTTP(0x10?, {0x101c1be40, 0x1400044f5e0}, 0x14000487cc0)\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:2822 +0x1b4\nnet/http.serverHandler.ServeHTTP({0x101c18430?}, {0x101c1be40?, 0x1400044f5e0?}, 0x1?)\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:3301 +0xbc\nnet/http.(*conn).serve(0x140001f8000, {0x101c1e248, 0x140004a2f90})\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:2102 +0x52c\ncreated by net/http.(*Server).Serve in goroutine 1\n\t/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:3454 +0x3d8"
time=2025-12-09T11:24:58.856+03:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T11:24:58.856+03:00 level=INFO source=sched.go:470 msg="Load failed" model=/Users/kamin/.ollama/models/blobs/sha256-2c988a86b9b1ccfda0c74ec91e2f880ad5c6f5d9da7611f552b65fe945aeb4e4 error="do load request: Post \"http://127.0.0.1:54921/load\": EOF"
time=2025-12-09T11:24:58.858+03:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="signal: killed"
[GIN] 2025/12/09 - 11:24:58 | 500 |    319.4165ms |       127.0.0.1 | POST     "/api/generate"
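
The key frame in the trace above is `ml/nn.(*LinearBatch).Forward(0x0, ...)`: the receiver is nil, meaning the gptoss `MLPBlock`'s router layer was never populated at load time, which is consistent with the converted GGUF missing a tensor the architecture expects. A minimal sketch of that failure mode, with illustrative names rather than Ollama's actual code:

```go
// Minimal sketch (illustrative names, not Ollama's actual code) of the
// failure mode in the trace above: a pointer field stays nil because an
// expected tensor was never found, the method call on the nil receiver
// still dispatches, and the first field access inside Forward panics.
package main

import "fmt"

// Linear stands in for ml/nn.LinearBatch; Weight would normally be
// populated from a tensor read out of the GGUF file.
type Linear struct {
	Weight []float32
}

func (l *Linear) Forward(x []float32) []float32 {
	// If l is nil, this l.Weight access is the nil pointer dereference.
	out := make([]float32, len(l.Weight))
	copy(out, l.Weight)
	return out
}

// MLPBlock mirrors the shape of gptoss.MLPBlock: Router is only set if
// the loader finds the matching tensor name in the model file.
type MLPBlock struct {
	Router *Linear
}

func (m *MLPBlock) Forward(x []float32) []float32 {
	return m.Router.Forward(x) // Router == nil here reproduces the panic
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			// runtime error: invalid memory address or nil pointer dereference
			fmt.Println("panic:", r)
		}
	}()
	block := &MLPBlock{} // simulate a conversion that dropped the router tensor
	block.Forward([]float32{1, 2, 3})
}
```

Go dispatches methods on nil pointer receivers without complaint, so the panic only fires at the first field access inside `Forward`; that is why it surfaces deep in `reserveWorstCaseGraph` during graph reservation rather than as a load-time error.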
Author
Owner

@artemavrin commented on GitHub (Dec 11, 2025):

Any thoughts about this?

Author
Owner

@joshuachris2001 commented on GitHub (Apr 5, 2026):

I'm going back to my strange-but-working workflow, since the suggestion did not resolve my problem. I've noticed Gemma 4 behaves similarly, so I'll look into that as well. The error in that log is `runtime error: invalid memory address or nil pointer dereference`; how about running the server in debug mode (e.g. with `OLLAMA_DEBUG=1` set in the server environment) for more information? See also the sketch below.
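
For what it's worth, debug logging alone may not explain a nil pointer; a load-time validation pass would name the missing tensor directly. A minimal sketch of such a guard (hypothetical names, not Ollama's actual API), reusing the shapes from the sketch after the Dec 9 log:

```go
package main

import "fmt"

// Linear and MLPBlock echo the earlier sketch; Router stays nil when
// conversion did not emit the expected router tensor.
type Linear struct{ Weight []float32 }
type MLPBlock struct{ Router *Linear }

// validateBlocks is a hypothetical load-time check: it names the layer
// whose router tensor is missing instead of letting Forward panic later.
func validateBlocks(blocks []*MLPBlock) error {
	for i, b := range blocks {
		if b.Router == nil {
			return fmt.Errorf("layer %d: router tensor missing (bad or incompatible conversion?)", i)
		}
	}
	return nil
}

func main() {
	// The second block simulates a layer whose router tensor was dropped.
	blocks := []*MLPBlock{{Router: &Linear{}}, {}}
	if err := validateBlocks(blocks); err != nil {
		fmt.Println("load error:", err)
	}
}
```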

Reference: github-starred/ollama#54325