[GH-ISSUE #12023] Failure to fully convert multimodal model from safetensors (Gemma3 base) #33743

Open
opened 2026-04-22 16:42:40 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @missedmyeye on GitHub (Aug 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12023

What is the issue?

I am trying to convert a fine-tuned Gemma 3 model to Ollama, following the recommended method (https://github.com/ollama/ollama/issues/9967#issuecomment-2749735940) of building from safetensors (https://github.com/ollama/ollama/blob/main/docs/modelfile.md#build-from-a-safetensors-model).
However, with the latest versions of Ollama (0.11.5/0.11.6), after conversion and quantization, the vision side of the model no longer works: Ollama crashes as soon as an image is loaded.
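For reference, the conversion flow described above can be sketched as follows. The checkpoint directory name is a placeholder, and the model tag matches the one used in this report:

```shell
# Minimal Modelfile pointing at the safetensors checkpoint directory
# (./gemma3-finetuned is a hypothetical path; use your own):
printf 'FROM ./gemma3-finetuned\n' > Modelfile

# Convert and quantize in one step (requires a running Ollama server):
if command -v ollama >/dev/null 2>&1; then
  ollama create --quantize q4_K_M gemma-test:q4_k_m -f Modelfile
fi
```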

% ollama run gemma-test:q4_k_m
>>> hello
Hello there! 👋 

How can I help you today? Just let me know what you're thinking, or if you just wanted to say hello, that's lovely too! 😊 

Do you want to:

* **Chat?** We can talk about anything!
* **Get information?** I can try to answer your questions.
* **Brainstorm ideas?** 
* **Write something?** (like a story, poem, or email)
* **Something else?**

>>> describe this image /Users/user/Downloads/gemma-test.jpg
Added image '/Users/user/Downloads/gemma-test.jpg'
Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details

Crash report: crashreport.txt (https://github.com/user-attachments/files/21929878/crashreport.txt)

Server logs are attached as well: server.log (https://github.com/user-attachments/files/22123307/server.log)

I have tried the same conversion on a workstation running Ollama 0.6.5, where it succeeded and the resulting model could process images, so I suspect a regression in the newer versions. Additionally, already-converted multimodal models from the Ollama library (e.g. gemma3:27b, https://ollama.com/library/gemma3:27b) work fine, which points to a conversion issue rather than a runtime one. I hope this can be resolved in a later update, as unlike on Linux I can't find versioned .dmg downloads for macOS to roll back to. Thank you.

Relevant log output

ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
(lldb) process attach --pid 59293
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
time=2025-08-22T12:23:11.708+08:00 level=ERROR source=server.go:1442 msg="post predict" error="Post \"http://127.0.0.1:55398/completion\": EOF"
[GIN] 2025/08/22 - 12:23:11 | 200 |   1.50235775s |       127.0.0.1 | POST     "/api/chat"
time=2025-08-22T12:23:11.708+08:00 level=ERROR source=server.go:409 msg="llama runner terminated" error="signal: abort trap"

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.11.5, 0.11.6

GiteaMirror added the bug label 2026-04-22 16:42:40 -05:00
Author
Owner

@rick-github commented on GitHub (Sep 3, 2025):

Server logs provided as well.

Where?

Author
Owner

@missedmyeye commented on GitHub (Sep 3, 2025):

I have updated the initial comment.
If it helps, I have also reproduced the issue end to end: creating the Ollama model from safetensors with quantization, running text inference (which works), and then attempting image inference, which causes the crash. See the logs from [GIN] 2025/09/03 - 23:48:24 onwards.

Here it is as well. @rick-github

[GIN] 2025/09/03 - 23:48:24 | 200 |      75.208µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/03 - 23:48:24 | 200 |   14.647708ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/09/03 - 23:50:41 | 200 |      39.791µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/03 - 23:50:50 | 201 |    5.685125ms |       127.0.0.1 | POST     "/api/blobs/sha256:50b2f405ba56a26d4913fd772089992252d7f942123cc0a034d96424221ba946"
[GIN] 2025/09/03 - 23:50:50 | 201 |    8.378584ms |       127.0.0.1 | POST     "/api/blobs/sha256:a07a7a8c390d9b47bff7ff02fcc3c26b0e721a4ab8e3b04649997b559f1e2460"
[GIN] 2025/09/03 - 23:50:50 | 201 |     15.3585ms |       127.0.0.1 | POST     "/api/blobs/sha256:bfe25c2735e395407beb78456ea9a6984a1f00d8c16fa04a8b75f2a614cf53e1"
[GIN] 2025/09/03 - 23:50:50 | 201 |     676.083µs |       127.0.0.1 | POST     "/api/blobs/sha256:3ffd5f11778dc73e2b69b3c00535e4121e1badf7018136263cd17b5b34fbaa53"
[GIN] 2025/09/03 - 23:50:50 | 201 |    1.035834ms |       127.0.0.1 | POST     "/api/blobs/sha256:61ae4cd81af7adb450484a24643bba8886b906fcd4d44d501b0928e6061fc679"
[GIN] 2025/09/03 - 23:50:50 | 201 |     744.375µs |       127.0.0.1 | POST     "/api/blobs/sha256:f688d6bb20c5017601c4011de7ca656da8485b540b05013efdaf986c0fcc918d"
[GIN] 2025/09/03 - 23:50:50 | 201 |    1.242084ms |       127.0.0.1 | POST     "/api/blobs/sha256:2f7b0adf4fb469770bb1490e3e35df87b1dc578246c5e7e6fc76ecf33213a397"
[GIN] 2025/09/03 - 23:50:50 | 201 |      195.39ms |       127.0.0.1 | POST     "/api/blobs/sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795"
[GIN] 2025/09/03 - 23:50:52 | 201 |  2.724673834s |       127.0.0.1 | POST     "/api/blobs/sha256:a411bc671848491cd482c42ba5076f7a584223f179ef534221c9e5bd88cbb7fd"
[GIN] 2025/09/03 - 23:51:34 | 201 | 44.681000458s |       127.0.0.1 | POST     "/api/blobs/sha256:c8519cb4392632517b37f329558d7de6172f62797a8688e141a2b293a4197bd0"
[GIN] 2025/09/03 - 23:51:34 | 201 | 44.712441875s |       127.0.0.1 | POST     "/api/blobs/sha256:69a54312d9b2f1e8a0a636bbaee5e5bd05972152c70586bc3b16d83943d215d3"
[GIN] 2025/09/03 - 23:51:34 | 201 | 44.709609875s |       127.0.0.1 | POST     "/api/blobs/sha256:4f6acb67766c0b4fcab443caa17a45303079fed058d384dfe0cb07041649779b"
[GIN] 2025/09/03 - 23:51:34 | 201 |   44.7693925s |       127.0.0.1 | POST     "/api/blobs/sha256:8d690a387c7d61675395f59ce5fd68432791a34571858ff09fd26af80c5729d9"
[GIN] 2025/09/03 - 23:51:34 | 201 | 44.788675458s |       127.0.0.1 | POST     "/api/blobs/sha256:17a4799bff9546d5ef04c7355c52ceb77b610677319b2c24c3fa90c483fd5d70"
[GIN] 2025/09/03 - 23:51:34 | 201 | 44.786395667s |       127.0.0.1 | POST     "/api/blobs/sha256:2793c8bd7f02b1f5ee26666d1e364baf21bae424912b6017504a55e483fdbe0d"
[GIN] 2025/09/03 - 23:51:34 | 201 | 44.789148333s |       127.0.0.1 | POST     "/api/blobs/sha256:708385b1dd5867ca114bc1539db49a6354ca52b6aef08e3997799aef317f497b"
[GIN] 2025/09/03 - 23:51:34 | 201 | 44.783040083s |       127.0.0.1 | POST     "/api/blobs/sha256:d86c76a25828a08fc99ff808be0bebb4e7406d4718beb2ae334d2ce83e0a0e54"
[GIN] 2025/09/03 - 23:51:34 | 201 |   44.7772005s |       127.0.0.1 | POST     "/api/blobs/sha256:cc6a79ae7c3d1ae6df964d98cb7d31248e307de699544670cfc7a4982bc768a7"
[GIN] 2025/09/03 - 23:51:34 | 201 |  44.81488525s |       127.0.0.1 | POST     "/api/blobs/sha256:f09d1522eb2999db6acdc542e3af4f0e984ff468a457bc92a403a14e11c4aefe"
[GIN] 2025/09/03 - 23:51:35 | 201 | 44.817336666s |       127.0.0.1 | POST     "/api/blobs/sha256:dbf0ca9e97cb35a2970c8b25a85a659b4345d2eeed0a26a48363d68534f72128"
time=2025-09-03T23:53:38.301+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-03T23:53:38.317+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
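As an aside, the fallback warnings above are expected for Gemma 3's vision tower: the K-quant formats (Q4_K, Q6_K) pack weights in 256-element super-blocks, so a tensor whose row width is not a multiple of 256 cannot use them and falls back to Q5_0/Q8_0. The vision tensors here are 1152 wide, which leaves a remainder:

```shell
# 1152 is not a multiple of the 256-element K-quant super-block size,
# hence the fallback to Q5_0 / Q8_0 for these tensors:
echo $(( 1152 % 256 ))   # prints 128
```

On their own these warnings do not necessarily indicate a broken conversion; the crash happens later, at image inference time.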
[GIN] 2025/09/03 - 23:55:27 | 200 |         3m52s |       127.0.0.1 | POST     "/api/create"
[GIN] 2025/09/03 - 23:56:21 | 200 |         634µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/03 - 23:56:21 | 200 |     7.49725ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/09/03 - 23:56:31 | 200 |      40.833µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/03 - 23:56:31 | 200 |    5.283333ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/09/03 - 23:56:41 | 200 |      70.708µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/03 - 23:56:41 | 200 |  110.130208ms |       127.0.0.1 | POST     "/api/show"
time=2025-09-03T23:56:42.064+08:00 level=INFO source=server.go:383 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/user/.ollama/models/blobs/sha256-06814493b70004b6804692f2dd84c5400989e5611e085ad50edae791f9d69a4b --port 60242"
time=2025-09-03T23:56:42.112+08:00 level=INFO source=server.go:488 msg="system memory" total="64.0 GiB" free="29.9 GiB" free_swap="0 B"
time=2025-09-03T23:56:42.113+08:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/Users/user/.ollama/models/blobs/sha256-06814493b70004b6804692f2dd84c5400989e5611e085ad50edae791f9d69a4b library=metal parallel=1 required="19.3 GiB" gpus=1
time=2025-09-03T23:56:42.114+08:00 level=INFO source=server.go:531 msg=offload library=metal layers.requested=-1 layers.model=63 layers.offload=63 layers.split=[63] memory.available="[48.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.3 GiB" memory.required.partial="19.3 GiB" memory.required.kv="944.0 MiB" memory.required.allocations="[19.3 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="522.6 MiB" memory.graph.partial="522.6 MiB" projector.weights="759.1 MiB" projector.graph="1.0 GiB"
time=2025-09-03T23:56:42.121+08:00 level=INFO source=runner.go:1006 msg="starting ollama engine"
time=2025-09-03T23:56:42.122+08:00 level=INFO source=runner.go:1043 msg="Server listening on 127.0.0.1:60242"
time=2025-09-03T23:56:42.127+08:00 level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:12 GPULayers:63[ID:0 Layers:63(0..62)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-03T23:56:42.161+08:00 level=INFO source=ggml.go:130 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
time=2025-09-03T23:56:42.167+08:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 Metal.0.BF16=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = true
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 51539.61 MB
time=2025-09-03T23:56:42.454+08:00 level=INFO source=ggml.go:486 msg="offloading 62 repeating layers to GPU"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=ggml.go:492 msg="offloading output layer to GPU"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=ggml.go:497 msg="offloaded 63/63 layers to GPU"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:310 msg="model weights" device=Metal size="16.2 GiB"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:321 msg="kv cache" device=Metal size="944.0 MiB"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:332 msg="compute graph" device=Metal size="1.1 GiB"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="16.4 MiB"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:342 msg="total memory" size="19.3 GiB"
time=2025-09-03T23:56:42.454+08:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-03T23:56:42.455+08:00 level=INFO source=server.go:1234 msg="waiting for llama runner to start responding"
time=2025-09-03T23:56:42.455+08:00 level=INFO source=server.go:1268 msg="waiting for server to become available" status="llm server loading model"
time=2025-09-03T23:56:46.719+08:00 level=INFO source=server.go:1272 msg="llama runner started in 4.66 seconds"
[GIN] 2025/09/03 - 23:56:46 | 200 |  4.845890834s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/09/03 - 23:56:56 | 200 |  6.341762792s |       127.0.0.1 | POST     "/api/chat"
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
ops.cpp:6930: fatal error
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
(lldb) process attach --pid 90668
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
error: attach failed: attach failed (Not allowed to attach to process.  Look in the console messages (Console.app), near the debugserver entries, when the attach failed.  The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
time=2025-09-03T23:57:28.826+08:00 level=ERROR source=server.go:409 msg="llama runner terminated" error="signal: abort trap"
time=2025-09-03T23:57:28.826+08:00 level=ERROR source=server.go:1442 msg="post predict" error="Post \"http://127.0.0.1:60242/completion\": EOF"
[GIN] 2025/09/03 - 23:57:28 | 200 |  1.535193666s |       127.0.0.1 | POST     "/api/chat"
= true ggml_metal_init: has residency sets = false ggml_metal_init: has bfloat = true ggml_metal_init: use bfloat = true ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB time=2025-09-03T23:56:42.454+08:00 level=INFO source=ggml.go:486 msg="offloading 62 repeating layers to GPU" time=2025-09-03T23:56:42.454+08:00 level=INFO source=ggml.go:492 msg="offloading output layer to GPU" time=2025-09-03T23:56:42.454+08:00 level=INFO source=ggml.go:497 msg="offloaded 63/63 layers to GPU" time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:310 msg="model weights" device=Metal size="16.2 GiB" time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB" time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:321 msg="kv cache" device=Metal size="944.0 MiB" time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:332 msg="compute graph" device=Metal size="1.1 GiB" time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="16.4 MiB" time=2025-09-03T23:56:42.454+08:00 level=INFO source=backend.go:342 msg="total memory" size="19.3 GiB" time=2025-09-03T23:56:42.454+08:00 level=INFO source=sched.go:473 msg="loaded runners" count=1 time=2025-09-03T23:56:42.455+08:00 level=INFO source=server.go:1234 msg="waiting for llama runner to start responding" time=2025-09-03T23:56:42.455+08:00 level=INFO source=server.go:1268 msg="waiting for server to become available" status="llm server loading model" time=2025-09-03T23:56:46.719+08:00 level=INFO source=server.go:1272 msg="llama runner started in 4.66 seconds" [GIN] 2025/09/03 - 23:56:46 | 200 | 4.845890834s | 127.0.0.1 | POST "/api/generate" [GIN] 2025/09/03 - 23:56:56 | 200 | 6.341762792s | 127.0.0.1 | POST "/api/chat" ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal 
error ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal error ops.cpp:6930: fatal error (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 (lldb) process attach --pid 90668 error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. 
The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) error: attach failed: attach failed (Not allowed to attach to process. 
Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.) time=2025-09-03T23:57:28.826+08:00 level=ERROR source=server.go:409 msg="llama runner terminated" error="signal: abort trap" time=2025-09-03T23:57:28.826+08:00 level=ERROR source=server.go:1442 msg="post predict" error="Post \"http://127.0.0.1:60242/completion\": EOF" [GIN] 2025/09/03 - 23:57:28 | 200 | 1.535193666s | 127.0.0.1 | POST "/api/chat" ```

@rick-github commented on GitHub (Sep 4, 2025):

```
ops.cpp:6930: fatal error
```

https://github.com/ollama/ollama/blob/b3e6120736e45cc47ed96fe46c8cf418cb3d8cff/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp#L6916-L6933
A tensor has an unsupported datatype in it. This is likely similar to https://github.com/ggml-org/llama.cpp/pull/15367, where the conversion process uses the wrong datatype. Since 0.6.5 works with your fine-tune, something in the ollama conversion code has undergone a regression. I'll see if I can pinpoint it.
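For context on the crash mechanics: the ggml CPU ops dispatch on the tensor's type and abort when no kernel exists for that type, which is what "fatal error" at that line amounts to. A hypothetical Go-flavoured sketch of the failure mode (illustrative names only, not ollama's or ggml's actual code):

```go
package main

import "fmt"

// tensorType stands in for ggml's per-tensor datatype enum.
type tensorType int

const (
	typeF32 tensorType = iota
	typeF16
	typeBF16
)

// forward mirrors the shape of a per-type kernel dispatch: supported
// types take a real code path, anything else falls into the default
// branch (where ggml calls abort; here we return an error instead).
func forward(t tensorType) error {
	switch t {
	case typeF32, typeF16:
		return nil // a kernel exists for this type
	default:
		return fmt.Errorf("ops.cpp-style fatal error: no kernel for tensor type %d", t)
	}
}

func main() {
	fmt.Println(forward(typeF16))  // <nil>
	fmt.Println(forward(typeBF16)) // hits the fatal default branch
}
```

So a converter emitting a datatype the runner's op table doesn't cover is enough to take the whole runner down at inference time.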

<!-- gh-comment-id:3253913889 -->

@rick-github commented on GitHub (Sep 4, 2025):

0.11.1 broke safetensors import for gemma3. This was the release that added gpt-oss support, so a lot of stuff changed.

<!-- gh-comment-id:3254170398 -->

@rick-github commented on GitHub (Sep 4, 2025):

0.11.1 started using BF16 in place of F16.

```
blk.0.attn_k.weight                 F16                         |  blk.0.attn_k.weight                 BF16
blk.0.attn_k_norm.weight            F32                            blk.0.attn_k_norm.weight            F32
blk.0.attn_norm.weight              F32                            blk.0.attn_norm.weight              F32
blk.0.attn_output.weight            F16                         |  blk.0.attn_output.weight            BF16
blk.0.attn_q.weight                 F16                         |  blk.0.attn_q.weight                 BF16
blk.0.attn_q_norm.weight            F32                            blk.0.attn_q_norm.weight            F32
blk.0.attn_v.weight                 F16                         |  blk.0.attn_v.weight                 BF16
blk.0.ffn_down.weight               F16                         |  blk.0.ffn_down.weight               BF16
blk.0.ffn_gate.weight               F16                         |  blk.0.ffn_gate.weight               BF16
blk.0.ffn_norm.weight               F32                            blk.0.ffn_norm.weight               F32
```
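Why an F16→BF16 swap is enough to break a kernel: the two formats are both 16 bits wide but lay those bits out differently (F16 has 5 exponent bits, BF16 has 8), so the same bytes decode to very different values if read with the wrong assumption. A minimal, illustrative Go decoder (not ollama code) makes the mismatch concrete:

```go
package main

import (
	"fmt"
	"math"
)

// bf16ToF32: BF16 is simply the top 16 bits of an IEEE-754 float32.
func bf16ToF32(b uint16) float32 {
	return math.Float32frombits(uint32(b) << 16)
}

// f16ToF32 decodes IEEE-754 half precision (1 sign, 5 exponent, 10 mantissa bits).
func f16ToF32(h uint16) float32 {
	sign := uint32(h>>15) & 1
	exp := uint32(h>>10) & 0x1F
	man := uint32(h) & 0x3FF
	var bits uint32
	switch {
	case exp == 0 && man == 0: // signed zero
		bits = sign << 31
	case exp == 0: // subnormal: renormalize into a float32
		e := uint32(127 - 15 + 1)
		for man&0x400 == 0 {
			man <<= 1
			e--
		}
		bits = sign<<31 | e<<23 | (man&0x3FF)<<13
	case exp == 0x1F: // Inf/NaN
		bits = sign<<31 | 0xFF<<23 | man<<13
	default: // normal: rebias exponent from 15 to 127
		bits = sign<<31 | (exp+112)<<23 | man<<13
	}
	return math.Float32frombits(bits)
}

func main() {
	// The same 16-bit pattern means very different numbers in each format:
	fmt.Println(f16ToF32(0x3C00))  // 1 (this pattern is "one" in F16)
	fmt.Println(bf16ToF32(0x3C00)) // 0.0078125 (the same bytes read as BF16)
	fmt.Println(bf16ToF32(0x3F80)) // 1 ("one" in BF16)
}
```

So even when a BF16 tensor isn't outright rejected by a kernel, reinterpreting it as F16 (or vice versa) silently corrupts the weights.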

<!-- gh-comment-id:3254210950 -->

@missedmyeye commented on GitHub (Sep 5, 2025):

I've verified that the conversion works for v0.10.1, so I will use that in the meantime. Thank you for checking.

Additionally, I tried 0.11.0 and encountered the same error.

```
ops.cpp:5859: fatal error
```

server-v0-11-0.log
https://github.com/ollama/ollama/blob/d552068413d1a3d0af69f23c2abc1e1f698ed234/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp#L5845-L5862

time=2025-09-05T11:32:04.149+08:00 level=INFO source=routes.go:1297 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/user/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-09-05T11:32:04.158+08:00 level=INFO source=images.go:477 msg="total blobs: 45"
time=2025-09-05T11:32:05.023+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 20"
time=2025-09-05T11:32:05.029+08:00 level=INFO source=routes.go:1350 msg="Listening on 127.0.0.1:11434 (version 0.11.0)"
time=2025-09-05T11:32:05.134+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="48.0 GiB" available="48.0 GiB"
[GIN] 2025/09/05 - 11:32:26 | 200 |     427.083µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/05 - 11:32:26 | 200 |    7.880458ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/09/05 - 11:32:31 | 200 |      67.833µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/05 - 11:32:40 | 201 |   11.154458ms |       127.0.0.1 | POST     "/api/blobs/sha256:61ae4cd81af7adb450484a24643bba8886b906fcd4d44d501b0928e6061fc679"
[GIN] 2025/09/05 - 11:32:40 | 201 |    10.93475ms |       127.0.0.1 | POST     "/api/blobs/sha256:2f7b0adf4fb469770bb1490e3e35df87b1dc578246c5e7e6fc76ecf33213a397"
[GIN] 2025/09/05 - 11:32:40 | 201 |    9.914458ms |       127.0.0.1 | POST     "/api/blobs/sha256:3ffd5f11778dc73e2b69b3c00535e4121e1badf7018136263cd17b5b34fbaa53"
[GIN] 2025/09/05 - 11:32:40 | 201 |    7.711584ms |       127.0.0.1 | POST     "/api/blobs/sha256:50b2f405ba56a26d4913fd772089992252d7f942123cc0a034d96424221ba946"
[GIN] 2025/09/05 - 11:32:40 | 201 |    8.380667ms |       127.0.0.1 | POST     "/api/blobs/sha256:f688d6bb20c5017601c4011de7ca656da8485b540b05013efdaf986c0fcc918d"
[GIN] 2025/09/05 - 11:32:40 | 201 |    9.261916ms |       127.0.0.1 | POST     "/api/blobs/sha256:a07a7a8c390d9b47bff7ff02fcc3c26b0e721a4ab8e3b04649997b559f1e2460"
[GIN] 2025/09/05 - 11:32:40 | 201 |   34.641458ms |       127.0.0.1 | POST     "/api/blobs/sha256:bfe25c2735e395407beb78456ea9a6984a1f00d8c16fa04a8b75f2a614cf53e1"
[GIN] 2025/09/05 - 11:32:40 | 201 |    369.8035ms |       127.0.0.1 | POST     "/api/blobs/sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795"
[GIN] 2025/09/05 - 11:32:44 | 201 |  4.569463041s |       127.0.0.1 | POST     "/api/blobs/sha256:a411bc671848491cd482c42ba5076f7a584223f179ef534221c9e5bd88cbb7fd"
[GIN] 2025/09/05 - 11:33:28 | 201 | 48.854568709s |       127.0.0.1 | POST     "/api/blobs/sha256:c8519cb4392632517b37f329558d7de6172f62797a8688e141a2b293a4197bd0"
[GIN] 2025/09/05 - 11:33:28 | 201 | 48.884267625s |       127.0.0.1 | POST     "/api/blobs/sha256:8d690a387c7d61675395f59ce5fd68432791a34571858ff09fd26af80c5729d9"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.939570584s |       127.0.0.1 | POST     "/api/blobs/sha256:69a54312d9b2f1e8a0a636bbaee5e5bd05972152c70586bc3b16d83943d215d3"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.958133334s |       127.0.0.1 | POST     "/api/blobs/sha256:2793c8bd7f02b1f5ee26666d1e364baf21bae424912b6017504a55e483fdbe0d"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.972118625s |       127.0.0.1 | POST     "/api/blobs/sha256:d86c76a25828a08fc99ff808be0bebb4e7406d4718beb2ae334d2ce83e0a0e54"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.977697667s |       127.0.0.1 | POST     "/api/blobs/sha256:708385b1dd5867ca114bc1539db49a6354ca52b6aef08e3997799aef317f497b"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.982331625s |       127.0.0.1 | POST     "/api/blobs/sha256:17a4799bff9546d5ef04c7355c52ceb77b610677319b2c24c3fa90c483fd5d70"
[GIN] 2025/09/05 - 11:33:29 | 201 |  48.97327025s |       127.0.0.1 | POST     "/api/blobs/sha256:4f6acb67766c0b4fcab443caa17a45303079fed058d384dfe0cb07041649779b"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.989163917s |       127.0.0.1 | POST     "/api/blobs/sha256:f09d1522eb2999db6acdc542e3af4f0e984ff468a457bc92a403a14e11c4aefe"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.998377459s |       127.0.0.1 | POST     "/api/blobs/sha256:cc6a79ae7c3d1ae6df964d98cb7d31248e307de699544670cfc7a4982bc768a7"
[GIN] 2025/09/05 - 11:33:29 | 201 |  49.03174925s |       127.0.0.1 | POST     "/api/blobs/sha256:dbf0ca9e97cb35a2970c8b25a85a659b4345d2eeed0a26a48363d68534f72128"
time=2025-09-05T11:35:19.356+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
[GIN] 2025/09/05 - 11:37:13 | 200 |         3m44s |       127.0.0.1 | POST     "/api/create"
[GIN] 2025/09/05 - 11:37:30 | 200 |    1.356958ms |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/05 - 11:37:30 | 200 |    7.818834ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/09/05 - 11:37:32 | 200 |       49.25µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/05 - 11:37:32 | 200 |    99.27075ms |       127.0.0.1 | POST     "/api/show"
time=2025-09-05T11:37:32.570+08:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/user/.ollama/models/blobs/sha256-06814493b70004b6804692f2dd84c5400989e5611e085ad50edae791f9d69a4b gpu=0 parallel=1 available=51539607552 required="19.3 GiB"
time=2025-09-05T11:37:32.572+08:00 level=INFO source=server.go:135 msg="system memory" total="64.0 GiB" free="33.5 GiB" free_swap="0 B"
time=2025-09-05T11:37:32.573+08:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[48.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.3 GiB" memory.required.partial="19.3 GiB" memory.required.kv="944.0 MiB" memory.required.allocations="[19.3 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="522.6 MiB" memory.graph.partial="522.6 MiB" projector.weights="759.1 MiB" projector.graph="1.0 GiB"
time=2025-09-05T11:37:32.613+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/user/.ollama/models/blobs/sha256-06814493b70004b6804692f2dd84c5400989e5611e085ad50edae791f9d69a4b --ctx-size 4096 --batch-size 512 --n-gpu-layers 63 --threads 12 --parallel 1 --port 57425"
time=2025-09-05T11:37:32.662+08:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-09-05T11:37:32.663+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-09-05T11:37:32.663+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-09-05T11:37:32.672+08:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-09-05T11:37:32.674+08:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:57425"
time=2025-09-05T11:37:32.710+08:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
time=2025-09-05T11:37:32.718+08:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 Metal.0.BF16=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:367 msg="offloading 62 repeating layers to GPU"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:373 msg="offloading output layer to GPU"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:378 msg="offloaded 63/63 layers to GPU"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:381 msg="model weights" buffer=Metal size="16.2 GiB"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="1.1 GiB"
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_load_library: using embedded metal library
time=2025-09-05T11:37:32.915+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = true
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 51539.61 MB
time=2025-09-05T11:37:43.486+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=Metal buffer_type=Metal size="1.1 GiB"
time=2025-09-05T11:37:43.486+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=BLAS buffer_type=CPU size="16.4 MiB"
time=2025-09-05T11:37:43.486+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-09-05T11:37:43.545+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=Metal buffer_type=Metal size="1.1 GiB"
time=2025-09-05T11:37:43.545+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=BLAS buffer_type=CPU size="16.4 MiB"
time=2025-09-05T11:37:43.545+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-09-05T11:37:47.486+08:00 level=INFO source=server.go:637 msg="llama runner started in 14.82 seconds"
[GIN] 2025/09/05 - 11:37:47 | 200 | 15.033219542s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/09/05 - 11:38:01 | 200 |    11.190782s |       127.0.0.1 | POST     "/api/chat"
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859:
time=2025-09-05T11:38:47.232+08:00 level=ERROR source=server.go:807 msg="post predict" error="Post \"http://127.0.0.1:57425/completion\": EOF"
[GIN] 2025/09/05 - 11:38:47 | 200 |     177.643ms |       127.0.0.1 | POST     "/api/chat"
<!-- gh-comment-id:3257012774 --> @missedmyeye commented on GitHub (Sep 5, 2025): I've verified that the conversion works for [v0.10.1](https://github.com/ollama/ollama/releases/tag/v0.10.1), so I will use that in the meantime. Thank you for checking. Additionally, I tried 0.11.0 and encountered the same error.

```
ops.cpp:5859: fatal error
```

[server-v0-11-0.log](https://github.com/user-attachments/files/22164735/server-v0-11-0.log)

https://github.com/ollama/ollama/blob/d552068413d1a3d0af69f23c2abc1e1f698ed234/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp#L5845-L5862

```
time=2025-09-05T11:32:04.149+08:00 level=INFO source=routes.go:1297 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/user/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-09-05T11:32:04.158+08:00 level=INFO source=images.go:477 msg="total blobs: 45"
time=2025-09-05T11:32:05.023+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 20"
time=2025-09-05T11:32:05.029+08:00 level=INFO source=routes.go:1350 msg="Listening on 127.0.0.1:11434 (version 0.11.0)"
time=2025-09-05T11:32:05.134+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="48.0 GiB" available="48.0 GiB"
[GIN] 2025/09/05 - 11:32:26 | 200 |    427.083µs |       127.0.0.1 | HEAD "/"
[GIN] 2025/09/05 - 11:32:26 | 200 |   7.880458ms |       127.0.0.1 | GET "/api/tags"
[GIN] 2025/09/05 - 11:32:31 | 200 |     67.833µs |       127.0.0.1 | HEAD "/"
[GIN] 2025/09/05 - 11:32:40 | 201 |  11.154458ms |       127.0.0.1 | POST "/api/blobs/sha256:61ae4cd81af7adb450484a24643bba8886b906fcd4d44d501b0928e6061fc679"
[GIN] 2025/09/05 - 11:32:40 | 201 |   10.93475ms |       127.0.0.1 | POST "/api/blobs/sha256:2f7b0adf4fb469770bb1490e3e35df87b1dc578246c5e7e6fc76ecf33213a397"
[GIN] 2025/09/05 - 11:32:40 | 201 |   9.914458ms |       127.0.0.1 | POST "/api/blobs/sha256:3ffd5f11778dc73e2b69b3c00535e4121e1badf7018136263cd17b5b34fbaa53"
[GIN] 2025/09/05 - 11:32:40 | 201 |   7.711584ms |       127.0.0.1 | POST "/api/blobs/sha256:50b2f405ba56a26d4913fd772089992252d7f942123cc0a034d96424221ba946"
[GIN] 2025/09/05 - 11:32:40 | 201 |   8.380667ms |       127.0.0.1 | POST "/api/blobs/sha256:f688d6bb20c5017601c4011de7ca656da8485b540b05013efdaf986c0fcc918d"
[GIN] 2025/09/05 - 11:32:40 | 201 |   9.261916ms |       127.0.0.1 | POST "/api/blobs/sha256:a07a7a8c390d9b47bff7ff02fcc3c26b0e721a4ab8e3b04649997b559f1e2460"
[GIN] 2025/09/05 - 11:32:40 | 201 |  34.641458ms |       127.0.0.1 | POST "/api/blobs/sha256:bfe25c2735e395407beb78456ea9a6984a1f00d8c16fa04a8b75f2a614cf53e1"
[GIN] 2025/09/05 - 11:32:40 | 201 |   369.8035ms |       127.0.0.1 | POST "/api/blobs/sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795"
[GIN] 2025/09/05 - 11:32:44 | 201 | 4.569463041s |       127.0.0.1 | POST "/api/blobs/sha256:a411bc671848491cd482c42ba5076f7a584223f179ef534221c9e5bd88cbb7fd"
[GIN] 2025/09/05 - 11:33:28 | 201 | 48.854568709s |      127.0.0.1 | POST "/api/blobs/sha256:c8519cb4392632517b37f329558d7de6172f62797a8688e141a2b293a4197bd0"
[GIN] 2025/09/05 - 11:33:28 | 201 | 48.884267625s |      127.0.0.1 | POST "/api/blobs/sha256:8d690a387c7d61675395f59ce5fd68432791a34571858ff09fd26af80c5729d9"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.939570584s |      127.0.0.1 | POST "/api/blobs/sha256:69a54312d9b2f1e8a0a636bbaee5e5bd05972152c70586bc3b16d83943d215d3"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.958133334s |      127.0.0.1 | POST "/api/blobs/sha256:2793c8bd7f02b1f5ee26666d1e364baf21bae424912b6017504a55e483fdbe0d"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.972118625s |      127.0.0.1 | POST "/api/blobs/sha256:d86c76a25828a08fc99ff808be0bebb4e7406d4718beb2ae334d2ce83e0a0e54"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.977697667s |      127.0.0.1 | POST "/api/blobs/sha256:708385b1dd5867ca114bc1539db49a6354ca52b6aef08e3997799aef317f497b"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.982331625s |      127.0.0.1 | POST "/api/blobs/sha256:17a4799bff9546d5ef04c7355c52ceb77b610677319b2c24c3fa90c483fd5d70"
[GIN] 2025/09/05 - 11:33:29 | 201 |  48.97327025s |      127.0.0.1 | POST "/api/blobs/sha256:4f6acb67766c0b4fcab443caa17a45303079fed058d384dfe0cb07041649779b"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.989163917s |      127.0.0.1 | POST "/api/blobs/sha256:f09d1522eb2999db6acdc542e3af4f0e984ff468a457bc92a403a14e11c4aefe"
[GIN] 2025/09/05 - 11:33:29 | 201 | 48.998377459s |      127.0.0.1 | POST "/api/blobs/sha256:cc6a79ae7c3d1ae6df964d98cb7d31248e307de699544670cfc7a4982bc768a7"
[GIN] 2025/09/05 - 11:33:29 | 201 |  49.03174925s |      127.0.0.1 | POST "/api/blobs/sha256:dbf0ca9e97cb35a2970c8b25a85a659b4345d2eeed0a26a48363d68534f72128"
time=2025-09-05T11:35:19.356+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q4_K - using fallback quantization Q5_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
time=2025-09-05T11:35:19.358+08:00 level=WARN source=quantization.go:145 msg="tensor cols 1152 are not divisible by 256, required for Q6_K - using fallback quantization Q8_0"
[GIN] 2025/09/05 - 11:37:13 | 200 |         3m44s |      127.0.0.1 | POST "/api/create"
[GIN] 2025/09/05 - 11:37:30 | 200 |    1.356958ms |      127.0.0.1 | HEAD "/"
[GIN] 2025/09/05 - 11:37:30 | 200 |    7.818834ms |      127.0.0.1 | GET "/api/tags"
[GIN] 2025/09/05 - 11:37:32 | 200 |       49.25µs |      127.0.0.1 | HEAD "/"
[GIN] 2025/09/05 - 11:37:32 | 200 |    99.27075ms |      127.0.0.1 | POST "/api/show"
time=2025-09-05T11:37:32.570+08:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/user/.ollama/models/blobs/sha256-06814493b70004b6804692f2dd84c5400989e5611e085ad50edae791f9d69a4b gpu=0 parallel=1 available=51539607552 required="19.3 GiB"
time=2025-09-05T11:37:32.572+08:00 level=INFO source=server.go:135 msg="system memory" total="64.0 GiB" free="33.5 GiB" free_swap="0 B"
time=2025-09-05T11:37:32.573+08:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[48.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.3 GiB" memory.required.partial="19.3 GiB" memory.required.kv="944.0 MiB" memory.required.allocations="[19.3 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="522.6 MiB" memory.graph.partial="522.6 MiB" projector.weights="759.1 MiB" projector.graph="1.0 GiB"
time=2025-09-05T11:37:32.613+08:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/user/.ollama/models/blobs/sha256-06814493b70004b6804692f2dd84c5400989e5611e085ad50edae791f9d69a4b --ctx-size 4096 --batch-size 512 --n-gpu-layers 63 --threads 12 --parallel 1 --port 57425"
time=2025-09-05T11:37:32.662+08:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-09-05T11:37:32.663+08:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-09-05T11:37:32.663+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-09-05T11:37:32.672+08:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-09-05T11:37:32.674+08:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:57425"
time=2025-09-05T11:37:32.710+08:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
time=2025-09-05T11:37:32.718+08:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 Metal.0.BF16=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:367 msg="offloading 62 repeating layers to GPU"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:373 msg="offloading output layer to GPU"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:378 msg="offloaded 63/63 layers to GPU"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:381 msg="model weights" buffer=Metal size="16.2 GiB"
time=2025-09-05T11:37:32.820+08:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="1.1 GiB"
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_load_library: using embedded metal library
time=2025-09-05T11:37:32.915+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = true
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 51539.61 MB
time=2025-09-05T11:37:43.486+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=Metal buffer_type=Metal size="1.1 GiB"
time=2025-09-05T11:37:43.486+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=BLAS buffer_type=CPU size="16.4 MiB"
time=2025-09-05T11:37:43.486+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-09-05T11:37:43.545+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=Metal buffer_type=Metal size="1.1 GiB"
time=2025-09-05T11:37:43.545+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=BLAS buffer_type=CPU size="16.4 MiB"
time=2025-09-05T11:37:43.545+08:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-09-05T11:37:47.486+08:00 level=INFO source=server.go:637 msg="llama runner started in 14.82 seconds"
[GIN] 2025/09/05 - 11:37:47 | 200 | 15.033219542s |      127.0.0.1 | POST "/api/generate"
[GIN] 2025/09/05 - 11:38:01 | 200 |    11.190782s |      127.0.0.1 | POST "/api/chat"
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859: fatal error
ops.cpp:5859:
time=2025-09-05T11:38:47.232+08:00 level=ERROR source=server.go:807 msg="post predict" error="Post \"http://127.0.0.1:57425/completion\": EOF"
[GIN] 2025/09/05 - 11:38:47 | 200 |     177.643ms |      127.0.0.1 | POST "/api/chat"
```
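The repeated `quantization.go:145` warnings in the log above have a simple arithmetic cause: the K-quant formats (Q4_K, Q6_K) pack weights into 256-element super-blocks, so they can only be applied to tensors whose row length is a multiple of 256. Gemma3's vision tower uses 1152-wide tensors, and 1152 is not divisible by 256, so the quantizer falls back to the legacy Q5_0/Q8_0 formats for those tensors. The sketch below illustrates that selection logic; it is not Ollama's actual `quantization.go` code, and the names `pick_quant` and `FALLBACK` are made up for illustration (only the fallback pairs Q4_K→Q5_0 and Q6_K→Q8_0 come from the log itself).

```python
# Illustrative sketch (not Ollama's actual code) of the K-quant fallback
# behind the "tensor cols 1152 are not divisible by 256" warnings.

K_QUANT_BLOCK = 256  # super-block size shared by the K-quant formats

# Fallback pairs as reported in the server log (Q4_K -> Q5_0, Q6_K -> Q8_0).
FALLBACK = {"Q4_K": "Q5_0", "Q6_K": "Q8_0"}

def pick_quant(cols: int, wanted: str) -> str:
    """Return the quant type actually usable for a tensor with `cols` columns."""
    if wanted in FALLBACK and cols % K_QUANT_BLOCK != 0:
        return FALLBACK[wanted]
    return wanted

# 1152 = 4.5 * 256, so the vision-tower tensors cannot use K-quants:
print(pick_quant(1152, "Q4_K"))  # Q5_0
print(pick_quant(1152, "Q6_K"))  # Q8_0
print(pick_quant(4096, "Q4_K"))  # Q4_K (4096 is divisible by 256)
```

Note that these fallbacks are only warnings, not the crash itself: the `ops.cpp:5859: fatal error` occurs later, while the vision projector is evaluated on an image.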
Reference: github-starred/ollama#33743