[GH-ISSUE #12238] Apple M2 Max causes panic: error computing ggml graph: -1 with gemma3:latest and qwen2.5vl:latest #54655

Closed
opened 2026-04-29 06:46:29 -05:00 by GiteaMirror · 1 comment

Originally created by @moncapitaine on GitHub (Sep 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12238

What is the issue?

This happens in roughly 60% of attempts. It does not matter whether I use /api/chat or /api/generate, or whether I stream or not.
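For reference, the failing requests look roughly like this. This is only a sketch: test.jpg stands in for whatever image I attach, and the prompt is a placeholder, not the exact one I used.

IMG=$(base64 -i test.jpg)   # macOS base64; outputs a single unwrapped line
curl -s http://127.0.0.1:11434/api/chat -d '{
  "model": "qwen2.5vl:latest",
  "stream": false,
  "messages": [
    { "role": "user", "content": "Describe this image", "images": ["'"$IMG"'"] }
  ]
}'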

I did a completely fresh install of Ollama 0.11.10 and removed the old directories as described here: https://github.com/ollama/ollama/blob/main/docs/macos.md

My main concern is that I cannot reproduce it deterministically. I tried different environment settings (GPU=0, restricting max RAM, etc.), but nothing seems to help.

I have hit this problem with several images, both smaller and larger ones; I am only mentioning the ones that work from time to time.

A typical final "done" chunk looks like this:

1. **Brand Name**: The brand name is "...
{
  "message": {
    "role": "assistant",
    "content": ""
  },
  "done": true,
  "total_duration": 30343321958,
  "load_duration": 73879083,
  "prompt_eval_count": 1307,
  "prompt_eval_duration": 22409707625,
  "eval_count": 391,
  "eval_duration": 7857276084
}

This problem does not seem to happen when I query the same model with the same image through the Ollama Chat UI.

Both environments (the app and `ollama serve` from the CLI) appear to be identical:

HTTPS_PROXY: 
HTTP_PROXY: 
NO_PROXY: 
OLLAMA_CONTEXT_LENGTH:4096 
OLLAMA_DEBUG:INFO 
OLLAMA_FLASH_ATTENTION:false 
OLLAMA_GPU_OVERHEAD:0 
OLLAMA_HOST:http://127.0.0.1:11434 
OLLAMA_KEEP_ALIVE:5m0s 
OLLAMA_KV_CACHE_TYPE: 
OLLAMA_LLM_LIBRARY: 
OLLAMA_LOAD_TIMEOUT:5m0s 
OLLAMA_MAX_LOADED_MODELS:0 
OLLAMA_MAX_QUEUE:512 
OLLAMA_MODELS:/Users/michael/.ollama/models 
OLLAMA_MULTIUSER_CACHE:false 
OLLAMA_NEW_ENGINE:false 
OLLAMA_NEW_ESTIMATES:false 
OLLAMA_NOHISTORY:false 
OLLAMA_NOPRUNE:false 
OLLAMA_NUM_PARALLEL:1 
OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*]
OLLAMA_SCHED_SPREAD:false 
http_proxy: 
https_proxy: 
no_proxy:
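To rule out configuration, I toggled settings between runs. A rough sketch of the mechanisms I used, with an example variable (the exact knobs and combinations I tried varied):

# When running the server from the CLI:
OLLAMA_FLASH_ATTENTION=1 ollama serve

# When using the Mac app, set the variable via launchd and restart Ollama:
launchctl setenv OLLAMA_FLASH_ATTENTION 1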

Relevant log output

Test with qwen2.5vl:latest

time=2025-09-10T09:44:52.607+02:00 level=INFO source=server.go:398 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/michael/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --port 54762"
time=2025-09-10T09:44:52.612+02:00 level=INFO source=server.go:503 msg="system memory" total="96.0 GiB" free="70.1 GiB" free_swap="0 B"
time=2025-09-10T09:44:52.614+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/Users/michael/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 library=metal parallel=1 required="8.0 GiB" gpus=1
time=2025-09-10T09:44:52.615+02:00 level=INFO source=server.go:543 msg=offload library=metal layers.requested=-1 layers.model=29 layers.offload=29 layers.split=[29] memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.0 GiB" memory.required.partial="8.0 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[8.0 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="261.3 MiB" memory.graph.partial="261.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-09-10T09:44:52.620+02:00 level=INFO source=runner.go:1251 msg="starting ollama engine"
time=2025-09-10T09:44:52.621+02:00 level=INFO source=runner.go:1286 msg="Server listening on 127.0.0.1:54762"
time=2025-09-10T09:44:52.626+02:00 level=INFO source=runner.go:1170 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:29[ID:0 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-10T09:44:52.644+02:00 level=INFO source=ggml.go:131 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=36
time=2025-09-10T09:44:52.645+02:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 Metal.0.BF16=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = true
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 77309.41 MB
time=2025-09-10T09:44:52.863+02:00 level=INFO source=ggml.go:487 msg="offloading 28 repeating layers to GPU"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=ggml.go:498 msg="offloaded 29/29 layers to GPU"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=backend.go:310 msg="model weights" device=Metal size="5.3 GiB"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="292.4 MiB"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=backend.go:321 msg="kv cache" device=Metal size="224.0 MiB"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=backend.go:332 msg="compute graph" device=Metal size="1.7 GiB"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="16.8 MiB"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=backend.go:342 msg="total memory" size="7.4 GiB"
time=2025-09-10T09:44:52.863+02:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-10T09:44:52.863+02:00 level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-10T09:44:52.885+02:00 level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
time=2025-09-10T09:44:54.391+02:00 level=INFO source=server.go:1288 msg="llama runner started in 1.78 seconds"
ggml_metal_graph_compute: command buffer 0 failed with status 5
error: Internal Error (0000000e:Internal Error)
panic: error computing ggml graph: -1

goroutine 9 [running]:
github.com/ollama/ollama/ml/backend/ggml.(*Context).ComputeWithNotify(0x140000da040, 0x0?, {0x14000284000, 0x1, 0x152df1601?})
	/Users/runner/work/ollama/ollama/ml/backend/ggml/ggml.go:772 +0x32c
github.com/ollama/ollama/ml/backend/ggml.(*Context).Compute(0x140000da040?, {0x14000284000?, 0x1?, 0x100d58794?})
	/Users/runner/work/ollama/ollama/ml/backend/ggml/ggml.go:762 +0x30
github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getTensor(0x14000b03868?, {0x1020214d0, 0x140001396b0}, {0x102025be0, 0x140000da000}, {0x1020306e8, 0x14000cd09f0}, 0x0)
	/Users/runner/work/ollama/ollama/runner/ollamarunner/multimodal.go:90 +0x234
github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getMultimodal(0x14001786f00, {0x1020214d0, 0x140001396b0}, {0x102025be0, 0x140000da000}, {0x14000ce6d20, 0x1, 0x20?}, 0x0)
	/Users/runner/work/ollama/ollama/runner/ollamarunner/multimodal.go:56 +0xa8
github.com/ollama/ollama/runner/ollamarunner.(*Server).forwardBatch(_, {0x0, {0x102025be0, 0x14000ce20c0}, {0x1020306e8, 0x14000c62948}, {0x1400028b000, 0x16, 0x20}, {{0x1020306e8, ...}, ...}, ...})
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:543 +0xbbc
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0x14000208f00, {0x10201c6b0, 0x140000d3ea0})
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:420 +0x15c
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1265 +0x470

Test with gemma3:latest

[GIN] 2025/09/10 - 18:55:53 | 200 | 16.873529708s |       127.0.0.1 | POST     "/api/chat"
time=2025-09-10T18:56:27.597+02:00 level=INFO source=server.go:398 msg="starting runner" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/michael/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --port 53667"
time=2025-09-10T18:56:27.602+02:00 level=INFO source=server.go:503 msg="system memory" total="96.0 GiB" free="67.1 GiB" free_swap="0 B"
time=2025-09-10T18:56:27.603+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/Users/michael/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 library=metal parallel=1 required="5.4 GiB" gpus=1
time=2025-09-10T18:56:27.604+02:00 level=INFO source=server.go:543 msg=offload library=metal layers.requested=-1 layers.model=35 layers.offload=35 layers.split=[35] memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.4 GiB" memory.required.partial="5.4 GiB" memory.required.kv="254.0 MiB" memory.required.allocations="[5.4 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="517.0 MiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-09-10T18:56:27.610+02:00 level=INFO source=runner.go:1251 msg="starting ollama engine"
time=2025-09-10T18:56:27.610+02:00 level=INFO source=runner.go:1286 msg="Server listening on 127.0.0.1:53667"
time=2025-09-10T18:56:27.616+02:00 level=INFO source=runner.go:1170 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:8 GPULayers:35[ID:0 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-10T18:56:27.656+02:00 level=INFO source=ggml.go:131 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
time=2025-09-10T18:56:27.657+02:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 Metal.0.BF16=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = true
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 77309.41 MB
time=2025-09-10T18:56:27.843+02:00 level=INFO source=ggml.go:487 msg="offloading 34 repeating layers to GPU"
time=2025-09-10T18:56:27.843+02:00 level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
time=2025-09-10T18:56:27.843+02:00 level=INFO source=ggml.go:498 msg="offloaded 35/35 layers to GPU"
time=2025-09-10T18:56:27.844+02:00 level=INFO source=backend.go:310 msg="model weights" device=Metal size="3.1 GiB"
time=2025-09-10T18:56:27.844+02:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="525.0 MiB"
time=2025-09-10T18:56:27.844+02:00 level=INFO source=backend.go:321 msg="kv cache" device=Metal size="254.0 MiB"
time=2025-09-10T18:56:27.844+02:00 level=INFO source=backend.go:332 msg="compute graph" device=Metal size="1.1 GiB"
time=2025-09-10T18:56:27.844+02:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.0 MiB"
time=2025-09-10T18:56:27.844+02:00 level=INFO source=backend.go:342 msg="total memory" size="5.0 GiB"
time=2025-09-10T18:56:27.844+02:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-10T18:56:27.844+02:00 level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-10T18:56:27.848+02:00 level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
time=2025-09-10T18:56:28.099+02:00 level=INFO source=server.go:1288 msg="llama runner started in 0.50 seconds"
[GIN] 2025/09/10 - 18:56:40 | 200 | 12.656067458s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/09/10 - 18:56:54 | 200 | 14.497685459s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/09/10 - 18:57:07 | 200 | 12.689051333s |       127.0.0.1 | POST     "/api/chat"
ggml_metal_graph_compute: command buffer 0 failed with status 5
error: Internal Error (0000000e:Internal Error)
panic: error computing ggml graph: -1

goroutine 12 [running]:
github.com/ollama/ollama/ml/backend/ggml.(*Context).ComputeWithNotify(0x140017f4440, 0x102904778?, {0x14001832040, 0x1, 0x134473601?})
	/Users/runner/work/ollama/ollama/ml/backend/ggml/ggml.go:772 +0x32c
github.com/ollama/ollama/ml/backend/ggml.(*Context).Compute(0x140017f4440?, {0x14001832040?, 0x1?, 0x14000052008?})
	/Users/runner/work/ollama/ollama/ml/backend/ggml/ggml.go:762 +0x30
github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getTensor(0x14001649868?, {0x10370d4d0, 0x140000d96b0}, {0x103711be0, 0x140017f4400}, {0x10371c6e8, 0x14001847bf0}, 0x0)
	/Users/runner/work/ollama/ollama/runner/ollamarunner/multimodal.go:90 +0x234
github.com/ollama/ollama/runner/ollamarunner.multimodalStore.getMultimodal(0x1400219c210, {0x10370d4d0, 0x140000d96b0}, {0x103711be0, 0x140017f4400}, {0x140002863c0, 0x1, 0x2?}, 0x0)
	/Users/runner/work/ollama/ollama/runner/ollamarunner/multimodal.go:56 +0xa8
github.com/ollama/ollama/runner/ollamarunner.(*Server).forwardBatch(_, {0x360, {0x103711be0, 0x140018b2000}, {0x10371c6e8, 0x140018d4348}, {0x14000076068, 0x1, 0x1}, {{0x10371c6e8, ...}, ...}, ...})
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:543 +0xbbc
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0x140001541e0, {0x1037086b0, 0x1400037c190})
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:420 +0x15c
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1265 +0x470
time=2025-09-10T18:57:15.333+02:00 level=ERROR source=server.go:1458 msg="post predict" error="Post \"http://127.0.0.1:53667/completion\": EOF"
time=2025-09-10T18:57:15.333+02:00 level=ERROR source=server.go:424 msg="llama runner terminated" error="exit status 2"

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.11.10

GiteaMirror added the bug label 2026-04-29 06:46:29 -05:00

@jessegross commented on GitHub (Sep 10, 2025):

Thanks for the logs and investigation. There is an open bug tracking this, #10986; please focus any discussion there.


Reference: github-starred/ollama#54655