[GH-ISSUE #10395] Gemma 3 vision random text #68889

Closed
opened 2026-05-04 15:32:18 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @krishna-winzo on GitHub (Apr 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10395

What is the issue?

Gemma 3's vision generates random text when prompted with an image.

Relevant log output

ollama run gemma3
pulling manifest 
pulling aeda25e63ebd: 100% ▕██████████████████▏ 3.3 GB                         
pulling e0a42594d802: 100% ▕██████████████████▏  358 B                         
pulling dd084c7d92a3: 100% ▕██████████████████▏ 8.4 KB                         
pulling 3116c5225075: 100% ▕██████████████████▏   77 B                         
pulling b6ae5839783f: 100% ▕██████████████████▏  489 B                         
verifying sha256 digest 
writing manifest 
success 
>>> what is in this image: /Users/krishna/Desktop/sample_screenshot.png
Added image '/Users/krishna/Desktop/sample_screenshot.png'
Min
justiceergjaergergergerg)'),'),(''),('เรียน'),(''));('')),(''));(''))')'Minjusticeergjaergergergerg)'),'),(''),('เรียน'),(''));('')),(''));(''))')')')')')')')')(''))(''))(''))(''));(''))')')(''));(''))')(''));(''))')(''));(')')')')')')(''))(''))(''))(''));(''))')')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''))'))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')();(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));('')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));('')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));(''))')(''));('^C

>>> 
>>> /Users/krishna/Desktop/sample_screenshot.png what is in this image
Added image '/Users/krishna/Desktop/sample_screenshot.png'
ります^C

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.6.6

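For reference, the same prompt can also be reproduced against the local HTTP API instead of the interactive CLI. This is a minimal sketch using the image path from the report above; adjust the path and model tag as needed:

```shell
# Reproduce the image prompt via the HTTP API rather than `ollama run`.
# The image is sent as base64 in the "images" field of /api/generate.
curl http://localhost:11434/api/generate -d "{
  \"model\": \"gemma3\",
  \"prompt\": \"what is in this image?\",
  \"stream\": false,
  \"images\": [\"$(base64 -i /Users/krishna/Desktop/sample_screenshot.png)\"]
}"
```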
GiteaMirror added the macos and bug labels 2026-05-04 15:33:10 -05:00
Author
Owner

@rick-github commented on GitHub (Apr 24, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

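For reference, on the macOS app the linked troubleshooting guide describes where the server log lives; a quick way to capture it, assuming the default log location:

```shell
# macOS app: the server log is written under ~/.ollama/logs (see the linked troubleshooting guide).
tail -f ~/.ollama/logs/server.log
```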
Author
Owner

@mchiang0610 commented on GitHub (Apr 25, 2025):

@krishna-winzo would it also be possible to share the sample screenshot (if it's appropriate and not sensitive)? Okay if not; I'm just trying to reproduce this.

Author
Owner

@krishna-winzo commented on GitHub (Apr 25, 2025):

@mchiang0610 It's working for other models; the issue seems to be specific to gemma3.

![Image](https://github.com/user-attachments/assets/13c3ded5-5c2a-4386-89d5-ec09542ea0d5)
![Image](https://github.com/user-attachments/assets/3c16b185-7750-44cc-ba5a-3dd05ed905e2)
Author
Owner

@krishna-winzo commented on GitHub (Apr 25, 2025):

@rick-github please find the logs
[GIN] 2025/04/25 - 11:15:38 | 200 | 2.890583ms | 127.0.0.1 | HEAD "/"
time=2025-04-25T11:15:38.658+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T11:15:38.708+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/25 - 11:15:38 | 200 | 119.20675ms | 127.0.0.1 | POST "/api/show"
time=2025-04-25T11:15:38.762+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T11:15:38.810+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T11:15:38.855+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T11:15:38.858+05:30 level=INFO source=sched.go:722 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/peddi/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 gpu=0 parallel=4 available=11453251584 required="5.8 GiB"
time=2025-04-25T11:15:38.862+05:30 level=INFO source=server.go:105 msg="system memory" total="16.0 GiB" free="3.1 GiB" free_swap="0 B"
time=2025-04-25T11:15:38.865+05:30 level=INFO source=server.go:138 msg=offload library=metal layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[10.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.8 GiB" memory.required.partial="5.8 GiB" memory.required.kv="682.0 MiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="517.0 MiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-25T11:15:38.989+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T11:15:38.993+05:30 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-25T11:15:39.000+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-25T11:15:39.000+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-25T11:15:39.000+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-25T11:15:39.000+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-25T11:15:39.000+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-25T11:15:39.001+05:30 level=INFO source=server.go:405 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/peddi/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25 --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --threads 4 --parallel 4 --port 57311"
time=2025-04-25T11:15:39.003+05:30 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-25T11:15:39.003+05:30 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-25T11:15:39.004+05:30 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-25T11:15:39.021+05:30 level=INFO source=runner.go:866 msg="starting ollama engine"
time=2025-04-25T11:15:39.022+05:30 level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:57311"
time=2025-04-25T11:15:39.119+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T11:15:39.120+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-04-25T11:15:39.120+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-04-25T11:15:39.120+05:30 level=INFO source=ggml.go:72 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=36
time=2025-04-25T11:15:39.126+05:30 level=INFO source=ggml.go:109 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-04-25T11:15:39.256+05:30 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
time=2025-04-25T11:15:39.290+05:30 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="525.0 MiB"
time=2025-04-25T11:15:39.290+05:30 level=INFO source=ggml.go:298 msg="model weights" buffer=Metal size="3.1 GiB"
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1
ggml_metal_init: picking default device: Apple M1
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name: Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets = false
ggml_metal_init: has bfloat = true
ggml_metal_init: use bfloat = false
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: skipping kernel_get_rows_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported)
time=2025-04-25T11:15:43.321+05:30 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-25T11:15:43.328+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-25T11:15:43.328+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-25T11:15:43.328+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-25T11:15:43.328+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-25T11:15:43.328+05:30 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-25T11:15:43.539+05:30 level=INFO source=ggml.go:556 msg="compute graph" backend=Metal buffer_type=Metal size="162.0 MiB"
time=2025-04-25T11:15:43.539+05:30 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="5.0 MiB"
time=2025-04-25T11:15:43.775+05:30 level=INFO source=server.go:619 msg="llama runner started in 4.77 seconds"
[GIN] 2025/04/25 - 11:15:43 | 200 | 5.062082125s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/04/25 - 11:15:49 | 200 | 58.875µs | 127.0.0.1 | HEAD "/"
time=2025-04-25T11:15:50.061+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-04-25T11:15:50.108+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/25 - 11:15:50 | 200 | 120.289792ms | 127.0.0.1 | POST "/api/show"
time=2025-04-25T11:15:50.163+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/04/25 - 11:15:50 | 200 | 49.2685ms | 127.0.0.1 | POST "/api/generate"
time=2025-04-25T11:15:59.423+05:30 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
ggml_metal_graph_compute: command buffer 1 failed with status 5
error: Internal Error (0000000e:Internal Error)
[GIN] 2025/04/25 - 11:16:44 | 200 | 45.188370666s | 127.0.0.1 | POST "/api/chat"

Author
Owner

@rick-github commented on GitHub (Apr 25, 2025):

I think Michael was asking for the screenshot you are giving to ollama, the one showing a computer screen displaying the GlobalProtect website.

Author
Owner

@rick-github commented on GitHub (Apr 25, 2025):

ggml_metal_graph_compute: command buffer 1 failed with status 5
error: Internal Error (0000000e:Internal Error)

This is not encouraging. Could I ask you to reformat the logs so that they're easier to read? Either that, or attach them to this post.

Author
Owner

@krishna-winzo commented on GitHub (Apr 26, 2025):

@mchiang0610 This is the image I am using. It's not specific to this image; I tried with others too and also got random text.

![Image](https://github.com/user-attachments/assets/73d32dd8-8a3c-455e-84c9-0eacfe055437)
Author
Owner

@krishna-winzo commented on GitHub (Apr 26, 2025):

> ggml_metal_graph_compute: command buffer 1 failed with status 5
> error: Internal Error (0000000e:Internal Error)
>
> This is not encouraging. Could I ask you to reformat the logs so that they're easier to read? Either that, or attach them to this post.

@rick-github I've reformatted the log above.

Author
Owner

@dunklesToast commented on GitHub (May 16, 2025):

I'm having the same issue with gemma3:4b and gemma3:12b on a 16" MacBook Pro (M2 Max, 32 GB):

  • macOS: 15.1.1
  • ollama: 0.7.0 (downloaded from the website)
Server Log
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 22906.50 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f16                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96       (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
time=2025-05-16T13:38:54.009+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=Metal buffer_type=Metal size="308.0 MiB"
time=2025-05-16T13:38:54.009+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=BLAS buffer_type=CPU size="7.5 MiB"
time=2025-05-16T13:38:54.009+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-05-16T13:38:54.086+02:00 level=INFO source=server.go:630 msg="llama runner started in 2.51 seconds"
[GIN] 2025/05/16 - 13:38:54 | 200 |  2.626168916s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-16T13:39:06.447+02:00 level=INFO source=routes.go:1205 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/tom/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-05-16T13:39:06.448+02:00 level=INFO source=images.go:463 msg="total blobs: 13"
time=2025-05-16T13:39:06.448+02:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-16T13:39:06.448+02:00 level=INFO source=routes.go:1258 msg="Listening on 127.0.0.1:11434 (version 0.7.0)"
time=2025-05-16T13:39:06.527+02:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="21.3 GiB" available="21.3 GiB"
[GIN] 2025/05/16 - 13:39:13 | 200 |      85.667µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/16 - 13:39:13 | 200 |   78.906792ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-16T13:39:13.193+02:00 level=INFO source=sched.go:777 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de gpu=0 parallel=2 available=22906503168 required="11.7 GiB"
time=2025-05-16T13:39:13.193+02:00 level=INFO source=server.go:135 msg="system memory" total="32.0 GiB" free="19.5 GiB" free_swap="0 B"
time=2025-05-16T13:39:13.195+02:00 level=INFO source=server.go:168 msg=offload library=metal layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[21.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.7 GiB" memory.required.partial="11.7 GiB" memory.required.kv="1.3 GiB" memory.required.allocations="[11.7 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="519.5 MiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-05-16T13:39:13.235+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 2 --port 61290"
time=2025-05-16T13:39:13.237+02:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-16T13:39:13.237+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-16T13:39:13.237+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-16T13:39:13.246+02:00 level=INFO source=runner.go:836 msg="starting ollama engine"
time=2025-05-16T13:39:13.246+02:00 level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:61290"
time=2025-05-16T13:39:13.283+02:00 level=INFO source=ggml.go:73 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
time=2025-05-16T13:39:13.285+02:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-05-16T13:39:13.365+02:00 level=INFO source=ggml.go:299 msg="model weights" buffer=Metal size="7.6 GiB"
time=2025-05-16T13:39:13.365+02:00 level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="787.5 MiB"
time=2025-05-16T13:39:13.489+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 22906.50 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f16                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96       (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
time=2025-05-16T13:39:15.669+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=Metal buffer_type=Metal size="308.0 MiB"
time=2025-05-16T13:39:15.669+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=BLAS buffer_type=CPU size="7.5 MiB"
time=2025-05-16T13:39:15.669+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-05-16T13:39:15.744+02:00 level=INFO source=server.go:630 msg="llama runner started in 2.51 seconds"
[GIN] 2025/05/16 - 13:39:15 | 200 |  2.624408083s |       127.0.0.1 | POST     "/api/generate"
ggml_metal_graph_compute: command buffer 1 failed with status 5
error: Internal Error (0000000e:Internal Error)
[GIN] 2025/05/16 - 13:39:33 | 200 |    12.896263s |       127.0.0.1 | POST     "/api/chat"

Used Image:

![Image](https://github.com/user-attachments/assets/499f241d-5b7b-448d-bdeb-21172619ef7e)

Chat:

➜  ~ ollama run gemma3:12b
>>> Whats in this image? /Users/tom/Desktop/band.png
Added image '/Users/tom/Desktop/band.png'
 outlined மட்டுமே next不存在схснуснюс… …сютню … …снет … … сч … … сч … … сч … сч … сч … сч … сч … сч … сч … сч … сч … сч … сч … сч … сч … сч … сч … … с … с … с … с 
… с … с … с … с … с … … … с … с … с … с … с … … с … с … … … с … с … … с … с … … с … с … … с … с … … с … с … … с … с … … с … с … …^C

This happened multiple times with the gemma model but resulted in different outputs. One time the model just printed `model` all over the terminal; another time it began with `dinner dinner dinner dinner dinner` and then just `[...](...)`.

If you need more debugging info, let me know.

Author
Owner

@rick-github commented on GitHub (May 18, 2025):

$ ollama run gemma3:12b
>>> Whats in this image? /home/rick/band.png
Added image '/home/rick/band.png'
Here's a breakdown of what's in the image:

**Overall:**

*   **It's a visual representation of a digital audio filter.** This is 
likely a plugin or module within a music production software (like Ableton 
Live, Logic Pro, etc.).

**Key Elements:**

*   **"FILTER" Title:** Clearly indicates the purpose of the interface.
*   **"Band 24":** Suggests this is part of a larger filter bank or a 
specific band within a filter.
*   **Waveform Display:** The blue line shows a graphical representation 
of the filter's frequency response. This helps visualize how the filter 
affects different frequencies.
*   **Control Knobs:**
    *   **Cutoff:** Adjusts the frequency at which the filter starts to 
attenuate (reduce) the signal.
    *   **Resonance (Res):**  Adds emphasis or "peak" at the cutoff 
frequency, creating a characteristic "whistle" or "sweep" sound.
    *   **Pan:** Controls the stereo panning of the filtered signal.
    *   **Drive:** Adds distortion or saturation to the signal.
    *   **Fat:** Likely adds harmonic richness or thickness to the sound.
    *   **Mix:** Controls the blend between the original, unfiltered 
signal and the filtered signal.
*   **Toggle Switches:** These are likely for enabling/disabling various 
filter features or stages (A, B, N, S).
*   **Navigation Arrows:** The arrows on the top right suggest you can 
navigate between different filter bands or settings.

**In essence, this is a visual interface for manipulating a digital audio 
filter, allowing a user to shape the tonal characteristics of a sound.**
ggml_metal_graph_compute: command buffer 1 failed with status 5
error: Internal Error (0000000e:Internal Error)

I think this is specific to macOS. I had a bit of a search for the error message, and while it shows up often enough, there's rarely a solution. You could try setting `OLLAMA_DEBUG=1` in the server environment; the increased debugging may show a stack trace or something else that indicates what sort of problem it is, other than an "Internal Error".

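For reference, one way to enable that on the macOS app, following the environment-variable steps in Ollama's FAQ (restart the app afterwards so it picks the variable up):

```shell
# Set OLLAMA_DEBUG for the macOS app, then quit and reopen Ollama.
launchctl setenv OLLAMA_DEBUG 1

# Or quit the app and run the server in the foreground with debug logging enabled:
OLLAMA_DEBUG=1 ollama serve
```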
Author
Owner

@dunklesToast commented on GitHub (May 18, 2025):

Here are the attached logs with debugging info included.
I've also tried multiple pictures (PNGs and JPEGs) and they all spit out random gibberish. Let me know if you need additional information.

Debug Logs
time=2025-05-18T19:28:36.838+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-05-18T19:28:36.840+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-05-18T19:28:36.840+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-05-18T19:28:36.840+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-05-18T19:28:36.840+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-05-18T19:28:36.840+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-05-18T19:28:37.092+02:00 level=DEBUG source=ggml.go:553 msg="compute graph" nodes=2119 splits=2
time=2025-05-18T19:28:37.092+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=Metal buffer_type=Metal size="308.0 MiB"
time=2025-05-18T19:28:37.092+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=BLAS buffer_type=CPU size="7.5 MiB"
time=2025-05-18T19:28:37.092+02:00 level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-05-18T19:28:37.251+02:00 level=INFO source=server.go:630 msg="llama runner started in 17.58 seconds"
time=2025-05-18T19:28:37.251+02:00 level=DEBUG source=sched.go:484 msg="finished setting up" runner.name=registry.ollama.ai/library/gemma3:12b runner.inference=metal runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=2 runner.pid=1722 runner.model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de runner.num_ctx=8192
[GIN] 2025/05/18 - 19:28:37 | 200 | 18.388440209s |       127.0.0.1 | POST     "/api/generate"
time=2025-05-18T19:28:37.252+02:00 level=DEBUG source=sched.go:492 msg="context for request finished"
time=2025-05-18T19:28:37.252+02:00 level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma3:12b runner.inference=metal runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=2 runner.pid=1722 runner.model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de runner.num_ctx=8192 duration=5m0s
time=2025-05-18T19:28:37.252+02:00 level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma3:12b runner.inference=metal runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=2 runner.pid=1722 runner.model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de runner.num_ctx=8192 refCount=0
time=2025-05-18T19:28:48.132+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-18T19:28:48.134+02:00 level=DEBUG source=sched.go:604 msg="evaluating already loaded" model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de
time=2025-05-18T19:28:48.134+02:00 level=DEBUG source=server.go:729 msg="completion request" images=1 prompt=82 format=""
time=2025-05-18T19:28:48.151+02:00 level=DEBUG source=process_text_spm.go:191 msg="adding bos token to prompt" id=2
time=2025-05-18T19:28:48.204+02:00 level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=0 prompt=274 used=0 remaining=274
ggml_metal_graph_compute: command buffer 1 failed with status 5
error: Internal Error (0000000e:Internal Error)
time=2025-05-18T19:29:07.982+02:00 level=DEBUG source=sched.go:423 msg="context for request finished" runner.name=registry.ollama.ai/library/gemma3:12b runner.inference=metal runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=2 runner.pid=1722 runner.model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de runner.num_ctx=8192
time=2025-05-18T19:29:07.982+02:00 level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gemma3:12b runner.inference=metal runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=2 runner.pid=1722 runner.model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de runner.num_ctx=8192 duration=5m0s
time=2025-05-18T19:29:07.982+02:00 level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gemma3:12b runner.inference=metal runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=2 runner.pid=1722 runner.model=/Users/tom/.ollama/models/blobs/sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de runner.num_ctx=8192 refCount=0
[GIN] 2025/05/18 - 19:29:07 | 200 |   19.8958075s |       127.0.0.1 | POST     "/api/chat"
```

EDIT: I've also updated my Mac to macOS 15.5 (24F74) but that did not help.
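One way to narrow down whether the Metal backend is the culprit (not something tried in this thread, just a sketch using the standard `num_gpu` option) is to force CPU-only inference and see whether the gibberish persists:

```shell
# Sketch: disable GPU offload for the session, then retry the image prompt.
ollama run gemma3:12b
>>> /set parameter num_gpu 0
>>> /Users/tom/Desktop/band.png What do you see in this image?
```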


@dunklesToast commented on GitHub (Jun 2, 2025):

This seems to be fixed with Ollama 0.9.0 (running on macOS 15.5 (24F74)).
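For anyone checking whether they are on a fixed build, a quick sketch (the Homebrew step only applies if Ollama was installed that way; the macOS app updates itself):

```shell
# Print the local client and server version:
ollama -v

# If installed via Homebrew (assumption), pull the latest release:
brew upgrade ollama
```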

Prompt Output

```shell
> ollama run gemma3:12b
>>> /Users/tom/Desktop/band.png What do you see in this image?
Added image '/Users/tom/Desktop/band.png'
The image appears to be a close-up of a traditional Tibetan Buddhist mandala. Here's a breakdown of what I see:

*   **Intricate Design:** It's highly detailed with repetitive patterns and geometric shapes, characteristic of mandalas.
*   **Color Palette:** The colors are rich and vibrant, including reds, blues, greens, yellows, and whites, commonly used in Tibetan art.
*   **Central Figure:** There seems to be a central deity or figure within the mandala, surrounded by layers of symbolic representations.
*   **Symbolism:** The various elements likely represent various concepts related to Buddhist cosmology, teachings, and practices.

It's a beautiful and complex work of art!

>>> /Users/tom/Desktop/filter.png Please explain me this screenshot of the DAW Filter 
Added image '/Users/tom/Desktop/filter.png'
Okay, let's break down what's displayed in this screenshot of a DAW (Digital Audio Workstation) filter. It's a complex view, so I'll explain the various components. This 
appears to be a visual representation of a digital filter within a DAW like Ableton Live, Logic Pro X, or similar software.

**Overall Context:**

The image shows a sophisticated visual representation of a digital audio filter. It's not just showing you the basic parameters like cutoff and resonance, but it's providing 
a real-time, interactive display of the filter's behavior. This is a powerful feature for sound design and creative manipulation.

**Key Elements Explained:**

1.  **Filter Type Selector (Top Left):**
    *   You see a selector, likely a dropdown, labeled "Filter Type." This lets you choose the type of filter being applied. Common filter types include:
        *   **Low Pass:** Allows frequencies below the cutoff to pass, attenuating higher frequencies.
        *   **High Pass:** Allows frequencies above the cutoff to pass, attenuating lower frequencies.
        *   **Band Pass:** Allows a narrow range of frequencies to pass, attenuating those outside the range.
        *   **Notch (Band Reject):** Attenuates frequencies within a narrow range.

2.  **Frequency Response Graph (Central/Main Display):**
    *   This is the most critical element. It graphically represents how the filter affects different frequencies.
    *   **X-axis (Horizontal):** Frequency, usually measured in Hertz (Hz).
    *   **Y-axis (Vertical):** Gain or amplitude. A line close to 0 dB means the frequency passes through relatively unchanged. A line moving downward means the frequency is 
attenuated (reduced in volume).
    *   **Response Curve:** The curved line shows how the filter alters the amplitude of each frequency.  The shape of the curve directly reflects the filter's 
characteristics (e.g., steepness of the cut, resonance).
    *   **"Sweep" Animation:** The dynamic movement of the response curve suggests a real-time manipulation or "sweep" of the filter's parameters.

3.  **Filter Parameters (Right Side/Panel):**
    *   **Cutoff Frequency:** Determines the point where frequencies start being attenuated.
    *   **Resonance (Q Factor):**  Controls the "peakedness" of the response curve at the cutoff frequency. Higher resonance creates a more prominent peak, leading to a more 
resonant, emphasized sound.
    *   **Drive/Gain:** Adjusts the overall gain of the filter output.
    *   **Slope/Order:** Sets the steepness of the filter's cutoff.  Higher order filters have a steeper slope and a more aggressive cutoff.
    *  **Envelope Amount:**  Control how the filter responds to incoming signal amplitude.
    * **LFO amount:** control the filter modulation by low frequency oscillator.

4. **Interactive Features:**

 *   **Cursor/Pointer:** You see a pointer, likely indicating a point on the frequency response graph. This lets you interactively adjust filter parameters by dragging the 
cursor.
 *   **Graph Manipulation:**  It's likely you can click and drag on the frequency response graph itself to shape the curve, which directly modifies the filter's behavior.



**In Summary:**

This screenshot depicts a sophisticated DAW filter visualization allowing users to not only adjust basic filter parameters (cutoff, resonance) but also to visually 
understand and interactively shape the filter's frequency response. The real-time graph animation, coupled with interactive controls, provides a very powerful environment 
for sound design and creative audio processing.
```

@rick-github commented on GitHub (Jun 2, 2025):

Thanks for the update.


Reference: github-starred/ollama#68889