[GH-ISSUE #9697] Sending multiple images to Gemma3 on Mac causes an EOF #6331

Closed
opened 2026-04-12 17:50:13 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @BruceMacD on GitHub (Mar 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9697

Originally assigned to: @jessegross on GitHub.

What is the issue?

Ollama crashes with a GGML assertion failure when running inference with multiple images on an Apple Silicon Mac using Metal GPU acceleration.

The issue appears to be a type mismatch in the GGML library when using the Metal framework: the code expects a tensor of type F32 (32-bit floating point) but receives a different type, triggering an assertion failure.

Relevant log output

```shell
ggml-metal.m:3253: GGML_ASSERT(src1->type == GGML_TYPE_F32) failed
SIGABRT: abort
PC=0x18cedea60 m=61 sigcode=0
```

```
github.com/ollama/ollama/ml/backend/ggml.Context.Compute({0x1400037c000, 0x600001cf4cc0, 0x141260e80, 0x0, 0x2000}, {0x1408d5131c0, 0x1, 0x141260e80?})
        /Users/runner/work/ollama/ollama/ml/backend/ggml/ggml.go:497 +0x9c fp=0x14000599b60 sp=0x14000599ad0 pc=0x10485919c
```

Steps to Reproduce

  1. Run Ollama with Metal GPU acceleration enabled (default on macOS)
  2. `ollama run gemma3:27b "What is in these two images? ./photo.jpg ./other.jpg"`
  3. Process crashes with the above error
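The crash can also be reached through the HTTP API. Below is a minimal sketch of building an Ollama `/api/chat` request body that attaches two base64-encoded images to a single user message (the `build_chat_payload` helper and the placeholder image bytes are illustrative, not part of Ollama):

```python
import base64
import json

def build_chat_payload(model, prompt, image_blobs):
    """Build an Ollama /api/chat request body attaching several images
    (base64-encoded strings, as the API expects) to one user message."""
    images = [base64.b64encode(blob).decode("ascii") for blob in image_blobs]
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt, "images": images}],
        "stream": False,
    }

# Build (but do not send) a request shaped like the failing one above;
# the byte strings stand in for the real JPEG contents.
payload = build_chat_payload(
    "gemma3:27b",
    "What is in these two images?",
    [b"<photo.jpg bytes>", b"<other.jpg bytes>"],
)
print(json.dumps(payload)[:40])
```

POSTing a body like this to a local Ollama server's `/api/chat` endpoint on an affected build should reproduce the abort; on a fixed build it should return a normal response.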

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.6.0

GiteaMirror added the bug label 2026-04-12 17:50:13 -05:00

@powerpiggy commented on GitHub (Mar 13, 2025):

I have the same problem.


@noobmaster29 commented on GitHub (Mar 13, 2025):

I'm running on Windows and having the same issue (fresh install today). I thought it might be a memory issue, but I get the same error with both 27b and 4b. It does not happen with only one image attached.

With 27b, attaching one photo and then a second photo in a separate request also crashes with the same error. However, that does not seem to be the case with 4b.

Edit: I ran the same request on OpenRouter and it ran fine.

![Image](https://github.com/user-attachments/assets/200006af-ce98-4275-ac59-793f8952c596)

![Image](https://github.com/user-attachments/assets/442e8cb9-6b59-43f8-ab49-fd0852485394)


@rdzotz commented on GitHub (Mar 13, 2025):

I get the following on Ubuntu:

```
ollama run gemma3:27b
>>> Please describe the following images: ..._after.jpg _before.jpg
Added image '_after.jpg'
Added image '_before.jpg'
Error: vision model only supports a single image per message
```
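For models that enforce a single image per message, one client-side workaround is to spread the images across several user messages in the same request. A sketch, assuming the `/api/chat` message shape; the `one_image_per_message` helper is hypothetical, not an Ollama API:

```python
def one_image_per_message(prompt, images_b64):
    """Split N base64-encoded images into N user messages so each message
    carries exactly one image; the text prompt rides on the last one."""
    if not images_b64:
        return [{"role": "user", "content": prompt}]
    messages = [
        {"role": "user", "content": "", "images": [img]}
        for img in images_b64[:-1]
    ]
    messages.append(
        {"role": "user", "content": prompt, "images": [images_b64[-1]]}
    )
    return messages

msgs = one_image_per_message(
    "Please describe the following images.", ["AAA=", "BBB="]
)
print(len(msgs))  # → 2
```

Whether the model actually attends to images from earlier messages varies by model, so this is a stopgap rather than a substitute for true multi-image support.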

@fighter3005 commented on GitHub (Mar 13, 2025):

Same thing on Ubuntu (Docker).

![Image](https://github.com/user-attachments/assets/c72e231a-e1af-4506-b5b3-f4656357b633)

![Image](https://github.com/user-attachments/assets/fed29f71-9af9-4934-ad51-63377010f9ad)

It behaves like Llama3.2-vision. However, both should support multiple images, if I am not mistaken.

I read [here](https://github.com/ollama/ollama/issues/7477) that this is how it should be for Llama3.2-vision. I still believe this is wrong: vLLM also lists both models with multiple-image support. This has been, and still is, a major inconvenience with Ollama.

Disclaimer: I have not looked into either codebase and might be terribly wrong here. :)


@rdzotz commented on GitHub (Mar 13, 2025):

> Same thing on Ubuntu (Docker).
>
> ![Image](https://github.com/user-attachments/assets/c72e231a-e1af-4506-b5b3-f4656357b633)
>
> ![Image](https://github.com/user-attachments/assets/fed29f71-9af9-4934-ad51-63377010f9ad)
>
> It behaves like Llama3.2-vision. However, both should support multiple images, if I am not mistaken.
>
> I read [here](https://github.com/ollama/ollama/issues/7477) that this is how it should be for Llama3.2-vision. I still believe this is wrong. vLLM also lists both models with multiple image support. This has been and still is a major inconvenience with Ollama.
>
> Disclaimer: I have not looked into either codebase and might be terribly wrong here. :)

llama3.2-vision works for me with multiple image input.


@fighter3005 commented on GitHub (Mar 13, 2025):

> > Same thing on Ubuntu (Docker).
> >
> > It behaves like Llama3.2-vision. However, both should support multiple images, if I am not mistaken.
> >
> > I read [here](https://github.com/ollama/ollama/issues/7477) that this is how it should be for Llama3.2-vision. I still believe this is wrong. vLLM also lists both models with multiple image support. This has been and still is a major inconvenience with Ollama.
> >
> > Disclaimer: I have not looked into either codebase and might be terribly wrong here. :)
>
> llama3.2-vision works for me with multiple image input.

Multiple images in one prompt? What quantization do you use, and what settings do you use with Ollama?

MiniCPM-V-2.6 works for me with multiple image inputs through Open WebUI, but llama3.2-vision does not. I will test through Ollama directly...

Update: llama3.2-vision does not support multiple images in one prompt, so I cannot say: "compare the images ./image1.png ./image2.png"


@noobmaster29 commented on GitHub (Mar 14, 2025):

Very cool, do we know when the fix will be committed? I'm happy to test and report back.


@BruceMacD commented on GitHub (Mar 14, 2025):

Hey everyone, Jesse has a fix in review for this. We have a pre-release built right now, so it won't make it into that one, but it will be in the next release!


@noobmaster29 commented on GitHub (Mar 19, 2025):

Just tested this after the latest update and it works perfectly. Thank you very much!


Reference: github-starred/ollama#6331