[GH-ISSUE #6972] Feature Request: LLaMA 3.2 Vision Support #30171

Closed
opened 2026-04-22 09:40:54 -05:00 by GiteaMirror · 33 comments

Originally created by @tuanlda78202 on GitHub (Sep 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6972

It would be nice if `ollama` could support LLaMA 3.2 Vision.

https://huggingface.co/meta-llama/Llama-3.2-11B-Vision

![image](https://github.com/user-attachments/assets/322997ba-6cea-4a93-a2de-35650c4f2381)

GiteaMirror added the model, feature request labels 2026-04-22 09:40:54 -05:00

@CodingNowPls commented on GitHub (Sep 26, 2024):

It's supported in ollama v0.3.12


@elimisteve commented on GitHub (Sep 26, 2024):

@CodingNowPls What do you mean? At https://ollama.com/library/llama3.2 I don't see 11B (which supports vision), only 1B and 3B.


@elimisteve commented on GitHub (Sep 26, 2024):

@tuanlda78202 I just found https://ollama.com/blog/llama3.2, which says 11B and 90B are coming soon to Ollama; they're working on it 😄.

@ddpasa commented on GitHub (Sep 26, 2024):

It looks like there are 4 versions:

- 11b: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision
- 11b-instruct: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct
- 90b: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision
- 90b-instruct: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision-Instruct

@CodingNowPls commented on GitHub (Sep 26, 2024):

> @tuanlda78202 I just found https://ollama.com/blog/llama3.2, which says 11B and 90B are coming soon to Ollama; they're working on it 😄.

Sorry, I misread it.


@paulblakely commented on GitHub (Sep 26, 2024):

Work is being done on this, see https://github.com/ollama/ollama/pull/6963


@briansan commented on GitHub (Sep 27, 2024):

https://github.com/ollama/ollama/pull/6965


@1WorldCapture commented on GitHub (Sep 28, 2024):

Leaving a comment to watch this.

Appreciate the team's hard work on this.


@tungedng2710 commented on GitHub (Oct 2, 2024):

I'm looking for this


@oguzhandoganoglu commented on GitHub (Oct 2, 2024):

I'm waiting


@amytimed commented on GitHub (Oct 2, 2024):

> Leaving a comment to watch this.

You can watch without leaving a comment by clicking the Subscribe button (it says Unsubscribe for me since I clicked it):

![image](https://github.com/user-attachments/assets/465a8ce5-ed2a-4fde-a7db-58583d96a649)

This way, it won't send a notification to the other subscribed people.


@al-swaiti commented on GitHub (Oct 3, 2024):

![image](https://github.com/user-attachments/assets/7f47d360-e7b8-421c-902a-bb8c86f61f85)

Is it that difficult? ... I thought it was the same as LLaVA.


@ProjectMoon commented on GitHub (Oct 3, 2024):

There is actually quite a lot that goes into loading and running AI models. You can't just drop a GGUF file in and assume it'll run (although it's a reasonable assumption to make if you don't otherwise know how it works). Different models handle and store their data in different ways, so the thing running them (Ollama and llama.cpp in this case) has to be modified to work with each one.


@al-swaiti commented on GitHub (Oct 3, 2024):

I didn't convert it to GGUF. I tried but failed; I think I need to check the latest commit of llama.cpp. In any case, I tried to convert it for ollama directly, without converting it to GGUF first!
Also, is there spam here? These are my first dislikes 👎 😳. I think they misunderstood 😅, or they have a leader.


@JacopKane commented on GitHub (Oct 21, 2024):

Looking forward to this


@pdevine commented on GitHub (Oct 23, 2024):

For people who want to try this:

* Download the [pre-release of Ollama 0.4.0](https://github.com/ollama/ollama/releases/tag/v0.4.0-rc4)
* Pull the vision model with `ollama pull x/llama3.2-vision:11b` (you can find the other quantizations at https://ollama.com/x/llama3.2-vision)
* Please report any issues here

I haven't uploaded the 90b model yet, or quantizations other than `q4_K_M`, `q8_0`, and `fp16`. These are not the final weights, so they may (probably will) change again if we need to tweak some stuff. The model will eventually move into the main library (i.e. out of `x/`), which may or may not require re-pulling the final weights.

Can't wait for people to try this out!
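For anyone scripting the steps above rather than using the CLI, here is a minimal Go sketch of the same pull through Ollama's REST API (`POST /api/pull`, the documented equivalent of `ollama pull`). The localhost address assumes a default local install, and error handling is kept to a bare minimum; this is an illustration, not an official client.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Equivalent to `ollama pull x/llama3.2-vision:11b`.
	// "stream": false collapses the progress events into a single response.
	body := bytes.NewBufferString(`{"name": "x/llama3.2-vision:11b", "stream": false}`)

	resp, err := http.Post("http://127.0.0.1:11434/api/pull", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // expect {"status":"success"} once all layers are present
}
```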


@joostwestra commented on GitHub (Oct 24, 2024):

I have managed to build the pre-release and load the x/llama3.2-vision:11b model.
Please note that the absolute path to the image needs to be used to load the image.
It then appears to work correctly after loading for a while. (MacBook Pro M1, 16 GB)

Thanks for implementing this feature!


@ProjectMoon commented on GitHub (Oct 25, 2024):

So I managed to get a null pointer which locked up Ollama, with the model stuck in a stopping state until I manually restarted it. I sent an image request (via OpenWebUI) and got a response. Then I sent another message and got back a 500 error in the web UI. The stack trace in the Ollama logs is:

```
time=2024-10-25T09:51:46.410+02:00 level=INFO source=.:0 msg="http: panic serving 127.0.0.1:50368: runtime error: invalid memory address or nil pointer dereference
goroutine 29 [running]:
net/http.(*conn).serve.func1()
	net/http/server.go:1903 +0xbe
panic({0x55b0070bf7e0?, 0x55b00729ad70?})
	runtime/panic.go:770 +0x132
github.com/ollama/ollama/llama.NewLlavaImageEmbed.func1(0xc0000cb458?, 0x55b006ba843e?, {0xc0002c2000, 0x2145b0, 0x30?})
	github.com/ollama/ollama/llama/llama.go:474 +0x26
github.com/ollama/ollama/llama.NewLlavaImageEmbed(0xc000014390, 0xc0002c2000?, {0xc0002c2000?, 0x2145b1?, 0x55b0070d1480?})
	github.com/ollama/ollama/llama/llama.go:474 +0x30
main.(*Server).inputs(0xc000126120, {0xc000254600, 0x5ac}, {0xc0000a0030, 0x1, 0x6b6cc25?})
	github.com/ollama/ollama/llama/runner/runner.go:199 +0x325
main.(*Server).NewSequence(0xc000126120, {0xc000254600, 0x5ac}, {0xc0000a0030, 0x1, 0x1}, {0x186a0, {0x0, 0x0, 0x0}, ...})
	github.com/ollama/ollama/llama/runner/runner.go:100 +0xb2
main.(*Server).completion(0xc000126120, {0x55b00710e870, 0xc0001baa80}, 0xc0001a9560)
	github.com/ollama/ollama/llama/runner/runner.go:628 +0x52a
net/http.HandlerFunc.ServeHTTP(0xc00010ec30?, {0x55b00710e870?, 0xc0001baa80?}, 0x10?)
```

This is on rc5.


@oderwat commented on GitHub (Oct 25, 2024):

Edit: Ollama 0.4.0-rc5 using 'x/llama3.2-vision' works with the **chat** API but not with the **generate** API.

Old:

I can run Ollama 0.4.0-rc5 with 'x/llama3.2-vision' on my WSL2 and query it from another machine, using the CLI to type the prompt and include the full image path in the typed text. The result is excellent, btw.

The problems start when I want to automate it. Calling the CLI ollama with the exact same prompt as a parameter fails and returns either 'Error: POST predict: Post "http://127.0.0.1:33039/completion": EOF' or '!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'.

Calling it through the API, sending the image as base64 (Go, based on the curl example with llava), it fails in the same way:

```
unexpected status code: 500, body: {"error":"POST predict: Post \"http://127.0.0.1:44501/completion\": EOF"}
```

This is reproducible. It always works when "typing" the prompt to the CLI ollama 0.4.0-rc5 but not when using the API. When changing the model to llava everything works as expected.

The code (I use [GoNB](https://github.com/janpfeifer/gonb) for such experiments):

```go
func FileToBase64(filePath string) (string, error) {
	// Read the file
	content, err := os.ReadFile(filePath)
	if err != nil {
		return "", fmt.Errorf("failed to read file: %w", err)
	}

	// Encode to base64
	encodedContent := base64.StdEncoding.EncodeToString(content)
	return encodedContent, nil
}

type OllamaRequest struct {
	Model   string   `json:"model"`
	Prompt  string   `json:"prompt"`
	Stream  bool     `json:"stream"`
	Images  []string `json:"images"`
}

type OllamaResponse struct {
	Response string `json:"response"`
}

func GenerateWithImage(prompt, imagePath string) (*OllamaResponse, error) {
	// First, convert the image to base64
	imageData, err := os.ReadFile(imagePath)
	if err != nil {
		return nil, fmt.Errorf("failed to read image: %w", err)
	}

	base64Image := base64.StdEncoding.EncodeToString(imageData)

	// Prepare the request body
	reqBody := OllamaRequest{
		Model:   "x/llama3.2-vision",
		Prompt:  prompt,
		Stream:  false,
		Images:  []string{base64Image},
	}

	// Marshal the request body to JSON
	jsonData, err := json.Marshal(reqBody)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal JSON: %w", err)
	}

	// Create a new request
	req, err := http.NewRequest(
		"POST",
		"http://mywindows:11434/api/generate",
		bytes.NewBuffer(jsonData),
	)
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}

	// Set content-type header
	req.Header.Set("Content-Type", "application/json")

	// Make the request
	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return nil, fmt.Errorf("failed to make request: %w", err)
	}
	defer resp.Body.Close()

	// Read the response body
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("failed to read response: %w", err)
	}

	// Check status code
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status code: %d, body: %s", resp.StatusCode, string(body))
	}

	// Parse the response
	var response OllamaResponse
	if err := json.Unmarshal(body, &response); err != nil {
		return nil, fmt.Errorf("failed to parse response: %w", err)
	}

	return &response, nil
}

%%
resp, err := GenerateWithImage(`Please describe the content and style of this image in detail but use only one sentence:`, 
    ImagePath)
if err != nil { panic(err) }
fmt.Printf("%q", resp.Response)
```

Here, the output from the server:

```
time=2024-10-25T13:52:07.919+02:00 level=INFO source=llama-server.go:573 msg="llama runner started in 1.76 seconds"
SIGSEGV: segmentation violation
PC=0x7f97e99008b0 m=9 sigcode=2 addr=0x7f97b4241000
signal arrived during cgo execution

goroutine 50 gp=0xc000220000 m=9 mp=0xc000590008 [syscall]:
runtime.cgocall(0x55c5a5d7e690, 0xc00027f368)
        runtime/cgocall.go:157 +0x4b fp=0xc00027f340 sp=0xc00027f308 pc=0x55c5a5b01b8b
github.com/ollama/ollama/llama._Cfunc_mllama_image_encode(0x7f97c03b2b30, 0xc, 0x7f97b4001220, 0xc000600000)
        _cgo_gotypes.go:906 +0x4c fp=0xc00027f368 sp=0xc00027f340 pc=0x55c5a5c007ac
github.com/ollama/ollama/llama.NewMllamaImageEmbed.func5(0x7f97b4001220?, 0x0?, 0x7f97b4001220, {0xc000600000, 0xc000230900?, 0xc00027f458?})
        github.com/ollama/ollama/llama/llama.go:504 +0xa9 fp=0xc00027f3c0 sp=0xc00027f368 pc=0x55c5a5c04149
github.com/ollama/ollama/llama.NewMllamaImageEmbed(0xc0001282a0, 0xc00011cf60, {0xc000500000, 0x8f27a, 0x8f27a}, 0x0)
        github.com/ollama/ollama/llama/llama.go:504 +0x179 fp=0xc00027f468 sp=0xc00027f3c0 pc=0x55c5a5c03f39
main.(*Server).inputs(0xc000146120, {0xc00059c100, 0xfb}, {0xc00020e990, 0x1, 0xa5b60085?})
        github.com/ollama/ollama/llama/runner/runner.go:220 +0x4df fp=0xc00027f600 sp=0xc00027f468 pc=0x55c5a5d77e3f
main.(*Server).NewSequence(0xc000146120, {0xc00059c100, 0xfb}, {0xc00020e990, 0x1, 0x1}, {0x5000, {0x0, 0x0, 0x0}, ...})
        github.com/ollama/ollama/llama/runner/runner.go:100 +0xb2 fp=0xc00027f7b8 sp=0xc00027f600 pc=0x55c5a5d771b2
main.(*Server).completion(0xc000146120, {0x55c5a609e390, 0xc00023a2a0}, 0xc000228900)
        github.com/ollama/ollama/llama/runner/runner.go:628 +0x52a fp=0xc00027fab8 sp=0xc00027f7b8 pc=0x55c5a5d7a62a
main.(*Server).completion-fm({0x55c5a609e390?, 0xc00023a2a0?}, 0x55c5a5d5632d?)
        <autogenerated>:1 +0x36 fp=0xc00027fae8 sp=0xc00027fab8 pc=0x55c5a5d7d936
net/http.HandlerFunc.ServeHTTP(0xc00011ef70?, {0x55c5a609e390?, 0xc00023a2a0?}, 0x10?)
        net/http/server.go:2171 +0x29 fp=0xc00027fb10 sp=0xc00027fae8 pc=0x55c5a5d4edc9
net/http.(*ServeMux).ServeHTTP(0x55c5a5b0b745?, {0x55c5a609e390, 0xc00023a2a0}, 0xc000228900)
        net/http/server.go:2688 +0x1ad fp=0xc00027fb60 sp=0xc00027fb10 pc=0x55c5a5d50c4d
net/http.serverHandler.ServeHTTP({0x55c5a609d6e0?}, {0x55c5a609e390?, 0xc00023a2a0?}, 0x6?)
        net/http/server.go:3142 +0x8e fp=0xc00027fb90 sp=0xc00027fb60 pc=0x55c5a5d51c6e
net/http.(*conn).serve(0xc00021a000, {0x55c5a609e7b0, 0xc00011cd80})
        net/http/server.go:2044 +0x5e8 fp=0xc00027ffb8 sp=0xc00027fb90 pc=0x55c5a5d4da08
net/http.(*Server).Serve.gowrap3()
        net/http/server.go:3290 +0x28 fp=0xc00027ffe0 sp=0xc00027ffb8 pc=0x55c5a5d523e8
runtime.goexit({})
        runtime/asm_amd64.s:1695 +0x1 fp=0xc00027ffe8 sp=0xc00027ffe0 pc=0x55c5a5b6a5a1
created by net/http.(*Server).Serve in goroutine 1
        net/http/server.go:3290 +0x4b4
```

@oderwat commented on GitHub (Oct 25, 2024):

I rewrote my experimental script to use the chat interface and made a simple image captioning app (for training SD 3.5 Large LoRAs): [capollama](https://github.com/oderwat/capollama)

Notice: While playing with that, I had occasions when the description of one image "bled" over into another image. I am not sure what caused it, but suddenly my bar graphs were horses on the moon, or a logo was being described as something else.
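For anyone else making the same switch, here is a minimal sketch of the chat-flavoured request compared to `GenerateWithImage` above: the image travels in a message's `images` array and the reply comes back in `message.content`. The endpoint and field names follow Ollama's documented `/api/chat` JSON; the model tag and image path are placeholders from this thread.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// Message mirrors the chat API's message object; images are base64 strings.
type Message struct {
	Role    string   `json:"role"`
	Content string   `json:"content"`
	Images  []string `json:"images,omitempty"`
}

type ChatRequest struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
	Stream   bool      `json:"stream"`
}

type ChatResponse struct {
	Message Message `json:"message"`
}

// ChatWithImage sends one non-streaming chat request with an attached image
// and returns the assistant's reply text.
func ChatWithImage(prompt, imagePath string) (string, error) {
	img, err := os.ReadFile(imagePath)
	if err != nil {
		return "", fmt.Errorf("failed to read image: %w", err)
	}

	reqBody, err := json.Marshal(ChatRequest{
		Model: "x/llama3.2-vision", // placeholder model tag
		Messages: []Message{{
			Role:    "user",
			Content: prompt,
			Images:  []string{base64.StdEncoding.EncodeToString(img)},
		}},
		Stream: false,
	})
	if err != nil {
		return "", err
	}

	resp, err := http.Post("http://127.0.0.1:11434/api/chat", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out ChatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", fmt.Errorf("failed to parse response: %w", err)
	}
	return out.Message.Content, nil
}

func main() {
	text, err := ChatWithImage("Describe this image in one sentence.", "/tmp/example.png") // placeholder path
	if err != nil {
		panic(err)
	}
	fmt.Println(text)
}
```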


@jessegross commented on GitHub (Oct 25, 2024):

@oderwat Image bleed is a known issue with the implementation of mllama that we are working on; it's mentioned in the release notes.

I'll see if I can reproduce the segfault.


@jessegross commented on GitHub (Oct 25, 2024):

@ProjectMoon Can you post the full log? If it's reproducible, can you also run it with `OLLAMA_DEBUG=1`? I think your issue is different from @oderwat's.


@jessegross commented on GitHub (Oct 25, 2024):

@oderwat Looks like the `/generate` codepath has a missing piece. I broke this out as a separate bug: #7362

Thanks!


@oderwat commented on GitHub (Oct 25, 2024):

@ProjectMoon / @jessegross I am not sure, but I think I had a similar problem when using the 0.3.x client version on one system with the 0.4.0 server running on another system.


@ProjectMoon commented on GitHub (Oct 25, 2024):

> @ProjectMoon Can you post the full log? If it's reproducible, can you also run it with `OLLAMA_DEBUG=1`? I think your issue is different from @oderwat's.

I was able to reproduce. It seems to be related to rapidly switching models. My setup is a bit unusual:

  1. I have a 16 GB AMD card, and a 4 GB Nvidia card. All larger models (including Llama 3.2 vision) are going on the AMD card.
  2. In this case, I have OpenWebUI set up to dynamically use a vision model (llama 3.2 in this case) when an image is uploaded.
  3. Otherwise, I'm normally using Qwen2.5.

OpenWebUI is swapping between Qwen and the vision model during the generation for various reasons (to load tool schema, then the filter function re-routes to vision, then it needs to call the original model for generating title).

The reproduction steps are:

  1. Have the Qwen 2.5 model selected in OpenWebUI.
  2. Upload an image and ask something like "What is this?"
  3. Send another message like "Tell me more about it" after the first response.
  4. Null pointer

This does NOT happen when using the Llama model directly, so it must be related to swapping models in and out quickly. I can upload the debug logs, but of course I will have to spend some time sanitizing them.

Edit: this happens also with other vision models (MiniCPM-V in my second test). So it's not something specific to Llama 3.2.

Edit 2: Does not happen on 0.3.14.
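As a sketch only: the repro above could be reduced to something like the loop below, which alternates requests between a text model and the vision model so the server has to swap runners between every call. The model names, image path, and loop count are placeholders taken from this thread, not a confirmed minimal test case.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// chat sends one non-streaming /api/chat request and reports the HTTP status.
func chat(model, prompt string, images []string) error {
	msg := map[string]any{"role": "user", "content": prompt}
	if len(images) > 0 {
		msg["images"] = images
	}
	payload, err := json.Marshal(map[string]any{
		"model":    model,
		"messages": []any{msg},
		"stream":   false,
	})
	if err != nil {
		return err
	}

	resp, err := http.Post("http://127.0.0.1:11434/api/chat", "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s: unexpected status %d", model, resp.StatusCode)
	}
	return nil
}

func main() {
	img, err := os.ReadFile("/tmp/example.png") // placeholder image
	if err != nil {
		panic(err)
	}
	b64 := base64.StdEncoding.EncodeToString(img)

	// Force the server to swap runners on every iteration:
	// text model first, then the vision model with an image, and repeat.
	for i := 0; i < 5; i++ {
		if err := chat("qwen2.5", "Say hi.", nil); err != nil {
			fmt.Println("text step:", err)
		}
		if err := chat("x/llama3.2-vision", "What is this?", []string{b64}); err != nil {
			fmt.Println("vision step:", err)
		}
	}
}
```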


@jessegross commented on GitHub (Oct 25, 2024):

@ProjectMoon Thanks for the additional info. I will take a look but if you are able to upload the full debug log that would be the most helpful.


@ProjectMoon commented on GitHub (Oct 27, 2024):

[ollama-sanitized.log.gz](https://github.com/user-attachments/files/17533690/ollama-sanitized.log.gz)

Here is the log with system prompts removed. There might be a lot of extra stuff in there, but you should be able to see the flow from Qwen 2.5 to Llama 3.2 Vision to Qwen 2.5, etc., and then the eventual null pointers. Let me know if you need anything else.

It sounds like a concurrency issue separate from Llama 3.2 vision impl itself, to be honest.


@jessegross commented on GitHub (Oct 30, 2024):

Thanks for the logs. It's a bit hard to follow but I just sent out a patch that makes a lot of the code in that area more robust. I think it is likely to help or at least catch the problem in a way that is easier to pinpoint. If you are able to try it out, that would be helpful to know if it solves the problem:
https://github.com/ollama/ollama/pull/7414


@ProjectMoon commented on GitHub (Oct 30, 2024):

> Thanks for the logs. It's a bit hard to follow but I just sent out a patch that makes a lot of the code in that area more robust. I think it is likely to help or at least catch the problem in a way that is easier to pinpoint. If you are able to try it out, that would be helpful to know if it solves the problem: #7414

I'll try it in the next RC and report back. It's pretty easy for me to reproduce on my setup.


@mrober01 commented on GitHub (Oct 31, 2024):

The 11b model works well, thanks. Wondering if the 90b model will be uploaded too, as that would be fun to try out. I did download the unsloth 4-bit bnb safetensors 90b model and tried the import in ollama, but that errored out with "unsupported type". Not sure how this is usually done; maybe it can't be done with the bnb type? Keep in mind I am a novice :).


@thatjpk commented on GitHub (Oct 31, 2024):

I'm testing rc6 with the llama3.2-vision models, and I'm running into a problem when using CUDA. I started to write a comment here, but decided to file a separate issue to keep from cluttering this thread.

Thanks for the work on this so far! 🍻

https://github.com/ollama/ollama/issues/7440


@ProjectMoon commented on GitHub (Oct 31, 2024):

> > Thanks for the logs. It's a bit hard to follow but I just sent out a patch that makes a lot of the code in that area more robust. I think it is likely to help or at least catch the problem in a way that is easier to pinpoint. If you are able to try it out, that would be helpful to know if it solves the problem: #7414
>
> I'll try it in the next RC and report back. It's pretty easy for me to reproduce on my setup.

OK so it looks like the crash is no longer happening. Model swapping seems to be handled correctly!


@jessegross commented on GitHub (Oct 31, 2024):

Glad to hear that it fixed the issue - thanks for testing!


Reference: github-starred/ollama#30171