[GH-ISSUE #14749] ollama crashed when posting image to qwen3.5:27b model #71595

Closed
opened 2026-05-05 02:13:51 -05:00 by GiteaMirror · 1 comment

Originally created by @sweihub on GitHub (Mar 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14749

What is the issue?

I used the OpenAI /v1/chat/completions API to post an image to the Ollama qwen3.5:27b multimodal model, and the ollama daemon crashed. The request body was 3783642 bytes, as shown by the Content-Length header.
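For reference, the request shape can be sketched as follows. This is a minimal, hypothetical reconstruction of the kind of payload the `vision` tool sends (the real tool is not shown in this issue); the helper only builds the OpenAI-style JSON body with a base64 data URI and does not contact any server.

```python
import base64
import json

def build_vision_payload(image_bytes: bytes, model: str = "qwen3.5:27b") -> str:
    """Build an OpenAI-style chat-completions body carrying one inline image.

    The image is base64-encoded into a data URI, which is how the
    /v1/chat/completions endpoint accepts image content.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    body = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "OCR this page."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
    }
    return json.dumps(body)

# A ~2.8 MB PNG encodes to roughly 3.7 MB of base64, matching the
# Content-Length: 3783642 seen in the logs above.
payload = build_vision_payload(b"\x89PNG\r\n\x1a\n" + b"\x00" * 100)
print(len(payload))
```

Posting such a body (e.g. with `requests.post(f"{base_url}/chat/completions", data=payload, headers={"Content-Type": "application/json"})`) is what returns the 500 shown in the trace below.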

The ollama system logs are attached: ollama-logs.txt (https://github.com/user-attachments/files/25856593/ollama-logs.txt)

My testing output

❯ vision -i test/AP202603091820403506.pdf -o test/qwen.txt -c qwen.json
[2026-03-10 08:53:56.877] [INFO] [vision:185] Input file: "test/AP202603091820403506.pdf"
[2026-03-10 08:53:56.878] [INFO] [vision:186] Output text: "test/qwen.txt"
[2026-03-10 08:53:56.878] [INFO] [vision:191] Using VML URL: http://172.16.8.107:11434/v1
[2026-03-10 08:53:56.878] [INFO] [vision:192] Using model: qwen3.5:27b
[2026-03-10 08:53:56.878] [INFO] [vision:196] Detected file type: Pdf
[2026-03-10 08:53:57.033] [INFO] [vision:172] Converted page 1 to image successfully
[2026-03-10 08:53:57.140] [INFO] [vision:172] Converted page 2 to image successfully
[2026-03-10 08:53:57.231] [INFO] [vision:172] Converted page 3 to image successfully
[2026-03-10 08:53:57.301] [INFO] [vision:172] Converted page 4 to image successfully
[2026-03-10 08:53:57.367] [INFO] [vision:172] Converted page 5 to image successfully
[2026-03-10 08:53:57.403] [INFO] [vision:172] Converted page 6 to image successfully
[2026-03-10 08:53:57.404] [INFO] [vision:206] Successfully converted PDF to 6 images
[2026-03-10 08:53:57.404] [INFO] [vision:222] Processing page 1 with VLM..
[2026-03-10 08:53:57.449] [DEBUG] [ureq::stream:395] connecting to 172.16.8.107:11434 at 172.16.8.107:11434
[2026-03-10 08:53:57.450] [DEBUG] [ureq::stream:202] created stream: Stream(TcpStream { addr: 172.16.8.107:60406, peer: 172.16.8.107:11434, fd: 3 })
[2026-03-10 08:53:57.450] [DEBUG] [ureq::unit:261] sending request POST http://172.16.8.107:11434/v1/chat/completions
[2026-03-10 08:53:57.450] [DEBUG] [ureq::unit:480] writing prelude: POST /v1/chat/completions HTTP/1.1
Host: 172.16.8.107:11434
User-Agent: ureq/2.12.1
Accept: */*
Content-Type: application/json
accept-encoding: gzip
Content-Length: 3783642
[2026-03-10 08:54:26.891] [DEBUG] [ureq::response:396] Body entirely buffered (length: 206)
[2026-03-10 08:54:26.891] [DEBUG] [ureq::pool:130] adding stream to pool: http|172.16.8.107|11434 -> Stream(TcpStream { addr: 172.16.8.107:60406, peer: 172.16.8.107:11434, fd: 3 })
[2026-03-10 08:54:26.891] [DEBUG] [ureq::unit:314] response 500 to POST http://172.16.8.107:11434/v1/chat/completions
[2026-03-10 08:54:26.891] [DEBUG] [ureq::stream:322] dropping stream: Stream(TcpStream { addr: 172.16.8.107:60406, peer: 172.16.8.107:11434, fd: 3 })
Error: Failed to OCR page 1

Caused by:
    0: Failed to connect to VML server at http://172.16.8.107:11434/v1
    1: http://172.16.8.107:11434/v1/chat/completions: status code 500

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.17.4

GiteaMirror added the bug label 2026-05-05 02:13:51 -05:00

@rick-github commented on GitHub (Mar 12, 2026):

Mar 10 08:54:25 beast ollama[184216]:   cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)

#14444

Upgrade to 0.17.5 or newer.
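Since the fix from #14444 landed in 0.17.5, a quick way to tell whether a given installation predates the fixed release is a simple version comparison. This is a minimal sketch assuming plain `major.minor.patch` version strings (as Ollama releases use); the running server's version can be read with `ollama --version` or from the server's `/api/version` endpoint.

```python
def version_tuple(v: str) -> tuple:
    """Parse a "major.minor.patch" string into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))

def needs_upgrade(current: str, fixed: str = "0.17.5") -> bool:
    """True if `current` predates the release containing the fix."""
    return version_tuple(current) < version_tuple(fixed)

print(needs_upgrade("0.17.4"))  # → True: the reporter's version predates the fix
print(needs_upgrade("0.17.5"))  # → False
```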


Reference: github-starred/ollama#71595