[GH-ISSUE #13113] The qwen3-vl model reports an error after uploading a small image #8682

Closed
opened 2026-04-12 21:27:06 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @adamyang1980 on GitHub (Nov 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13113

What is the issue?

When running the qwen3-vl:30b-instruct model with Ollama 0.12.11, the client receives a 500 error if a very small image is included in the request. The debug logs suggest the problem is in Ollama's small-image handling for qwen3-vl: the runner panics in `SmartResize` (qwen3vl/imageprocessor.go) with "height:30 or width:305 must be larger than factor:32". The details are shown in the log output below.
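For reference, here is a minimal Go sketch of the kind of constraint the panic message points at: a Qwen-VL style "smart resize" rounds each side to a multiple of a patch factor, so an image with a side smaller than that factor cannot be represented. The function name, signature, and rounding rule below are illustrative only, not the actual ollama implementation.

```go
package main

import (
	"fmt"
	"math"
)

// smartResizeSketch illustrates the check that appears to fail in the logs.
// If either side is smaller than factor, the image cannot be mapped onto the
// model's patch grid; in the reported version this condition surfaces as a
// panic (and therefore an HTTP 500) instead of a normal error.
func smartResizeSketch(height, width, factor int) (int, int, error) {
	if height < factor || width < factor {
		return 0, 0, fmt.Errorf("height:%d or width:%d must be larger than factor:%d", height, width, factor)
	}
	// Round each side to the nearest multiple of factor.
	h := int(math.Round(float64(height)/float64(factor))) * factor
	w := int(math.Round(float64(width)/float64(factor))) * factor
	return h, w, nil
}

func main() {
	// The failing image from the logs: 305 px wide, 30 px tall, factor 32.
	if _, _, err := smartResizeSketch(30, 305, 32); err != nil {
		fmt.Println("rejected:", err)
	}
	h, w, _ := smartResizeSketch(480, 640, 32)
	fmt.Println(h, w) // 480 640
}
```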

Relevant log output

time=2025-11-17T02:11:23.628Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097
time=2025-11-17T02:11:23.628Z level=DEBUG source=server.go:1465 msg="completion request" images=7 prompt=235 format=""
time=2025-11-17T02:11:23.924Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:45448: height:30 or width:305 must be larger than factor:32\ngoroutine 23 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x569ae4556780?, 0xc000334280?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).SmartResize(0x569ae471e450?, 0xc0014d4240?, 0x131)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:54 +0x3e7\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).ProcessImage(0xc001ff4110, {0x569ae472a850, 0xc0014d4200}, {0x569ae471e450?, 0xc0014d4240?})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:92 +0x9c\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc001ff40d0, {0x569ae472a850, 0xc0014d4200}, {0xc0000fe210, 0xac, 0xae})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:37 +0x113\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).inputs(0xc000260f00, {0xc000c36000, 0xeb}, {0xc000050700, 0x7, 0xc0013fc808?})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:273 +0x50a\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).NewSequence(0xc000260f00, {0xc000c36000, 0xeb}, {0xc000050700, 0x7, 0x8}, {0x1000, {0x0, 0x0, 0x0}, ...})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:134 +0x8f\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0xc000260f00, {0x569ae471daa8, 0xc0005bb6c0}, 0xc00015e280)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:893 +0x5d2\nnet/http.HandlerFunc.ServeHTTP(0xc000142600?, {0x569ae471daa8?, 0xc0005bb6c0?}, 0xc000e01b60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x569ae323d0c5?, {0x569ae471daa8, 0xc0005bb6c0}, 0xc00015e280)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x569ae471a0b0?}, {0x569ae471daa8?, 0xc0005bb6c0?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0001783f0, {0x569ae471fe68, 0xc000176a50})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
time=2025-11-17T02:11:23.924Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:36599/completion\": EOF"
[GIN] 2025/11/17 - 02:11:23 | 500 |  489.008419ms |       10.1.3.13 | POST     "/api/generate"
time=2025-11-17T02:11:23.925Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000
time=2025-11-17T02:11:23.925Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 duration=5m0s
time=2025-11-17T02:11:23.925Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 refCount=0
time=2025-11-17T02:11:29.738Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097
time=2025-11-17T02:11:29.739Z level=DEBUG source=server.go:1465 msg="completion request" images=7 prompt=235 format=""
time=2025-11-17T02:11:30.042Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:39178: height:30 or width:305 must be larger than factor:32\ngoroutine 134781 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x569ae4556780?, 0xc000334280?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).SmartResize(0x569ae471e450?, 0xc00030e3c0?, 0x131)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:54 +0x3e7\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).ProcessImage(0xc001ff4110, {0x569ae472a850, 0xc00030e380}, {0x569ae471e450?, 0xc00030e3c0?})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:92 +0x9c\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc001ff40d0, {0x569ae472a850, 0xc00030e380}, {0xc0000fe2c0, 0xac, 0xae})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:37 +0x113\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).inputs(0xc000260f00, {0xc000c360f0, 0xeb}, {0xc000050800, 0x7, 0xc000346008?})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:273 +0x50a\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).NewSequence(0xc000260f00, {0xc000c360f0, 0xeb}, {0xc000050800, 0x7, 0x8}, {0x1000, {0x0, 0x0, 0x0}, ...})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:134 +0x8f\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0xc000260f00, {0x569ae471daa8, 0xc0014e61c0}, 0xc000128280)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:893 +0x5d2\nnet/http.HandlerFunc.ServeHTTP(0xc000142600?, {0x569ae471daa8?, 0xc0014e61c0?}, 0xc0004d3b60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x569ae323d0c5?, {0x569ae471daa8, 0xc0014e61c0}, 0xc000128280)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x569ae471a0b0?}, {0x569ae471daa8?, 0xc0014e61c0?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0001781b0, {0x569ae471fe68, 0xc000176a50})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
time=2025-11-17T02:11:30.042Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:36599/completion\": EOF"
[GIN] 2025/11/17 - 02:11:30 | 500 |     481.676ms |       10.1.3.13 | POST     "/api/generate"
time=2025-11-17T02:11:30.042Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000
time=2025-11-17T02:11:30.042Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 duration=5m0s
time=2025-11-17T02:11:30.042Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 refCount=0
time=2025-11-17T02:11:35.873Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097
time=2025-11-17T02:11:35.874Z level=DEBUG source=server.go:1465 msg="completion request" images=7 prompt=235 format=""
time=2025-11-17T02:11:36.209Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:51322: height:30 or width:305 must be larger than factor:32\ngoroutine 136640 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x569ae4556780?, 0xc000334280?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).SmartResize(0x569ae471e450?, 0xc0014d4380?, 0x131)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:54 +0x3e7\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).ProcessImage(0xc001ff4110, {0x569ae472a850, 0xc0014d4340}, {0x569ae471e450?, 0xc0014d4380?})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:92 +0x9c\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc001ff40d0, {0x569ae472a850, 0xc0014d4340}, {0xc0000fe210, 0xac, 0xae})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:37 +0x113\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).inputs(0xc000260f00, {0xc000c36000, 0xeb}, {0xc000050700, 0x7, 0xc000601808?})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:273 +0x50a\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).NewSequence(0xc000260f00, {0xc000c36000, 0xeb}, {0xc000050700, 0x7, 0x8}, {0x1000, {0x0, 0x0, 0x0}, ...})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:134 +0x8f\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0xc000260f00, {0x569ae471daa8, 0xc0014e6380}, 0xc0001283c0)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:893 +0x5d2\nnet/http.HandlerFunc.ServeHTTP(0xc000142600?, {0x569ae471daa8?, 0xc0014e6380?}, 0xc000e01b60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x569ae323d0c5?, {0x569ae471daa8, 0xc0014e6380}, 0xc0001283c0)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x569ae471a0b0?}, {0x569ae471daa8?, 0xc0014e6380?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0001783f0, {0x569ae471fe68, 0xc000176a50})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
time=2025-11-17T02:11:36.209Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:36599/completion\": EOF"
[GIN] 2025/11/17 - 02:11:36 | 500 |  528.917088ms |       10.1.3.13 | POST     "/api/generate"
time=2025-11-17T02:11:36.210Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000
time=2025-11-17T02:11:36.210Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 duration=5m0s
time=2025-11-17T02:11:36.210Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 refCount=0
time=2025-11-17T02:11:41.954Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097
time=2025-11-17T02:11:41.954Z level=DEBUG source=server.go:1465 msg="completion request" images=7 prompt=235 format=""
time=2025-11-17T02:11:42.279Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:51330: height:30 or width:305 must be larger than factor:32\ngoroutine 138349 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x569ae4556780?, 0xc000ebc240?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).SmartResize(0x569ae471e450?, 0xc00030e3c0?, 0x131)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:54 +0x3e7\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).ProcessImage(0xc001ff4110, {0x569ae472a850, 0xc00030e380}, {0x569ae471e450?, 0xc00030e3c0?})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:92 +0x9c\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc001ff40d0, {0x569ae472a850, 0xc00030e380}, {0xc0000fe2c0, 0xac, 0xae})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:37 +0x113\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).inputs(0xc000260f00, {0xc000c360f0, 0xeb}, {0xc000050800, 0x7, 0xc0000b1008?})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:273 +0x50a\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).NewSequence(0xc000260f00, {0xc000c360f0, 0xeb}, {0xc000050800, 0x7, 0x8}, {0x1000, {0x0, 0x0, 0x0}, ...})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:134 +0x8f\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0xc000260f00, {0x569ae471daa8, 0xc0014e60e0}, 0xc000128140)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:893 +0x5d2\nnet/http.HandlerFunc.ServeHTTP(0xc000142600?, {0x569ae471daa8?, 0xc0014e60e0?}, 0xc00016fb60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x569ae323d0c5?, {0x569ae471daa8, 0xc0014e60e0}, 0xc000128140)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x569ae471a0b0?}, {0x569ae471daa8?, 0xc0014e60e0?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0001781b0, {0x569ae471fe68, 0xc000176a50})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
time=2025-11-17T02:11:42.279Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:36599/completion\": EOF"
[GIN] 2025/11/17 - 02:11:42 | 500 |  517.100846ms |       10.1.3.13 | POST     "/api/generate"
time=2025-11-17T02:11:42.279Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000
time=2025-11-17T02:11:42.279Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 duration=5m0s
time=2025-11-17T02:11:42.279Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 refCount=0
time=2025-11-17T02:11:47.849Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097
time=2025-11-17T02:11:47.850Z level=DEBUG source=server.go:1465 msg="completion request" images=7 prompt=235 format=""
time=2025-11-17T02:11:48.163Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:39222: height:30 or width:305 must be larger than factor:32\ngoroutine 138436 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x569ae4556780?, 0xc000ebc1f0?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).SmartResize(0x569ae471e450?, 0xc0014d42c0?, 0x131)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:54 +0x3e7\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).ProcessImage(0xc001ff4110, {0x569ae472a850, 0xc0014d4280}, {0x569ae471e450?, 0xc0014d42c0?})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:92 +0x9c\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc001ff40d0, {0x569ae472a850, 0xc0014d4280}, {0xc0000fe210, 0xac, 0xae})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:37 +0x113\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).inputs(0xc000260f00, {0xc000c36000, 0xeb}, {0xc000050700, 0x7, 0xc0000b1008?})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:273 +0x50a\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).NewSequence(0xc000260f00, {0xc000c36000, 0xeb}, {0xc000050700, 0x7, 0x8}, {0x1000, {0x0, 0x0, 0x0}, ...})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:134 +0x8f\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0xc000260f00, {0x569ae471daa8, 0xc000578000}, 0xc00015e280)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:893 +0x5d2\nnet/http.HandlerFunc.ServeHTTP(0xc000142600?, {0x569ae471daa8?, 0xc000578000?}, 0xc000e01b60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x569ae323d0c5?, {0x569ae471daa8, 0xc000578000}, 0xc00015e280)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x569ae471a0b0?}, {0x569ae471daa8?, 0xc000578000?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0001783f0, {0x569ae471fe68, 0xc000176a50})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
time=2025-11-17T02:11:48.164Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:36599/completion\": EOF"
[GIN] 2025/11/17 - 02:11:48 | 500 |  509.386815ms |       10.1.3.13 | POST     "/api/generate"
time=2025-11-17T02:11:48.164Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000
time=2025-11-17T02:11:48.164Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 duration=5m0s
time=2025-11-17T02:11:48.164Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 refCount=0
time=2025-11-17T02:11:53.978Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097
time=2025-11-17T02:11:53.979Z level=DEBUG source=server.go:1465 msg="completion request" images=7 prompt=235 format=""
time=2025-11-17T02:11:54.303Z level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:37822: height:30 or width:305 must be larger than factor:32\ngoroutine 138411 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x569ae4556780?, 0xc0003342e0?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).SmartResize(0x569ae471e450?, 0xc0014d4200?, 0x131)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:54 +0x3e7\ngithub.com/ollama/ollama/model/models/qwen3vl.(*ImageProcessor).ProcessImage(0xc001ff4110, {0x569ae472a850, 0xc0014d41c0}, {0x569ae471e450?, 0xc0014d4200?})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/imageprocessor.go:92 +0x9c\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc001ff40d0, {0x569ae472a850, 0xc0014d41c0}, {0xc0000fe2c0, 0xac, 0xae})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:37 +0x113\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).inputs(0xc000260f00, {0xc000c360f0, 0xeb}, {0xc000050800, 0x7, 0xc000347808?})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:273 +0x50a\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).NewSequence(0xc000260f00, {0xc000c360f0, 0xeb}, {0xc000050800, 0x7, 0x8}, {0x1000, {0x0, 0x0, 0x0}, ...})\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:134 +0x8f\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).completion(0xc000260f00, {0x569ae471daa8, 0xc0014e60e0}, 0xc000128140)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:893 +0x5d2\nnet/http.HandlerFunc.ServeHTTP(0xc000142600?, {0x569ae471daa8?, 0xc0014e60e0?}, 0xc0004d3b60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x569ae323d0c5?, {0x569ae471daa8, 0xc0014e60e0}, 0xc000128140)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x569ae471a0b0?}, {0x569ae471daa8?, 0xc0014e60e0?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0001781b0, {0x569ae471fe68, 0xc000176a50})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
[GIN] 2025/11/17 - 02:11:54 | 500 |  520.795464ms |       10.1.3.13 | POST     "/api/generate"
time=2025-11-17T02:11:54.304Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:36599/completion\": EOF"
time=2025-11-17T02:11:54.304Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000
time=2025-11-17T02:11:54.304Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 duration=5m0s
time=2025-11-17T02:11:54.304Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3-vl:30b-a3b-instruct runner.inference="[{ID:00000000-c500-0000-0000-000000000000 Library:Vulkan}]" runner.size="20.6 GiB" runner.vram="20.6 GiB" runner.parallel=1 runner.pid=51 runner.model=/root/.ollama/models/blobs/sha256-8088c24b807ccac3dbf05b0f8a92e6588ecc818a5fb29a942658308c5d8c0097 runner.num_ctx=20000 refCount=0

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.12.11

GiteaMirror added the bug label 2026-04-12 21:27:06 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 17, 2025):

The image is too narrow to be processed by the model. Resize it so that it is at least 32 pixels in height.
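As a client-side workaround sketch (not part of ollama), the image can be padded onto a larger canvas so that both sides are at least 32 pixels before it is sent to the API. The file names below are placeholders, and padding changes the effective image size, so coordinate-based results may shift accordingly.

```go
package main

import (
	"image"
	"image/color"
	"image/draw"
	"image/png"
	"log"
	"os"
)

// padToMinSize pastes img onto a white canvas so that both sides are at
// least min pixels; images that are already large enough are returned as-is.
func padToMinSize(img image.Image, min int) image.Image {
	b := img.Bounds()
	w, h := b.Dx(), b.Dy()
	if w >= min && h >= min {
		return img
	}
	if w < min {
		w = min
	}
	if h < min {
		h = min
	}
	canvas := image.NewRGBA(image.Rect(0, 0, w, h))
	draw.Draw(canvas, canvas.Bounds(), &image.Uniform{C: color.White}, image.Point{}, draw.Src)
	draw.Draw(canvas, image.Rect(0, 0, b.Dx(), b.Dy()), img, b.Min, draw.Over)
	return canvas
}

func main() {
	// Pad input.png to at least 32 px per side and write output.png.
	in, err := os.Open("input.png")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()
	src, err := png.Decode(in)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("output.png")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := png.Encode(out, padToMinSize(src, 32)); err != nil {
		log.Fatal(err)
	}
}
```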

Author
Owner

@adamyang1980 commented on GitHub (Nov 18, 2025):

I understand your point, and I will check image sizes more carefully in the future. I just wanted to note that it is not very user-friendly for Ollama to return a raw 500 error, and it seems to occur only on Vulkan; I haven't seen this error on ROCm.

Author
Owner

@rick-github commented on GitHub (Nov 18, 2025):

The inference backend is not relevant; this is a function of the model. Ollama could be modified to scale or pad the image, but that would affect the output. For example, if the client wanted bounding boxes for elements in the picture, modifying the image to fit the model's constraints could change those results. It is better to let the client know that the image is not supported.
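For illustration only, here is a hedged sketch of how such a size check could be surfaced to the client as a 400 with a readable message instead of a panic that becomes a 500. The names (`ErrImageTooSmall`, `validateImageSize`, `handleCompletion`) and the handler shape are hypothetical and not ollama's actual API.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// ErrImageTooSmall is an illustrative sentinel error, not part of ollama.
var ErrImageTooSmall = errors.New("image smaller than minimum patch size")

// validateImageSize sketches a guard that returns an error instead of panicking.
func validateImageSize(height, width, factor int) error {
	if height < factor || width < factor {
		return fmt.Errorf("%w: height:%d width:%d factor:%d", ErrImageTooSmall, height, width, factor)
	}
	return nil
}

// handleCompletion maps the validation error to a 400 rather than letting a
// panic bubble up as a 500. The request shape is omitted for brevity.
func handleCompletion(w http.ResponseWriter, height, width int) {
	if err := validateImageSize(height, width, 32); err != nil {
		if errors.Is(err, ErrImageTooSmall) {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintln(w, "ok")
}

func main() {
	// The 30x305 image from the logs would now yield a 400 with a clear message.
	rec := httptest.NewRecorder()
	handleCompletion(rec, 30, 305)
	fmt.Println(rec.Code, rec.Body.String())
}
```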

Author
Owner

@adamyang1980 commented on GitHub (Nov 18, 2025):

OK. Thank you for your reply.

Reference: github-starred/ollama#8682