[GH-ISSUE #13084] Qwen3-VL produces garbled output when image inputs exceed num_ctx in multi-turn conversations #34420

Open
opened 2026-04-22 17:57:39 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @Pluser456 on GitHub (Nov 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13084

What is the issue?

Description:

When using Qwen3-VL via Ollama (CLI, desktop app, or `ollama serve`), consecutive image inputs across multiple conversation turns cause the model to output garbled text or repeated content once the prompt token count exceeds the configured num_ctx value. The output becomes almost completely unreadable and unusable.

Reproduction steps:

Using the Ollama app as an example:

  1. Create a new conversation and set the context length (num_ctx) to 8k.
  2. Run multiple conversation turns, each with one image and a short text prompt. In my case, the images are 3236×2160 pixels.
  3. After about 3-6 rounds of conversation, the model outputs heavily garbled content or repeats short sentences.
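The steps above can be sketched as a script against a local Ollama server. The `/api/chat` endpoint, the `images` field on messages, and `options.num_ctx` are real Ollama API features; the model tag, image path, and turn count are placeholders matching this report:

```python
# Reproduction sketch (placeholders: model tag, image file, turn count).
import base64
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/chat"  # default Ollama port


def build_request(messages, image_b64, question, num_ctx=8192):
    """Append one user turn carrying an image; return the /api/chat payload."""
    messages.append({"role": "user", "content": question, "images": [image_b64]})
    return {
        "model": "qwen3-vl:8b-thinking",   # placeholder tag
        "messages": messages,
        "stream": False,
        "options": {"num_ctx": num_ctx},   # the 8k limit from step 1
    }


def chat(payload):
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Repro loop (needs a running server, so commented out here):
# messages = []
# img = base64.b64encode(open("large_3236x2160.png", "rb").read()).decode()
# for turn in range(6):
#     payload = build_request(messages, img, f"Describe this image (turn {turn}).")
#     reply = chat(payload)["message"]
#     messages.append(reply)  # garbling appears once the prompt exceeds num_ctx
```

Each turn re-sends the full history including every image, which is why the prompt token count grows quickly and crosses the 8192-token limit within a few rounds.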

Details:

  • I tested the 8b-thinking and 4b-thinking variants as well as their instruct versions. Once the problem emerges, switching to another Qwen3-VL model does not help: all of them output garbage once one of them starts to.
  • Increasing num_ctx to a larger value makes the problem disappear temporarily, but once the prompt token count nears or exceeds the new value, the problem returns.
  • English is not my first language, so please excuse any grammar mistakes.
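The temporary mitigation described above (raising num_ctx) can be baked into a derived model with a Modelfile; `PARAMETER num_ctx` is standard Modelfile syntax, while the model tags here are placeholders. Per this report it only delays the problem rather than fixing it:

```
FROM qwen3-vl:8b-thinking
PARAMETER num_ctx 65536
```

Then build and run it with `ollama create qwen3-vl-64k -f Modelfile` and `ollama run qwen3-vl-64k`.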

Some captures:

Ollama app:
<img width="1645" height="1468" alt="Image" src="https://github.com/user-attachments/assets/4bac4c3e-b4a3-452e-874c-fa3341d05af0" />
CLI:
<img width="1197" height="1381" alt="Image" src="https://github.com/user-attachments/assets/93af32d2-62d1-4e60-9280-76f95585fd97" />
Open-WebUI:
<img width="2559" height="1306" alt="Image" src="https://github.com/user-attachments/assets/ca1d4bc2-3e4a-4077-84ec-accc33d58813" />

Relevant log output

[GIN] 2025/11/14 - 15:52:43 | 200 |    509.9761ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:53:04 | 200 |     47.6703ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:53:12 | 200 |      3.0871ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:53:14 | 200 |       1.533ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:53:14 | 200 |     41.3704ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:53:14 | 200 |     36.4575ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-14T15:53:14.951+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 14152"
time=2025-11-14T15:53:15.648+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-14T15:53:15.648+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-11-14T15:53:15.704+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-14T15:53:15.704+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --model D:\\Ollama\\models\\blobs\\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 --port 14158"
time=2025-11-14T15:53:15.706+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1
time=2025-11-14T15:53:15.706+08:00 level=INFO source=server.go:658 msg="system memory" total="31.8 GiB" free="16.3 GiB" free_swap="23.5 GiB"
time=2025-11-14T15:53:15.706+08:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-a7554ca8-aee2-6447-1a8c-f5c4e07c3313 library=CUDA available="18.4 GiB" free="18.8 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-14T15:53:15.733+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
time=2025-11-14T15:53:15.740+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:14158"
time=2025-11-14T15:53:15.750+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:6 GPULayers:37[ID:GPU-a7554ca8-aee2-6447-1a8c-f5c4e07c3313 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-14T15:53:15.771+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=40
load_backend: loaded CPU backend from D:\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes, ID: GPU-a7554ca8-aee2-6447-1a8c-f5c4e07c3313
load_backend: loaded CUDA backend from D:\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-14T15:53:15.860+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-11-14T15:53:16.319+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:6 GPULayers:37[ID:GPU-a7554ca8-aee2-6447-1a8c-f5c4e07c3313 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:6 GPULayers:37[ID:GPU-a7554ca8-aee2-6447-1a8c-f5c4e07c3313 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=device.go:212 msg="model weights" device=CUDA0 size="5.4 GiB"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="333.8 MiB"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=device.go:223 msg="kv cache" device=CUDA0 size="1.1 GiB"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=device.go:234 msg="compute graph" device=CUDA0 size="4.5 GiB"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="63.3 MiB"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=device.go:244 msg="total memory" size="11.4 GiB"
time=2025-11-14T15:53:16.772+08:00 level=INFO source=sched.go:500 msg="loaded runners" count=1
time=2025-11-14T15:53:16.772+08:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-14T15:53:16.773+08:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
time=2025-11-14T15:53:19.025+08:00 level=INFO source=server.go:1289 msg="llama runner started in 3.32 seconds"
[GIN] 2025/11/14 - 15:53:19 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/14 - 15:53:19 | 200 |      4.1526ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:53:35 | 200 |    20.259818s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 15:53:49 | 200 |      2.0691ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:54:01 | 200 |     41.3485ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:54:01 | 200 |     44.9351ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:54:13 | 200 |   12.5783706s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 15:54:19 | 200 |      2.0509ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:54:28 | 200 |     40.1061ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:54:28 | 200 |     35.3833ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:54:43 | 200 |   14.5064455s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 15:54:49 | 200 |      2.0509ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:55:15 | 200 |     41.4723ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:55:15 | 200 |     37.1118ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-14T15:55:17.790+08:00 level=WARN source=runner.go:171 msg="truncating input prompt" limit=8192 prompt=9409 keep=4 new=7425
[GIN] 2025/11/14 - 15:55:19 | 200 |      2.1401ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:55:25 | 200 |    9.6799827s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 15:55:41 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/14 - 15:55:41 | 200 |      2.1051ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:56:11 | 200 |      2.0539ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:56:29 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/14 - 15:56:29 | 200 |     36.0295ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:56:29 | 200 |     34.7245ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/14 - 15:56:30 | 200 |     65.0794ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/11/14 - 15:56:36 | 200 |      3.4637ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:57:07 | 200 |      1.5266ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:57:22 | 200 |    15.093466s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 15:57:37 | 200 |       1.545ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:58:07 | 200 |       2.583ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:58:16 | 200 |   15.3443031s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 15:58:37 | 200 |      2.5785ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:59:07 | 200 |      2.6739ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 15:59:37 | 200 |      2.2411ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 16:00:07 | 200 |      2.2337ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 16:00:37 | 200 |      2.5926ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 16:00:41 | 200 |         2m10s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 16:01:07 | 200 |      2.7223ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 16:01:32 | 200 |    9.9693282s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 16:01:37 | 200 |      2.5612ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 16:02:02 | 200 |   16.7811006s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 16:02:07 | 200 |      2.6578ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/14 - 16:02:33 | 200 |   15.8356389s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/11/14 - 16:02:37 | 200 |      2.5625ms |       127.0.0.1 | GET      "/api/tags"
time=2025-11-14T16:02:43.832+08:00 level=WARN source=runner.go:171 msg="truncating input prompt" limit=8192 prompt=9314 keep=4 new=7330
[GIN] 2025/11/14 - 16:02:54 | 200 |   13.2317709s |       127.0.0.1 | POST     "/api/chat"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.10

GiteaMirror added the bug label 2026-04-22 17:57:39 -05:00
Author
Owner

@arquam07 commented on GitHub (Feb 4, 2026):

Have you found a solution?

<!-- gh-comment-id:3850309453 -->
Author
Owner

@Pluser456 commented on GitHub (Feb 5, 2026):

> Have you found a solution?

No, I haven't found a method that fully solves this problem. Enlarging the `num_ctx` parameter in Ollama (e.g. to 65536) helps temporarily.
My current guess is that this problem is intrinsic to Qwen3-VL rather than to Ollama, but I'm not sure.
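Beyond enlarging `num_ctx`, one hedged client-side workaround is to watch the `prompt_eval_count` field that Ollama returns in each non-streamed `/api/chat` response and drop the oldest turns before the prompt reaches the limit. The 0.9 threshold and pairwise trimming below are illustrative choices, not an Ollama feature:

```python
def should_trim(prompt_eval_count, num_ctx, threshold=0.9):
    """True once the last prompt used at least `threshold` of the context window."""
    return prompt_eval_count >= threshold * num_ctx


def drop_oldest_turn(messages):
    """Drop the oldest user/assistant pair, keeping at least the latest exchange."""
    return messages[2:] if len(messages) > 2 else messages
```

For example, with `num_ctx=8192` the truncation warning in the log above fired at `prompt=9409`; checking `should_trim(9409, 8192)` after the previous turn and trimming history would have kept the prompt under the limit instead of letting the runner truncate it mid-conversation.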

<!-- gh-comment-id:3851258327 -->
Reference: github-starred/ollama#34420