[GH-ISSUE #12998] Ollama 0.12.10 Windows: 500 Internal Server Error With Qwen3-VL Models Set to 256k Context Size in Ollama App #34366

Closed
opened 2026-04-22 17:51:10 -05:00 by GiteaMirror · 7 comments

Originally created by @DanTheProgrammerMan on GitHub (Nov 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12998

### What is the issue?

When attempting to load Qwen3-VL models (Tested: `qwen3:32b-vl`, `qwen3:30b-vl`, `qwen3:8b-vl`, `qwen3:4b-vl`, `qwen3:2b-vl`) with a 256k context size set in the Ollama app settings, the following error occurs during model loading:

```txt
500 Internal Server Error: do load request: Post "http://127.0.0.1:52058/load": read tcp 127.0.0.1:52063->127.0.0.1:52058: wsarecv: An existing connection was forcibly closed by the remote host.
```

The issue occurs only with a 256k context size, in both new and existing chats; no errors are observed at 128k or lower.

*In a previous version of Ollama (0.12.8), I ran Qwen3-VL models at a 256k context size and they worked fine.*
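
The app's context-size setting corresponds to the `num_ctx` option, so the same load can likely be triggered outside the app as well; a minimal sketch, assuming `qwen3-vl:8b` is pulled and the server is listening on the default port:

```powershell
# Sketch of a reproduction against the local server; the model tag and the
# hard-coded num_ctx (the app's 256k setting = 262144) are the assumptions.
$body = @{
    model    = "qwen3-vl:8b"
    messages = @(@{ role = "user"; content = "hello" })
    options  = @{ num_ctx = 262144 }
    stream   = $false
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Uri "http://127.0.0.1:11434/api/chat" -Method Post `
    -Body $body -ContentType "application/json"
```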

### Relevant log output

```shell
500 Internal Server Error: do load request: Post "http://127.0.0.1:52058/load": read tcp 127.0.0.1:52063->127.0.0.1:52058: wsarecv: An existing connection was forcibly closed by the remote host.
```

### OS

Windows

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.12.10

GiteaMirror added the bug label 2026-04-22 17:51:10 -05:00

@rick-github commented on GitHub (Nov 7, 2025):

Post the full [server log](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md).
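
On a default Windows install the server log typically lives under `%LOCALAPPDATA%\Ollama`:

```powershell
# Opens the folder containing server.log on a default install
# (location per the linked troubleshooting guide).
explorer "$env:LOCALAPPDATA\Ollama"
```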


@DanTheProgrammerMan commented on GitHub (Nov 7, 2025):

```
time=2025-11-07T18:10:03.828+08:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:262144 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\Ollama_Models\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-11-07T18:10:03.847+08:00 level=INFO source=images.go:522 msg="total blobs: 152"
time=2025-11-07T18:10:03.855+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 12"
time=2025-11-07T18:10:03.863+08:00 level=INFO source=routes.go:1578 msg="Listening on 127.0.0.1:11434 (version 0.12.10)"
time=2025-11-07T18:10:03.864+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-07T18:10:03.872+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\User1\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 61784"
time=2025-11-07T18:10:04.037+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\User1\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 61790"
time=2025-11-07T18:10:04.158+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\User1\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 61796"
time=2025-11-07T18:10:04.254+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-92e587c5-babb-38a1-6eaf-4c4e6b79f937 filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4070 Laptop GPU" libdirs=ollama,cuda_v12 driver=12.6 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="5.8 GiB"
time=2025-11-07T18:10:04.254+08:00 level=INFO source=routes.go:1619 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"
[GIN] 2025/11/07 - 18:10:04 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/11/07 - 18:10:04 | 200 | 9.7079ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:10:14 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2025/11/07 - 18:10:14 | 200 | 8.0731ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:10:15 | 200 | 8.7445ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:10:31 | 200 | 39.7752ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/11/07 - 18:10:31 | 200 | 44.3146ms | 127.0.0.1 | POST "/api/show"
time=2025-11-07T18:10:31.182+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\User1\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 57351"
time=2025-11-07T18:10:31.358+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-07T18:10:31.358+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-11-07T18:10:31.358+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-11-07T18:10:31.425+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-07T18:10:31.426+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\User1\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model D:\Ollama_Models\models\blobs\sha256-b1da6f96a2e40e5db05b6066d799c69411225b336bfa20ef1b002c223ed4b190 --port 57356"
time=2025-11-07T18:10:31.429+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=49 requested=-1
time=2025-11-07T18:10:31.429+08:00 level=INFO source=server.go:658 msg="system memory" total="31.8 GiB" free="18.9 GiB" free_swap="30.3 GiB"
time=2025-11-07T18:10:31.429+08:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-92e587c5-babb-38a1-6eaf-4c4e6b79f937 library=CUDA available="5.4 GiB" free="5.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-07T18:10:31.465+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
time=2025-11-07T18:10:31.466+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:57356"
time=2025-11-07T18:10:31.472+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:8 GPULayers:49[ID:GPU-92e587c5-babb-38a1-6eaf-4c4e6b79f937 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-07T18:10:31.492+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vlmoe file_type=Q4_K_M name="" description="" num_tensors=1038 num_key_values=43
load_backend: loaded CPU backend from C:\Users\User1\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Laptop GPU, compute capability 8.9, VMM: yes, ID: GPU-92e587c5-babb-38a1-6eaf-4c4e6b79f937
load_backend: loaded CUDA backend from C:\Users\User1\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-07T18:10:31.606+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:325: GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX) failed
time=2025-11-07T18:10:33.252+08:00 level=INFO source=sched.go:453 msg="Load failed" model=D:\Ollama_Models\models\blobs\sha256-b1da6f96a2e40e5db05b6066d799c69411225b336bfa20ef1b002c223ed4b190 error="do load request: Post "http://127.0.0.1:57356/load": read tcp 127.0.0.1:57361->127.0.0.1:57356: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/11/07 - 18:10:33 | 500 | 2.1441039s | 127.0.0.1 | POST "/api/chat"
time=2025-11-07T18:10:33.339+08:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/11/07 - 18:10:45 | 200 | 9.4464ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:11:15 | 200 | 8.9193ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:11:45 | 200 | 8.4621ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:12:15 | 200 | 7.8896ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:12:45 | 200 | 7.7903ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:13:15 | 200 | 7.6399ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:13:45 | 200 | 9.9203ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/11/07 - 18:14:10 | 200 | 44.308ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/11/07 - 18:14:10 | 200 | 37.7429ms | 127.0.0.1 | POST "/api/show"
time=2025-11-07T18:14:10.117+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\User1\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 61807"
time=2025-11-07T18:14:10.311+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-07T18:14:10.311+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-11-07T18:14:10.311+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-11-07T18:14:10.358+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-07T18:14:10.359+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\User1\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model D:\Ollama_Models\models\blobs\sha256-b1da6f96a2e40e5db05b6066d799c69411225b336bfa20ef1b002c223ed4b190 --port 61812"
time=2025-11-07T18:14:10.362+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=49 requested=-1
time=2025-11-07T18:14:10.362+08:00 level=INFO source=server.go:658 msg="system memory" total="31.8 GiB" free="19.0 GiB" free_swap="31.0 GiB"
time=2025-11-07T18:14:10.362+08:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-92e587c5-babb-38a1-6eaf-4c4e6b79f937 library=CUDA available="5.4 GiB" free="5.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-07T18:14:10.396+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
time=2025-11-07T18:14:10.396+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:61812"
time=2025-11-07T18:14:10.405+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:8 GPULayers:49[ID:GPU-92e587c5-babb-38a1-6eaf-4c4e6b79f937 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-07T18:14:10.423+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vlmoe file_type=Q4_K_M name="" description="" num_tensors=1038 num_key_values=43
load_backend: loaded CPU backend from C:\Users\User1\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Laptop GPU, compute capability 8.9, VMM: yes, ID: GPU-92e587c5-babb-38a1-6eaf-4c4e6b79f937
load_backend: loaded CUDA backend from C:\Users\User1\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-07T18:14:10.518+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:325: GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX) failed
time=2025-11-07T18:14:12.152+08:00 level=INFO source=sched.go:453 msg="Load failed" model=D:\Ollama_Models\models\blobs\sha256-b1da6f96a2e40e5db05b6066d799c69411225b336bfa20ef1b002c223ed4b190 error="do load request: Post "http://127.0.0.1:61812/load": read tcp 127.0.0.1:61817->127.0.0.1:61812: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/11/07 - 18:14:12 | 500 | 2.1068276s | 127.0.0.1 | POST "/api/chat"
time=2025-11-07T18:14:12.237+08:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 0xc0000409"
```


@rick-github commented on GitHub (Nov 7, 2025):

Perhaps https://github.com/ggml-org/llama.cpp/issues/15049
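
That assert fires when a single contiguous tensor exceeds `INT_MAX` (2^31 − 1, about 2 GiB) bytes. A minimal sketch of the arithmetic, assuming a hypothetical full-context f16 buffer with a 5120-wide hidden dimension (the exact tensor `cpy.cu` copies depends on the model graph):

```powershell
# Sketch of the overflow behind GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX).
# hiddenDim and the f16 element size are assumptions (~32B-class model).
$intMax       = [int64]2147483647
$hiddenDim    = 5120
$bytesPerElem = 2   # f16

foreach ($ctx in 131072, 262144) {
    $nbytes = [int64]$ctx * $hiddenDim * $bytesPerElem
    "ctx=$ctx -> $nbytes bytes, exceeds INT_MAX: $($nbytes -gt $intMax)"
}
```

Under those assumptions the 128k case lands just under the 2 GiB limit and the 256k case just over it, which matches the 128k-works/256k-fails boundary reported above.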


@magnusbonnevier commented on GitHub (Nov 10, 2025):

Confirmed bug: I had problems running the qwen3-vl:2b and qwen3-vl:4b models with the exact error reported.

I asked myself what had changed; it was the 256k context window setting I had tried in the UI. After setting that, the error was constant.

Thanks to this issue report, I dialed it back to 128k and all of a sudden the models loaded fine.

This is strange.

However, the qwen3-vl:235b-cloud cloud model did not care about the 256k context window setting.


@dambergn commented on GitHub (Nov 10, 2025):

I had tried changing the context for Qwen3-VL:30b to 256k with a Modelfile. I did this because Open WebUI will sometimes ignore the Ollama settings and use the default, and I have found that creating a new model with the context hard-set helps with this, at least with gpt-oss. When I try to run it, I get a similar 500 server error. As a test I also made a smaller model with a 32k context window and ran into the same issue.
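
For reference, the hard-set-context approach described above looks roughly like this (the model tag and value are illustrative):

```
# Hypothetical Modelfile that pins the context length in the derived model,
# so clients that ignore server settings still get 256k.
FROM qwen3-vl:30b
PARAMETER num_ctx 262144
```

built with something like `ollama create qwen3-vl-256k -f Modelfile`.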

[ollama2.log](https://github.com/user-attachments/files/23457775/ollama2.log)


@myonlang commented on GitHub (Jan 29, 2026):

Why the hell is nobody working on this? Why is this still open?


@rick-github commented on GitHub (Jan 29, 2026):

Thanks for pinging the thread. This issue, `GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX)` due to a large context, appears to be fixed as of 0.13.3.

```console
$ ollama -v
ollama version is 0.12.10
$ ollama-run.py qwen3-vl:32b hello --context 262144
do load request: Post "http://127.0.0.1:33281/load": EOF (status code: 500)
```

```console
0.13.0   do load request: Post "http://127.0.0.1:45033/load": EOF (status code: 500)
0.13.1   do load request: Post "http://127.0.0.1:40675/load": EOF (status code: 500)
0.13.2   do load request: Post "http://127.0.0.1:36627/load": EOF (status code: 500)
0.13.3   Thinking...
0.13.4   Thinking...
0.13.5   Thinking...
0.14.0   Thinking...
0.14.1   Thinking...
0.14.2   Thinking...
0.14.3   Thinking...
0.15.0   Thinking...
0.15.1   Thinking...
0.15.2   Thinking...
```
| model | version | output |
|--|--|--|
| qwen3-vl:32b | 0.12.10 | (status code: 500) |
| qwen3-vl:30b | 0.12.10 | (status code: 500) |
| qwen3-vl:8b | 0.12.10 | (status code: 500) |
| qwen3-vl:4b | 0.12.10 | (status code: 500) |
| qwen3-vl:2b | 0.12.10 | (status code: 500) |
| qwen3-vl:32b | 0.13.2 | (status code: 500) |
| qwen3-vl:30b | 0.13.2 | (status code: 500) |
| qwen3-vl:8b | 0.13.2 | (status code: 500) |
| qwen3-vl:4b | 0.13.2 | (status code: 500) |
| qwen3-vl:2b | 0.13.2 | (status code: 500) |
| qwen3-vl:32b | 0.13.3 | Thinking... |
| qwen3-vl:30b | 0.13.3 | Thinking... |
| qwen3-vl:8b | 0.13.3 | Thinking... |
| qwen3-vl:4b | 0.13.3 | Thinking... |
| qwen3-vl:2b | 0.13.3 | Thinking... |
| qwen3-vl:32b | 0.15.2 | Thinking... |
| qwen3-vl:30b | 0.15.2 | Thinking... |
| qwen3-vl:8b | 0.15.2 | Thinking... |
| qwen3-vl:4b | 0.15.2 | Thinking... |
| qwen3-vl:2b | 0.15.2 | Thinking... |

If somebody with a Windows machine could check, that would be helpful.
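
A sketch of such a check, assuming a default Windows install (`OLLAMA_CONTEXT_LENGTH` is the same server setting that appears in the log above):

```powershell
# Stop the desktop app first so it does not hold the port, then start the
# server with a 256k default context and try loading a Qwen3-VL model.
# On a fixed build this should answer; on 0.12.10 it returned the 500 above.
$env:OLLAMA_CONTEXT_LENGTH = "262144"
ollama serve
# In a second terminal:
ollama run qwen3-vl:8b "hello"
```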

Reference: github-starred/ollama#34366