[GH-ISSUE #13032] Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:11680/load": read tcp 127.0.0.1:11685->127.0.0.1:11680: wsarecv: An existing connection was forcibly closed by the remote host. #34394

Closed
opened 2026-04-22 17:54:53 -05:00 by GiteaMirror · 4 comments

Originally created by @survivor998 on GitHub (Nov 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13032

What is the issue?

Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:11680/load": read tcp 127.0.0.1:11685->127.0.0.1:11680: wsarecv: An existing connection was forcibly closed by the remote host.

Relevant log output

time=2025-11-10T08:45:10.464+08:00 level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:262144 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\Ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-11-10T08:45:10.493+08:00 level=INFO source=images.go:522 msg="total blobs: 29"
time=2025-11-10T08:45:10.494+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-10T08:45:10.495+08:00 level=INFO source=routes.go:1578 msg="Listening on 127.0.0.1:11434 (version 0.12.10)"
time=2025-11-10T08:45:10.497+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-10T08:45:10.505+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50364"
time=2025-11-10T08:45:10.952+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50370"
time=2025-11-10T08:45:11.108+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50376"
time=2025-11-10T08:45:11.624+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-e7309209-2017-a73d-80f8-00c15d654357 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090 Laptop GPU" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:02:00.0 type=discrete total="23.9 GiB" available="23.5 GiB"
[GIN] 2025/11/10 - 08:46:20 | 200 |       521.1µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/10 - 08:46:20 | 200 |      4.8193ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/10 - 08:46:39 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/10 - 08:46:39 | 200 |     63.9752ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/10 - 08:46:39 | 200 |     52.7208ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-10T08:46:39.265+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 23788"
time=2025-11-10T08:46:42.260+08:00 level=INFO source=runner.go:442 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v12]" extra_envs=map[] error="failed to finish discovery before timeout"
time=2025-11-10T08:46:42.262+08:00 level=WARN source=runner.go:334 msg="unable to refresh free memory, using old values"
time=2025-11-10T08:46:42.262+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-10T08:46:42.262+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-11-10T08:46:42.262+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=24 efficiency=16 threads=24
time=2025-11-10T08:46:42.322+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-10T08:46:42.323+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model D:\\Ollama\\blobs\\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 --port 23794"
time=2025-11-10T08:46:42.334+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1
time=2025-11-10T08:46:42.334+08:00 level=INFO source=server.go:658 msg="system memory" total="63.5 GiB" free="50.2 GiB" free_swap="53.9 GiB"
time=2025-11-10T08:46:42.334+08:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-e7309209-2017-a73d-80f8-00c15d654357 library=CUDA available="23.0 GiB" free="23.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-10T08:46:42.368+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
time=2025-11-10T08:46:42.374+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:23794"
time=2025-11-10T08:46:42.377+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-e7309209-2017-a73d-80f8-00c15d654357 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-10T08:46:42.393+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=40
load_backend: loaded CPU backend from C:\Users\savior\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090 Laptop GPU, compute capability 12.0, VMM: yes, ID: GPU-e7309209-2017-a73d-80f8-00c15d654357
load_backend: loaded CUDA backend from C:\Users\savior\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-10T08:46:42.509+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:325: GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX) failed
time=2025-11-10T08:46:44.202+08:00 level=INFO source=sched.go:453 msg="Load failed" model=D:\Ollama\blobs\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 error="do load request: Post \"http://127.0.0.1:23794/load\": read tcp 127.0.0.1:23799->127.0.0.1:23794: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/11/10 - 08:46:44 | 500 |    5.0216666s |       127.0.0.1 | POST     "/api/generate"
time=2025-11-10T08:46:44.303+08:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/11/10 - 08:48:13 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/10 - 08:48:13 | 200 |     49.0193ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/10 - 08:48:13 | 200 |     41.4317ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-10T08:48:13.252+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 11673"
time=2025-11-10T08:48:13.452+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-10T08:48:13.452+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-11-10T08:48:13.452+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=24 efficiency=16 threads=24
time=2025-11-10T08:48:13.508+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-10T08:48:13.509+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model D:\\Ollama\\blobs\\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 --port 11680"
time=2025-11-10T08:48:13.519+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1
time=2025-11-10T08:48:13.520+08:00 level=INFO source=server.go:658 msg="system memory" total="63.5 GiB" free="48.5 GiB" free_swap="52.2 GiB"
time=2025-11-10T08:48:13.520+08:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-e7309209-2017-a73d-80f8-00c15d654357 library=CUDA available="23.0 GiB" free="23.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-10T08:48:13.553+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
time=2025-11-10T08:48:13.560+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:11680"
time=2025-11-10T08:48:13.564+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-e7309209-2017-a73d-80f8-00c15d654357 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-10T08:48:13.580+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=40
load_backend: loaded CPU backend from C:\Users\savior\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090 Laptop GPU, compute capability 12.0, VMM: yes, ID: GPU-e7309209-2017-a73d-80f8-00c15d654357
load_backend: loaded CUDA backend from C:\Users\savior\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-10T08:48:13.681+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:325: GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX) failed
time=2025-11-10T08:48:15.148+08:00 level=INFO source=sched.go:453 msg="Load failed" model=D:\Ollama\blobs\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 error="do load request: Post \"http://127.0.0.1:11680/load\": read tcp 127.0.0.1:11685->127.0.0.1:11680: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/11/10 - 08:48:15 | 500 |    1.9824605s |       127.0.0.1 | POST     "/api/generate"
time=2025-11-10T08:48:15.234+08:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 0xc0000409"
[GIN] 2025/11/10 - 08:48:38 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/10 - 08:48:38 | 200 |      2.1196ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/10 - 08:48:38 | 404 |      1.3141ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/10 - 08:48:41 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/10 - 08:48:41 | 200 |      1.6152ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/10 - 08:48:56 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/10 - 08:48:56 | 200 |      1.7522ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/10 - 08:49:07 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/10 - 08:53:55 | 200 |       507.9µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/10 - 08:53:55 | 200 |     91.1104ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/10 - 08:53:55 | 200 |     58.8462ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-10T08:53:55.717+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 55002"
time=2025-11-10T08:53:56.031+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-10T08:53:56.031+08:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-11-10T08:53:56.031+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=24 efficiency=16 threads=24
time=2025-11-10T08:53:56.115+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-10T08:53:56.117+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\savior\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model D:\\Ollama\\blobs\\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 --port 49779"
time=2025-11-10T08:53:56.132+08:00 level=INFO source=server.go:653 msg="loading model" "model layers"=37 requested=-1
time=2025-11-10T08:53:56.132+08:00 level=INFO source=server.go:658 msg="system memory" total="63.5 GiB" free="46.9 GiB" free_swap="50.6 GiB"
time=2025-11-10T08:53:56.132+08:00 level=INFO source=server.go:665 msg="gpu memory" id=GPU-e7309209-2017-a73d-80f8-00c15d654357 library=CUDA available="23.0 GiB" free="23.5 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-10T08:53:56.171+08:00 level=INFO source=runner.go:1349 msg="starting ollama engine"
time=2025-11-10T08:53:56.177+08:00 level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:49779"
time=2025-11-10T08:53:56.186+08:00 level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:262144 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-e7309209-2017-a73d-80f8-00c15d654357 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-10T08:53:56.202+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=40
load_backend: loaded CPU backend from C:\Users\savior\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090 Laptop GPU, compute capability 12.0, VMM: yes, ID: GPU-e7309209-2017-a73d-80f8-00c15d654357
load_backend: loaded CUDA backend from C:\Users\savior\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-10T08:53:56.305+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:325: GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX) failed
time=2025-11-10T08:53:57.827+08:00 level=INFO source=sched.go:453 msg="Load failed" model=D:\Ollama\blobs\sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 error="do load request: Post \"http://127.0.0.1:49779/load\": read tcp 127.0.0.1:49785->127.0.0.1:49779: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/11/10 - 08:53:57 | 500 |     2.218512s |       127.0.0.1 | POST     "/api/generate"
time=2025-11-10T08:53:57.916+08:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 0xc0000409"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.10

GiteaMirror added the bug label 2026-04-22 17:54:53 -05:00

@survivor998 commented on GitHub (Nov 10, 2025):

The bug only happens when I use Qwen3-VL models (like 8B or 30B); other models like gpt-oss and Qwen2.5-VL are not affected.


@survivor998 commented on GitHub (Nov 10, 2025):

I tried downloading several times, so the network should not be the problem.


@rick-github commented on GitHub (Nov 10, 2025):

C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:325: GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX) failed

Perhaps #12998.
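For intuition, here is a minimal back-of-the-envelope sketch of how this assert can trip at a 262144-token context but not at 131072. The hidden dimension and element size below are assumptions chosen for illustration; the model's real tensor shapes may differ.

```python
# Illustrates (with assumed shapes) how a single f16 tensor's byte count
# can exceed INT_MAX, tripping GGML_ASSERT(ggml_nbytes(src0) <= INT_MAX).
INT_MAX = 2**31 - 1  # 2147483647, the limit checked in cpy.cu

def tensor_nbytes(ctx_len, hidden_dim=4096, bytes_per_elem=2):
    """Byte size of one [hidden_dim, ctx_len] tensor (f16 = 2 bytes/element)."""
    return ctx_len * hidden_dim * bytes_per_elem

for ctx in (131072, 262144):
    n = tensor_nbytes(ctx)
    print(f"ctx={ctx}: {n} bytes ->", "OK" if n <= INT_MAX else "assert would fail")
```

With these assumed dimensions, 262144 tokens lands at exactly 2^31 bytes, one past INT_MAX, while 131072 tokens stays at half that, which matches the observation that halving the context window avoids the crash.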


@magnusbonnevier commented on GitHub (Nov 10, 2025):

Seems to be related to the context window setting of 256k for some reason, as stated in the #12998 issue.

Dial it back to 128k in the UI or CLI settings and see if that works.
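Two standard ways to cap the context window as suggested above (the 131072 value is an example; the server log shows OLLAMA_CONTEXT_LENGTH currently set to 262144):

```shell
# Option 1: set the server-wide default before starting the server (Windows cmd)
set OLLAMA_CONTEXT_LENGTH=131072
ollama serve

# Option 2: per-session in the interactive CLI
ollama run qwen3-vl:8b
# then inside the REPL:
#   /set parameter num_ctx 131072
```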


Reference: github-starred/ollama#34394