[GH-ISSUE #14501] Ollama 500 Error with Qwen3.5:35b-a3b and qwen3.5:27b-q4_K_M Models #55920

Closed
opened 2026-04-29 09:57:01 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @lyvs2012 on GitHub (Feb 27, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14501

What is the issue?

Issue Report: Qwen3.5 Models Fail with a 500 Error on Ollama

System Context:
I can successfully run other models of similar or larger size on this machine (e.g., Qwen3 at 48 GB, GPT-OSS:120B at 60 GB, GLM-4.7-Flash at 19 GB), so the hardware is sufficient for heavy models.

Problem Description:
I get a 500 Internal Server Error when calling the Qwen3.5 models through the Ollama client.

Reproduction Steps & Symptoms:
Initial run: the model loads and a conversation session opens successfully the very first time.
Subsequent runs: after that initial session, any further call to the model, or attempt to open a new dialog, immediately returns a 500 error.
Model sizes: the two local Qwen3.5 variants are approximately 17 GB and 23 GB, respectively.

Question:
Why do the Qwen3.5 models specifically fail with a 500 error after first use, while significantly larger models work normally?
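The failing request can be reproduced with a plain POST to the chat endpoint. A minimal sketch, assuming the default host from the logs (`127.0.0.1:11434`) and using one of the tags from the title as a stand-in (substitute whatever `ollama list` shows locally). On the failing runs, sending this a second time after the first successful session is what surfaces the 500:

```python
# Minimal reproduction sketch using only the Python standard library.
# The model tag below is an assumption taken from the issue title;
# replace it with the exact tag reported by `ollama list`.
import json
from urllib import request, error

payload = json.dumps({
    "model": "qwen3.5:27b-q4_K_M",  # hypothetical tag, adjust locally
    "messages": [{"role": "user", "content": "hello"}],
    "stream": False,
}).encode("utf-8")

req = request.Request(
    "http://127.0.0.1:11434/api/chat",  # default OLLAMA_HOST from the logs
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with request.urlopen(req, timeout=300) as resp:
        print("status:", resp.status)
except error.HTTPError as e:
    # On the broken runs this arrives as HTTP 500 once the runner process
    # crashes with "CUDA error: invalid argument" in ggml_cuda_cpy.
    print("HTTP error:", e.code)
except error.URLError as e:
    # Server not running / unreachable.
    print("no server:", e.reason)
```

Per the logs, the 500 is the router's reaction to the runner process dying mid-request (`llama runner terminated` with `exit status 1`), not an error in the request itself.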

Relevant log output

Couldn't find 'C:\Users\LiYong\.ollama\id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMrLVPjXm1m8rK8DKoebe3/5pXoygVk8dw2y8QZEsod

time=2026-02-27T16:38:11.468+08:00 level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\LiYong\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-02-27T16:38:11.471+08:00 level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
time=2026-02-27T16:38:11.472+08:00 level=INFO source=images.go:473 msg="total blobs: 0"
time=2026-02-27T16:38:11.472+08:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-02-27T16:38:11.472+08:00 level=INFO source=routes.go:1718 msg="Listening on 127.0.0.1:11434 (version 0.17.4)"
time=2026-02-27T16:38:11.473+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-02-27T16:38:11.483+08:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-02-27T16:38:11.490+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 13703"
time=2026-02-27T16:38:11.768+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 13708"
time=2026-02-27T16:38:11.968+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 13713"
time=2026-02-27T16:38:12.175+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 13720"
time=2026-02-27T16:38:12.175+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 13721"
time=2026-02-27T16:38:12.370+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5070 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="15.9 GiB" available="14.5 GiB"
time=2026-02-27T16:38:12.370+08:00 level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="15.9 GiB" default_num_ctx=4096
[GIN] 2026/02/27 - 16:38:12 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/02/27 - 16:39:09 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/02/27 - 16:39:09 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/02/27 - 16:39:09 | 200 |      2.1133ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:39:09 | 404 |      1.9031ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 16:39:10 | 401 |    388.1598ms |       127.0.0.1 | POST     "/api/me"
[GIN] 2026/02/27 - 16:39:10 | 401 |    511.7081ms |       127.0.0.1 | POST     "/api/me"
[GIN] 2026/02/27 - 16:39:14 | 200 |    130.5896ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 16:39:17 | 200 |      1.0376ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:39:17 | 200 |    121.9582ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 16:39:17 | 200 |    121.0927ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-27T16:39:17.782+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 12315"
time=2026-02-27T16:39:17.938+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-27T16:39:17.938+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2026-02-27T16:39:18.012+08:00 level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-02-27T16:39:18.013+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\LiYong\\.ollama\\models\\blobs\\sha256-7935de6e08f9444536d0edcacf19d2166b34bef8ddb4ac7ce9263ff5cad0693b --port 12320"
time=2026-02-27T16:39:18.024+08:00 level=INFO source=sched.go:491 msg="system memory" total="143.7 GiB" free="130.1 GiB" free_swap="135.4 GiB"
time=2026-02-27T16:39:18.024+08:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee library=CUDA available="13.9 GiB" free="14.3 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-27T16:39:18.024+08:00 level=INFO source=server.go:757 msg="loading model" "model layers"=65 requested=-1
time=2026-02-27T16:39:18.054+08:00 level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-27T16:39:18.058+08:00 level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:12320"
time=2026-02-27T16:39:18.066+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:65[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:65(0..64)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:39:18.095+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=1307 num_key_values=53
load_backend: loaded CPU backend from C:\Users\LiYong\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5070 Ti, compute capability 12.0, VMM: yes, ID: GPU-22e15c16-4a8d-b130-9dda-d5c054898aee
load_backend: loaded CUDA backend from C:\Users\LiYong\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-02-27T16:39:18.175+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-02-27T16:39:18.644+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:48[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:48(16..63)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:39:18.886+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:48[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:48(16..63)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:48[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:48(16..63)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=ggml.go:482 msg="offloading 48 repeating layers to GPU"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=ggml.go:494 msg="offloaded 48/65 layers to GPU"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="10.1 GiB"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="6.2 GiB"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="2.9 GiB"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="999.2 MiB"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="799.3 MiB"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="1.0 GiB"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=device.go:272 msg="total memory" size="21.9 GiB"
time=2026-02-27T16:39:19.548+08:00 level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-27T16:39:19.549+08:00 level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-27T16:39:19.555+08:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-02-27T16:39:21.810+08:00 level=INFO source=server.go:1388 msg="llama runner started in 3.79 seconds"
[GIN] 2026/02/27 - 16:39:47 | 200 |      1.5064ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:40:17 | 200 |      3.1242ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:40:47 | 200 |      1.5386ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:41:04 | 200 |         1m47s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/02/27 - 16:41:17 | 200 |      1.0201ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:41:47 | 200 |      1.0384ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:42:17 | 200 |      1.0917ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:42:47 | 200 |      1.0375ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:43:17 | 200 |      1.0341ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:43:45 | 200 |    127.4648ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 16:43:45 | 200 |    115.6221ms |       127.0.0.1 | POST     "/api/show"
CUDA error: invalid argument
  current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438
  cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-02-27T16:43:46.061+08:00 level=ERROR source=server.go:1610 msg="post predict" error="Post \"http://127.0.0.1:12320/completion\": read tcp 127.0.0.1:14337->127.0.0.1:12320: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2026/02/27 - 16:43:46 | 500 |    458.4568ms |       127.0.0.1 | POST     "/api/chat"
time=2026-02-27T16:43:46.633+08:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 1"
[GIN] 2026/02/27 - 16:43:47 | 200 |      1.3126ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:44:04 | 200 |    123.7783ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 16:44:08 | 200 |    126.5724ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 16:44:08 | 200 |    113.2701ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-27T16:44:08.940+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 14367"
time=2026-02-27T16:44:09.104+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-27T16:44:09.104+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2026-02-27T16:44:09.123+08:00 level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee library=CUDA total="15.9 GiB" available="2.2 GiB"
time=2026-02-27T16:44:09.180+08:00 level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-02-27T16:44:09.182+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\LiYong\\.ollama\\models\\blobs\\sha256-2abd0d805943fa113f934d1ae4f2d5a749b5d4fe2a0a9c64b645c1df15868da7 --port 14372"
time=2026-02-27T16:44:09.190+08:00 level=INFO source=sched.go:491 msg="system memory" total="143.7 GiB" free="129.4 GiB" free_swap="135.3 GiB"
time=2026-02-27T16:44:09.191+08:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee library=CUDA available="1.7 GiB" free="2.2 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-27T16:44:09.191+08:00 level=INFO source=server.go:757 msg="loading model" "model layers"=41 requested=-1
time=2026-02-27T16:44:09.221+08:00 level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-27T16:44:09.226+08:00 level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:14372"
time=2026-02-27T16:44:09.234+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:09.262+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen35moe file_type=Q4_K_M name="" description="" num_tensors=1959 num_key_values=57
load_backend: loaded CPU backend from C:\Users\LiYong\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5070 Ti, compute capability 12.0, VMM: yes, ID: GPU-22e15c16-4a8d-b130-9dda-d5c054898aee
load_backend: loaded CUDA backend from C:\Users\LiYong\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-02-27T16:44:09.340+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-02-27T16:44:09.830+08:00 level=INFO source=server.go:1029 msg="model requires more gpu memory than is currently available, evicting a model to make space" "loaded layers"=2
time=2026-02-27T16:44:09.830+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:Disabled KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="21.9 GiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="277.3 MiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.6 GiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="554.5 MiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="15.8 MiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:272 msg="total memory" size="24.4 GiB"
time=2026-02-27T16:44:09.841+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 14379"
time=2026-02-27T16:44:10.234+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10027"
time=2026-02-27T16:44:10.485+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10032"
time=2026-02-27T16:44:10.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10037"
time=2026-02-27T16:44:10.985+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10042"
time=2026-02-27T16:44:11.235+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10057"
time=2026-02-27T16:44:11.484+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10062"
time=2026-02-27T16:44:11.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10067"
time=2026-02-27T16:44:11.985+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10072"
time=2026-02-27T16:44:12.236+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10077"
time=2026-02-27T16:44:12.485+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10088"
time=2026-02-27T16:44:12.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10101"
time=2026-02-27T16:44:12.985+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10113"
time=2026-02-27T16:44:13.235+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10118"
time=2026-02-27T16:44:13.486+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10123"
time=2026-02-27T16:44:13.734+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10128"
time=2026-02-27T16:44:13.987+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10133"
time=2026-02-27T16:44:14.235+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10138"
time=2026-02-27T16:44:14.486+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10145"
time=2026-02-27T16:44:14.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10150"
time=2026-02-27T16:44:14.984+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10155"
time=2026-02-27T16:44:14.993+08:00 level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout"
time=2026-02-27T16:44:14.993+08:00 level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-02-27T16:44:14.995+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10156"
time=2026-02-27T16:44:15.138+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-27T16:44:15.138+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2026-02-27T16:44:15.191+08:00 level=INFO source=sched.go:491 msg="system memory" total="143.7 GiB" free="128.6 GiB" free_swap="134.1 GiB"
time=2026-02-27T16:44:15.191+08:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee library=CUDA available="14.1 GiB" free="14.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-27T16:44:15.191+08:00 level=INFO source=server.go:757 msg="loading model" "model layers"=41 requested=-1
time=2026-02-27T16:44:15.191+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:24(16..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:15.418+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:24(16..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:24(16..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=ggml.go:494 msg="offloaded 24/41 layers to GPU"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="12.2 GiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="10.0 GiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="990.2 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="660.1 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="589.2 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="630.8 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:272 msg="total memory" size="25.0 GiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-27T16:44:15.854+08:00 level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-27T16:44:15.855+08:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
[GIN] 2026/02/27 - 16:44:17 | 200 |      1.6005ms |       127.0.0.1 | GET      "/api/tags"
time=2026-02-27T16:44:18.612+08:00 level=INFO source=server.go:1388 msg="llama runner started in 9.42 seconds"
CUDA error: invalid argument
  current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438
  cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-02-27T16:44:18.825+08:00 level=ERROR source=server.go:1610 msg="post predict" error="Post \"http://127.0.0.1:14372/completion\": read tcp 127.0.0.1:14377->127.0.0.1:14372: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2026/02/27 - 16:44:18 | 500 |   10.0025406s |       127.0.0.1 | POST     "/api/chat"
time=2026-02-27T16:44:19.097+08:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 1"
[GIN] 2026/02/27 - 16:44:47 | 200 |       530.3µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:45:17 | 200 |       1.017ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:45:47 | 200 |      1.2218ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 16:46:17 | 200 |      1.2302ms |       127.0.0.1 | GET      "/api/tags"

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

v0.17.4/v0.17.3/v0.17.2/v0.17.1/v0.17.0

"/api/tags" [GIN] 2026/02/27 - 16:41:04 | 200 | 1m47s | 127.0.0.1 | POST "/api/chat" [GIN] 2026/02/27 - 16:41:17 | 200 | 1.0201ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/02/27 - 16:41:47 | 200 | 1.0384ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/02/27 - 16:42:17 | 200 | 1.0917ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/02/27 - 16:42:47 | 200 | 1.0375ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/02/27 - 16:43:17 | 200 | 1.0341ms | 127.0.0.1 | GET "/api/tags" [GIN] 2026/02/27 - 16:43:45 | 200 | 127.4648ms | 127.0.0.1 | POST "/api/show" [GIN] 2026/02/27 - 16:43:45 | 200 | 115.6221ms | 127.0.0.1 | POST "/api/show" CUDA error: invalid argument current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438 cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream) C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error time=2026-02-27T16:43:46.061+08:00 level=ERROR source=server.go:1610 msg="post predict" error="Post \"http://127.0.0.1:12320/completion\": read tcp 127.0.0.1:14337->127.0.0.1:12320: wsarecv: An existing connection was forcibly closed by the remote host." 
[GIN] 2026/02/27 - 16:43:46 | 500 | 458.4568ms | 127.0.0.1 | POST "/api/chat"
time=2026-02-27T16:43:46.633+08:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 1"
[GIN] 2026/02/27 - 16:43:47 | 200 | 1.3126ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/02/27 - 16:44:04 | 200 | 123.7783ms | 127.0.0.1 | POST "/api/show"
[GIN] 2026/02/27 - 16:44:08 | 200 | 126.5724ms | 127.0.0.1 | POST "/api/show"
[GIN] 2026/02/27 - 16:44:08 | 200 | 113.2701ms | 127.0.0.1 | POST "/api/show"
time=2026-02-27T16:44:08.940+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 14367"
time=2026-02-27T16:44:09.104+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-27T16:44:09.104+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2026-02-27T16:44:09.123+08:00 level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee library=CUDA total="15.9 GiB" available="2.2 GiB"
time=2026-02-27T16:44:09.180+08:00 level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-02-27T16:44:09.182+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\LiYong\\.ollama\\models\\blobs\\sha256-2abd0d805943fa113f934d1ae4f2d5a749b5d4fe2a0a9c64b645c1df15868da7 --port 14372"
time=2026-02-27T16:44:09.190+08:00 level=INFO source=sched.go:491 msg="system memory" total="143.7 GiB" free="129.4 GiB" free_swap="135.3 GiB"
time=2026-02-27T16:44:09.191+08:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee library=CUDA available="1.7 GiB" free="2.2 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-27T16:44:09.191+08:00 level=INFO source=server.go:757 msg="loading model" "model layers"=41 requested=-1
time=2026-02-27T16:44:09.221+08:00 level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-27T16:44:09.226+08:00 level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:14372"
time=2026-02-27T16:44:09.234+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:41[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:09.262+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen35moe file_type=Q4_K_M name="" description="" num_tensors=1959 num_key_values=57
load_backend: loaded CPU backend from C:\Users\LiYong\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5070 Ti, compute capability 12.0, VMM: yes, ID: GPU-22e15c16-4a8d-b130-9dda-d5c054898aee
load_backend: loaded CUDA backend from C:\Users\LiYong\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-02-27T16:44:09.340+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-02-27T16:44:09.830+08:00 level=INFO source=server.go:1029 msg="model requires more gpu memory than is currently available, evicting a model to make space" "loaded layers"=2
time=2026-02-27T16:44:09.830+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:Disabled KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="21.9 GiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="277.3 MiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.6 GiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="554.5 MiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="15.8 MiB"
time=2026-02-27T16:44:09.830+08:00 level=INFO source=device.go:272 msg="total memory" size="24.4 GiB"
time=2026-02-27T16:44:09.841+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 14379"
time=2026-02-27T16:44:10.234+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10027"
time=2026-02-27T16:44:10.485+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10032"
time=2026-02-27T16:44:10.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10037"
time=2026-02-27T16:44:10.985+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10042"
time=2026-02-27T16:44:11.235+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10057"
time=2026-02-27T16:44:11.484+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10062"
time=2026-02-27T16:44:11.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10067"
time=2026-02-27T16:44:11.985+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10072"
time=2026-02-27T16:44:12.236+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10077"
time=2026-02-27T16:44:12.485+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10088"
time=2026-02-27T16:44:12.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10101"
time=2026-02-27T16:44:12.985+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10113"
time=2026-02-27T16:44:13.235+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10118"
time=2026-02-27T16:44:13.486+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10123"
time=2026-02-27T16:44:13.734+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10128"
time=2026-02-27T16:44:13.987+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10133"
time=2026-02-27T16:44:14.235+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10138"
time=2026-02-27T16:44:14.486+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10145"
time=2026-02-27T16:44:14.735+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10150"
time=2026-02-27T16:44:14.984+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10155"
time=2026-02-27T16:44:14.993+08:00 level=INFO source=runner.go:464 msg="failure during GPU discovery" OLLAMA_LIBRARY_PATH="[C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\lib\\ollama C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\cuda_v13]" extra_envs=map[] error="failed to finish discovery before timeout"
time=2026-02-27T16:44:14.993+08:00 level=WARN source=runner.go:356 msg="unable to refresh free memory, using old values"
time=2026-02-27T16:44:14.995+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\LiYong\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 10156"
time=2026-02-27T16:44:15.138+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-27T16:44:15.138+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=16
time=2026-02-27T16:44:15.191+08:00 level=INFO source=sched.go:491 msg="system memory" total="143.7 GiB" free="128.6 GiB" free_swap="134.1 GiB"
time=2026-02-27T16:44:15.191+08:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-22e15c16-4a8d-b130-9dda-d5c054898aee library=CUDA available="14.1 GiB" free="14.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-27T16:44:15.191+08:00 level=INFO source=server.go:757 msg="loading model" "model layers"=41 requested=-1
time=2026-02-27T16:44:15.191+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:24(16..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:15.418+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:24(16..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:8 GPULayers:24[ID:GPU-22e15c16-4a8d-b130-9dda-d5c054898aee Layers:24(16..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=ggml.go:494 msg="offloaded 24/41 layers to GPU"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="12.2 GiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="10.0 GiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="990.2 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="660.1 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="589.2 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="630.8 MiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=device.go:272 msg="total memory" size="25.0 GiB"
time=2026-02-27T16:44:15.854+08:00 level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-27T16:44:15.854+08:00 level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-27T16:44:15.855+08:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
[GIN] 2026/02/27 - 16:44:17 | 200 | 1.6005ms | 127.0.0.1 | GET "/api/tags"
time=2026-02-27T16:44:18.612+08:00 level=INFO source=server.go:1388 msg="llama runner started in 9.42 seconds"
CUDA error: invalid argument
  current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438
  cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-02-27T16:44:18.825+08:00 level=ERROR source=server.go:1610 msg="post predict" error="Post \"http://127.0.0.1:14372/completion\": read tcp 127.0.0.1:14377->127.0.0.1:14372: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2026/02/27 - 16:44:18 | 500 | 10.0025406s | 127.0.0.1 | POST "/api/chat"
time=2026-02-27T16:44:19.097+08:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 1"
[GIN] 2026/02/27 - 16:44:47 | 200 | 530.3µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/02/27 - 16:45:17 | 200 | 1.017ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/02/27 - 16:45:47 | 200 | 1.2218ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/02/27 - 16:46:17 | 200 | 1.2302ms | 127.0.0.1 | GET "/api/tags"
```

### OS

Windows

### GPU

Nvidia

### CPU

AMD

### Ollama version

v0.17.4/v0.17.3/v0.17.2/v0.17.1/v0.17.0
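For anyone trying to reproduce the 500 from a script: the sketch below is an illustrative client, not part of the original report's tooling. It assumes the default endpoint `http://127.0.0.1:11434` and uses the failing model tag `qwen3.5:27b-q4_K_M` from the report; it sends two consecutive non-streaming `/api/chat` requests, matching the symptom that the first call works and any subsequent call fails.

```python
# Minimal reproduction sketch for the reported 500 on /api/chat.
# Assumptions: Ollama at the default 127.0.0.1:11434, model tag
# "qwen3.5:27b-q4_K_M" (one of the failing models in this report).
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/chat"


def build_chat_payload(model: str, prompt: str) -> bytes:
    """Build a non-streaming /api/chat request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")


def chat_once(model: str, prompt: str) -> int:
    """Send one chat request; return the HTTP status, or -1 if unreachable."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_chat_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=600) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # 500 on the failing call
    except urllib.error.URLError:
        return -1  # server not running


if __name__ == "__main__":
    # Symptom in this report: attempt 1 returns 200, attempt 2 returns 500.
    for attempt in (1, 2):
        print(f"attempt {attempt}: HTTP {chat_once('qwen3.5:27b-q4_K_M', 'hello')}")
```

Running it twice against an affected install should show the second request coming back as HTTP 500 while the server log prints the `ggml_cuda_cpy` CUDA error above.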
GiteaMirror added the bug label 2026-04-29 09:57:01 -05:00
Reference: github-starred/ollama#55920