[GH-ISSUE #14799] CUDA error on RTX 3060 Laptop 6GB with qwen3.5:4b — fixed by updating from 0.17.1 to 0.17.7 #71619

Closed
opened 2026-05-05 02:15:15 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @Guih42 on GitHub (Mar 12, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14799

What is the issue?

Bug description
Running qwen3.5:4b on an RTX 3060 Laptop GPU (6GB VRAM) causes a CUDA error and a 500 response
when called remotely from another machine on the local network.

Environment

  • OS: Windows 11
  • GPU: NVIDIA GeForce RTX 3060 Laptop GPU (6GB VRAM)
  • Driver: 591.86 / CUDA 13.1
  • Ollama: 0.17.1 (broken) → 0.17.7 (fixed)
  • Model: qwen3.5:4b
  • Setup: Ollama runs on the laptop with OLLAMA_HOST=0.0.0.0:11435 and is called remotely

Steps to reproduce

  1. Install Ollama 0.17.1
  2. Set OLLAMA_HOST=0.0.0.0:11435
  3. Call /api/chat from another machine on the LAN
  4. Observe a CUDA error and a 500 response (see the sketch below)
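A minimal sketch of step 3, runnable from the other machine. The LAN address 192.168.1.50 is a hypothetical placeholder for the laptop; the model and port match the report:

```python
import json
import urllib.request

# Hypothetical LAN address of the laptop. OLLAMA_HOST=0.0.0.0:11435 must be
# set on the laptop before starting Ollama so it listens on the LAN interface.
URL = "http://192.168.1.50:11435/api/chat"

payload = {
    "model": "qwen3.5:4b",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# On 0.17.1 this request came back as HTTP 500 after the runner crashed.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["message"]["content"])
```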

Crash log
CUDA error
ggml-cuda.cu:94
model runner has unexpectedly stopped (status code: 500)

Fix

Updating to 0.17.7 resolved the issue completely. Same model, same hardware, same config.
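
To confirm which version a (possibly remote) instance is actually running, Ollama exposes a version endpoint; a quick check, using the same hypothetical address as above:

```python
import json
import urllib.request

# /api/version returns e.g. {"version": "0.17.7"} after the update.
with urllib.request.urlopen("http://192.168.1.50:11435/api/version", timeout=5) as resp:
    print(json.load(resp)["version"])
```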

Note
The bug did NOT occur when calling Ollama locally (127.0.0.1); it was only
triggered when calling it remotely over the LAN.

Using the non-standard port 11435 (instead of the default 11434) because two machines on the
same LAN both run Ollama: the laptop uses 11435 and the server uses 11434 to avoid
conflicts. The app dynamically detects which Ollama instance to use based on availability
(a sketch of that idea follows below). This port difference is not related to the bug; the
crash happened regardless.
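
For context, that availability detection amounts to probing each candidate endpoint and using the first one that responds. A minimal sketch of the idea; the addresses and helper name are hypothetical, not the app's actual code:

```python
import urllib.error
import urllib.request

# Hypothetical candidates: the laptop on 11435, the server on 11434.
CANDIDATES = [
    "http://192.168.1.50:11435",
    "http://192.168.1.10:11434",
]

def first_available(candidates):
    """Return the first endpoint whose /api/version responds, else None."""
    for base in candidates:
        try:
            with urllib.request.urlopen(f"{base}/api/version", timeout=2):
                return base
        except (urllib.error.URLError, OSError):
            continue
    return None

print(first_available(CANDIDATES))
```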

Relevant log output


Error: listen tcp 0.0.0.0:11435: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
time=2026-03-11T17:41:44.284-04:00 level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11435 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\USER\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-03-11T17:41:44.285-04:00 level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
time=2026-03-11T17:41:44.292-04:00 level=INFO source=images.go:473 msg="total blobs: 51"
time=2026-03-11T17:41:44.293-04:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-03-11T17:41:44.294-04:00 level=INFO source=routes.go:1718 msg="Listening on [::]:11435 (version 0.17.1)"
time=2026-03-11T17:41:44.295-04:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-11T17:41:44.309-04:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\USER\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 42901"
time=2026-03-11T17:41:44.517-04:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\USER\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 42910"
time=2026-03-11T17:41:44.711-04:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\USER\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 42916"
time=2026-03-11T17:41:44.800-04:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-03-11T17:41:44.802-04:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\USER\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 42922"
time=2026-03-11T17:41:44.802-04:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\USER\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 42921"
time=2026-03-11T17:41:44.988-04:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-21f58bd8-9811-7bcf-7a46-f0f7345d0d29 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3060 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="6.0 GiB" available="4.4 GiB"
time=2026-03-11T17:41:44.988-04:00 level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="6.0 GiB" default_num_ctx=4096
[GIN] 2026/03/11 - 17:42:04 | 200 |      4.8694ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/03/11 - 17:42:05 | 200 |    146.8504ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/03/11 - 17:42:05 | 200 |    144.9813ms |       127.0.0.1 | POST     "/api/show"
time=2026-03-11T17:42:05.719-04:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\USER\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 16818"
time=2026-03-11T17:42:05.894-04:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-03-11T17:42:05.894-04:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2026-03-11T17:42:05.894-04:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=14 efficiency=8 threads=20
time=2026-03-11T17:42:05.986-04:00 level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-03-11T17:42:05.986-04:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\USER\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\USER\\.ollama\\models\\blobs\\sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c --port 16823"
time=2026-03-11T17:42:05.989-04:00 level=INFO source=sched.go:491 msg="system memory" total="39.7 GiB" free="25.6 GiB" free_swap="22.4 GiB"
time=2026-03-11T17:42:05.989-04:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-21f58bd8-9811-7bcf-7a46-f0f7345d0d29 library=CUDA available="3.9 GiB" free="4.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-03-11T17:42:05.989-04:00 level=INFO source=server.go:757 msg="loading model" "model layers"=33 requested=-1
time=2026-03-11T17:42:06.024-04:00 level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-03-11T17:42:06.025-04:00 level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:16823"
time=2026-03-11T17:42:06.035-04:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:33[ID:GPU-21f58bd8-9811-7bcf-7a46-f0f7345d0d29 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-11T17:42:06.069-04:00 level=INFO source=ggml.go:136 msg="" architecture=qwen35 file_type=Q4_K_M name="" description="" num_tensors=883 num_key_values=52
load_backend: loaded CPU backend from C:\Users\USER\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6, VMM: yes, ID: GPU-21f58bd8-9811-7bcf-7a46-f0f7345d0d29
load_backend: loaded CUDA backend from C:\Users\USER\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-03-11T17:42:06.155-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-03-11T17:42:06.782-04:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:19[ID:GPU-21f58bd8-9811-7bcf-7a46-f0f7345d0d29 Layers:19(13..31)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-11T17:42:07.109-04:00 level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:19[ID:GPU-21f58bd8-9811-7bcf-7a46-f0f7345d0d29 Layers:19(13..31)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:19[ID:GPU-21f58bd8-9811-7bcf-7a46-f0f7345d0d29 Layers:19(13..31)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=ggml.go:482 msg="offloading 19 repeating layers to GPU"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=ggml.go:494 msg="offloaded 19/33 layers to GPU"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="2.3 GiB"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="3.9 GiB"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="812.8 MiB"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="571.4 MiB"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="756.6 MiB"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="630.8 MiB"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=device.go:272 msg="total memory" size="8.8 GiB"
time=2026-03-11T17:42:07.732-04:00 level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-03-11T17:42:07.732-04:00 level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-03-11T17:42:07.733-04:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-03-11T17:42:08.985-04:00 level=INFO source=server.go:1388 msg="llama runner started in 3.00 seconds"
time=2026-03-11T17:42:09.068-04:00 level=WARN source=runner.go:187 msg="truncating input prompt" limit=4096 prompt=6979 keep=4 new=4096
CUDA error: invalid argument
  current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438
  cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-03-11T17:42:09.291-04:00 level=ERROR source=server.go:1610 msg="post predict" error="Post \"http://127.0.0.1:16823/completion\": read tcp 127.0.0.1:16828->127.0.0.1:16823: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2026/03/11 - 17:42:09 | 500 |    3.6940788s |       127.0.0.1 | POST     "/api/chat"
time=2026-03-11T17:42:09.504-04:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 1"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.17.1

GiteaMirror added the bug label 2026-05-05 02:15:15 -05:00
Author
Owner

@rick-github commented on GitHub (Mar 12, 2026):

[Server logs](https://docs.ollama.com/troubleshooting) will aid in debugging.

Author
Owner

@Guih42 commented on GitHub (Mar 12, 2026):

> [Server logs](https://docs.ollama.com/troubleshooting) will aid in debugging.

Ticket updated. Hope this helps!

Author
Owner

@rick-github commented on GitHub (Mar 12, 2026):

https://github.com/ollama/ollama/issues/14444

Author
Owner

@Guih42 commented on GitHub (Mar 12, 2026):

Resolved in 0.17.5, as issue #14444 mentioned.

Reference: github-starred/ollama#71619