[GH-ISSUE #14500] Qwen 3.5 35b failing to load while using GPU #71466

Closed
opened 2026-05-05 01:48:42 -05:00 by GiteaMirror · 8 comments

Originally created by @Da-coding-pro on GitHub (Feb 27, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14500

What is the issue?

I have an Nvidia A100. When I try to run `qwen3.5:35b` from Open WebUI, it fails to load, but it works after setting `num_gpu` layers to 0. Strangely, running `ollama run qwen3.5:35b` in Command Prompt worked perfectly.
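For reference, Open WebUI talks to Ollama over its HTTP API, so the working CPU-only configuration corresponds to passing `num_gpu: 0` in the request options. A minimal sketch of the equivalent raw request (default endpoint and a throwaway prompt assumed):

```shell
# Sketch: the same CPU-only workaround sent directly to the Ollama API.
# "num_gpu": 0 disables GPU offload; the endpoint is the default localhost port.
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:35b",
  "prompt": "hello",
  "options": { "num_gpu": 0 }
}'
```

The same option can also be set inside an interactive `ollama run qwen3.5:35b` session with `/set parameter num_gpu 0`.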

Relevant log output


OS

Windows 11

GPU

Nvidia A100

CPU

Intel ULTRA 9

Ollama version

0.17.4

GiteaMirror added the bug label 2026-05-05 01:48:42 -05:00

@rick-github commented on GitHub (Feb 27, 2026):

[Server logs](https://docs.ollama.com/troubleshooting) will aid in debugging.
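For a Windows install like this one, the linked guide describes capturing a verbose server log; a rough sketch of the steps (environment variable and log path as documented, not verified on 0.17.4):

```shell
:: Quit the Ollama tray app, then restart the server with debug logging
:: (OLLAMA_DEBUG is the documented switch for more detailed output).
set OLLAMA_DEBUG=1
ollama serve

:: After reproducing the failure, attach the log file:
::   %LOCALAPPDATA%\Ollama\server.log
```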


@baileikyo commented on GitHub (Feb 27, 2026):

I'm having a similar problem: it runs once, but errors out on the next run. My GPU is a 2080 Ti 22G.

time=2026-02-28T00:05:52.161+08:00 level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:G:\ollama\ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-02-28T00:05:52.171+08:00 level=INFO source=routes.go:1665 msg="Ollama cloud disabled: false"
time=2026-02-28T00:05:52.178+08:00 level=INFO source=images.go:473 msg="total blobs: 25"
time=2026-02-28T00:05:52.180+08:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-02-28T00:05:52.181+08:00 level=INFO source=routes.go:1718 msg="Listening on 127.0.0.1:11434 (version 0.17.4)"
time=2026-02-28T00:05:52.182+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-02-28T00:05:52.237+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="G:\ollama\ollama.exe runner --ollama-engine --port 50365"
time=2026-02-28T00:05:53.236+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="G:\ollama\ollama.exe runner --ollama-engine --port 50374"
time=2026-02-28T00:05:53.497+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="G:\ollama\ollama.exe runner --ollama-engine --port 50383"
time=2026-02-28T00:05:53.867+08:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2026-02-28T00:05:53.868+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="G:\ollama\ollama.exe runner --ollama-engine --port 50392"
time=2026-02-28T00:05:53.868+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="G:\ollama\ollama.exe runner --ollama-engine --port 50393"
time=2026-02-28T00:05:54.175+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-38923bda-0b5a-d36d-568d-53d1f9f5eeac filter_id="" library=CUDA compute=7.5 name=CUDA0 description="NVIDIA GeForce RTX 2080 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="22.0 GiB" available="20.4 GiB"
time=2026-02-28T00:05:54.175+08:00 level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="22.0 GiB" default_num_ctx=4096
[GIN] 2026/02/28 - 00:05:55 | 200 | 0s | 127.0.0.1 | GET "/api/version"
[GIN] 2026/02/28 - 00:05:55 | 200 | 3.3177ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/02/28 - 00:05:58 | 200 | 3.3003ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2026/02/28 - 00:06:02 | 200 | 234.3706ms | 127.0.0.1 | POST "/api/show"
[GIN] 2026/02/28 - 00:06:02 | 200 | 219.461ms | 127.0.0.1 | POST "/api/show"
time=2026-02-28T00:06:02.986+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="G:\ollama\ollama.exe runner --ollama-engine --port 51663"
time=2026-02-28T00:06:03.247+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-28T00:06:03.247+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=6 efficiency=0 threads=12
time=2026-02-28T00:06:03.405+08:00 level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-02-28T00:06:03.407+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="G:\ollama\ollama.exe runner --ollama-engine --model G:\ollama\ollama_models\blobs\sha256-2abd0d805943fa113f934d1ae4f2d5a749b5d4fe2a0a9c64b645c1df15868da7 --port 51671"
time=2026-02-28T00:06:03.431+08:00 level=INFO source=sched.go:491 msg="system memory" total="31.9 GiB" free="13.8 GiB" free_swap="62.1 GiB"
time=2026-02-28T00:06:03.431+08:00 level=INFO source=sched.go:498 msg="gpu memory" id=GPU-38923bda-0b5a-d36d-568d-53d1f9f5eeac library=CUDA available="20.0 GiB" free="20.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-28T00:06:03.431+08:00 level=INFO source=server.go:757 msg="loading model" "model layers"=41 requested=-1
time=2026-02-28T00:06:03.487+08:00 level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-28T00:06:03.498+08:00 level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:51671"
time=2026-02-28T00:06:03.508+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:41[ID:GPU-38923bda-0b5a-d36d-568d-53d1f9f5eeac Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T00:06:03.570+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen35moe file_type=Q4_K_M name="" description="" num_tensors=1959 num_key_values=57
load_backend: loaded CPU backend from G:\ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes, ID: GPU-38923bda-0b5a-d36d-568d-53d1f9f5eeac
load_backend: loaded CUDA backend from G:\ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-02-28T00:06:03.693+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-02-28T00:06:04.624+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:GPU-38923bda-0b5a-d36d-568d-53d1f9f5eeac Layers:35(5..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T00:06:05.125+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:GPU-38923bda-0b5a-d36d-568d-53d1f9f5eeac Layers:35(5..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:GPU-38923bda-0b5a-d36d-568d-53d1f9f5eeac Layers:35(5..39)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=ggml.go:482 msg="offloading 35 repeating layers to GPU"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=ggml.go:494 msg="offloaded 35/41 layers to GPU"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="17.7 GiB"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="4.5 GiB"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.4 GiB"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="217.4 MiB"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="721.4 MiB"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="630.8 MiB"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=device.go:272 msg="total memory" size="25.2 GiB"
time=2026-02-28T00:06:06.081+08:00 level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-28T00:06:06.081+08:00 level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-28T00:06:06.083+08:00 level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
[GIN] 2026/02/28 - 00:06:28 | 200 | 28.0928ms | 127.0.0.1 | GET "/api/tags"
time=2026-02-28T00:06:52.162+08:00 level=INFO source=server.go:1388 msg="llama runner started in 48.73 seconds"
CUDA error: invalid argument
current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438
cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-02-28T00:06:52.917+08:00 level=ERROR source=server.go:1610 msg="post predict" error="Post "http://127.0.0.1:51671/completion": read tcp 127.0.0.1:51679->127.0.0.1:51671: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2026/02/28 - 00:06:52 | 500 | 50.1164571s | 127.0.0.1 | POST "/api/chat"
time=2026-02-28T00:06:53.345+08:00 level=ERROR source=server.go:304 msg="llama runner terminated" error="exit status 1"
[GIN] 2026/02/28 - 00:06:58 | 200 | 11.032ms | 127.0.0.1 | GET "/api/tags"


@rick-github commented on GitHub (Feb 27, 2026):

@baileikyo #14444


@Junesgone commented on GitHub (Feb 28, 2026):

> I'm having a similar problem: it runs once, but errors out on the next run. My GPU is a 2080 Ti 22G.
>
> (quoted server log omitted; identical to the log in the comment above)

I also have a 2080 Ti 22G and hit exactly the same problem.


@baileikyo commented on GitHub (Feb 28, 2026):

> @baileikyo #14444

thanks


@baileikyo commented on GitHub (Feb 28, 2026):

> > I'm having a similar problem: it runs once, but errors out on the next run. My GPU is a 2080 Ti 22G.
> >
> > (quoted server log omitted; identical to the log in the comment above)
>
> I also have a 2080 Ti 22G and hit exactly the same problem.

From the description it sounds like a file needs to be modified; you can also take a look at the reply above for reference.


@Junesgone commented on GitHub (Feb 28, 2026):

Which description? Which file needs to be changed? @baileikyo


@baileikyo commented on GitHub (Mar 1, 2026):

> Which description? Which file needs to be changed? @baileikyo

I took a look; it's pretty complicated. Better to wait for an update to fix it.


Reference: github-starred/ollama#71466