[GH-ISSUE #12773] Failed to load qwen3-coder:30b #34232

Open
opened 2026-04-22 17:39:29 -05:00 by GiteaMirror · 3 comments

Originally created by @minimaster on GitHub (Oct 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12773

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

C:\Users\Alexander>ollama run qwen3-coder:30b
Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:61182/load": read tcp 127.0.0.1:61189->127.0.0.1:61182: wsarecv: An existing connection was forcibly closed by the remote host.

C:\Users\Alexander>ollama serve
time=2025-10-25T00:31:10.554+03:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\Alexander\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-25T00:31:10.575+03:00 level=INFO source=images.go:522 msg="total blobs: 34"
time=2025-10-25T00:31:10.576+03:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-25T00:31:10.577+03:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)"
time=2025-10-25T00:31:10.578+03:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-25T00:31:12.249+03:00 level=INFO source=types.go:112 msg="inference compute" id=0 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon(TM) 8060S Graphics" libdirs=ollama,rocm driver=60450.10 pci_id=c3:00.0 type=iGPU total="32.0 GiB" available="30.0 GiB"
[GIN] 2025/10/25 - 00:31:42 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/25 - 00:31:42 | 200 | 83.0244ms | 127.0.0.1 | POST "/api/show"
time=2025-10-25T00:31:43.171+03:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-25T00:31:43.171+03:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-10-25T00:31:43.171+03:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-25T00:31:43.172+03:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\Users\Alexander\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\Alexander\.ollama\models\blobs\sha256-1194192cf2a187eb02722edcc3f77b11d21f537048ce04b67ccf8ba78863006a --port 61182"
time=2025-10-25T00:31:43.177+03:00 level=INFO source=server.go:676 msg="loading model" "model layers"=49 requested=-1
time=2025-10-25T00:31:43.178+03:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-25T00:31:43.178+03:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-10-25T00:31:43.178+03:00 level=INFO source=server.go:682 msg="system memory" total="31.8 GiB" free="22.0 GiB" free_swap="26.5 GiB"
time=2025-10-25T00:31:43.178+03:00 level=INFO source=server.go:690 msg="gpu memory" id=0 library=ROCm available="29.6 GiB" free="30.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-25T00:31:43.224+03:00 level=INFO source=runner.go:1332 msg="starting ollama engine"
time=2025-10-25T00:31:43.241+03:00 level=INFO source=runner.go:1367 msg="Server listening on 127.0.0.1:61182"
time=2025-10-25T00:31:43.243+03:00 level=INFO source=runner.go:1205 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:16 GPULayers:49[ID:0 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-25T00:31:43.263+03:00 level=INFO source=ggml.go:134 msg="" architecture=qwen3moe file_type=Q4_K_M name="Qwen3 Coder 30B A3B Instruct" description="" num_tensors=579 num_key_values=35
load_backend: loaded CPU backend from C:\Users\Alexander\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon(TM) 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from C:\Users\Alexander\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-10-25T00:31:43.314+03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-25T00:31:44.634+03:00 level=INFO source=runner.go:1205 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:16 GPULayers:49[ID:0 Layers:49(0..48)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-25T00:31:48.939+03:00 level=INFO source=device.go:206 msg="model weights" device=ROCm0 size="17.1 GiB"
time=2025-10-25T00:31:48.939+03:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="166.9 MiB"
time=2025-10-25T00:31:48.939+03:00 level=INFO source=device.go:217 msg="kv cache" device=ROCm0 size="384.0 MiB"
time=2025-10-25T00:31:48.939+03:00 level=INFO source=device.go:228 msg="compute graph" device=ROCm0 size="111.1 MiB"
time=2025-10-25T00:31:48.939+03:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="4.0 MiB"
time=2025-10-25T00:31:48.939+03:00 level=INFO source=device.go:238 msg="total memory" size="17.8 GiB"
time=2025-10-25T00:31:48.939+03:00 level=INFO source=sched.go:450 msg="Load failed" model=C:\Users\Alexander\.ollama\models\blobs\sha256-1194192cf2a187eb02722edcc3f77b11d21f537048ce04b67ccf8ba78863006a error="do load request: Post \"http://127.0.0.1:61182/load\": read tcp 127.0.0.1:61189->127.0.0.1:61182: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/10/25 - 00:31:48 | 500 | 6.3652483s | 127.0.0.1 | POST "/api/generate"

Debug log with AMD_LOG_LEVEL=3, OLLAMA_DEBUG=1:
[ollama.log](https://github.com/user-attachments/files/23135349/ollama.log)

Relevant log output

time=2025-10-25T00:31:48.939+03:00 level=INFO source=sched.go:450 msg="Load failed" model=C:\Users\Alexander\.ollama\models\blobs\sha256-1194192cf2a187eb02722edcc3f77b11d21f537048ce04b67ccf8ba78863006a error="do load request: Post \"http://127.0.0.1:61182/load\": read tcp 127.0.0.1:61189->127.0.0.1:61182: wsarecv: An existing connection was forcibly closed by the remote host."


:3:hip_device_runtime.cpp   :657 : 7384582597 us:   hipGetDevice ( 000000E5CDCFF74C ) 
:3:hip_device_runtime.cpp   :669 : 7384582615 us:  hipGetDevice: Returned hipSuccess : 0
:3:hip_memory.cpp           :2952: 7384582649 us:   hipMemset ( 00000003107E4400, 0, 210 )
Exception 0xc0000005 0x0 0x28 0x7ff835c50ec7
PC=0x7ff835c50ec7
signal arrived during external code execution

runtime.cgocall(0x7ff6155cb680, 0xc000048ce8)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/cgocall.go:167 +0x3e fp=0xc000048cc0 sp=0xc000048c58 pc=0x7ff6148b2d7e
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_alloc_ctx_tensors_from_buft(0x275fa4c5270, 0x7fffe6f4f3a0)
	_cgo_gotypes.go:563 +0x51 fp=0xc000048ce8 sp=0xc000048cc0 pc=0x7ff614d06a31
github.com/ollama/ollama/ml/backend/ggml.New.func23(...)
	C:/a/ollama/ollama/ml/backend/ggml/ggml.go:398

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.12.6

GiteaMirror added the gpu, amd, bug, windows labels 2026-04-22 17:39:30 -05:00

@dhiltgen commented on GitHub (Oct 25, 2025):

From the logs it seems like we're well under the available memory for the iGPU. Perhaps this is a ROCm bug. Does it crash consistently? Could you try setting OLLAMA_DEBUG to 2 to enable additional trace logging and share those logs?
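To capture the requested trace logs on Windows, the environment variables can be set in the same `cmd` session before starting the server. A minimal sketch (Windows `cmd` syntax; the log file name is arbitrary):

```shell
rem Enable Ollama trace logging and verbose HIP logging, then
rem restart the server and redirect everything to a file.
set OLLAMA_DEBUG=2
set AMD_LOG_LEVEL=3
ollama serve > ollama-debug.log 2>&1
```

Reproduce the crash from a second terminal with `ollama run qwen3-coder:30b` and attach `ollama-debug.log`.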

<!-- gh-comment-id:3447782201 -->

@kiliansinger commented on GitHub (Oct 30, 2025):

I had similar problems on NVIDIA and was able to fix them with my PR: https://github.com/ollama/ollama/pull/12856

<!-- gh-comment-id:3470653565 -->

@ttait1 commented on GitHub (Jan 31, 2026):

I had qwen3-coder:30b working fine in Ollama 15.1, but 15.2 gives me this same error. Reverting Ollama to 15.1 resolved it.

Specifically:
$ ollama run hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:IQ4_XS
Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:39801/load": EOF
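For anyone wanting to script the rollback on Linux, the official install script honors an `OLLAMA_VERSION` override (per Ollama's documentation; the exact version string below matches the release the comment above reports as working and may need adjusting):

```shell
# Sketch: pin a specific Ollama release via the official install script.
# OLLAMA_VERSION selects the release to download instead of the latest.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.15.1 sh
```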

<!-- gh-comment-id:3829152577 -->

Reference: github-starred/ollama#34232