[GH-ISSUE #14439] [Windows] ROCm 7.1 / HIP Error on Radeon RX 9000 Series (gfx1200) - CUBLAS_STATUS_INTERNAL_ERROR #35136

Closed
opened 2026-04-22 19:25:01 -05:00 by GiteaMirror · 1 comment

Originally created by @diegoabreu15 on GitHub (Feb 26, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14439

Describe the bug
On Windows 11, with an AMD Radeon RX 9060 XT (gfx1200 architecture) and the ROCm 7.1 / HIP SDK, Ollama fails to offload models fully to the GPU. Even small models such as Gemma 3:4b (4.3 GB) are only partially offloaded (approx. 82% CPU / 18% GPU), or the server crashes with a connection error.

Log Snippets
When running with OLLAMA_DEBUG=1, the following error occurs:
wsarecv: An existing connection was forcibly closed by the remote host.
Internal HIP errors report CUBLAS_STATUS_INTERNAL_ERROR or memory allocation failures despite 8 GB of VRAM being available.
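For reference, the debug run can be reproduced by enabling `OLLAMA_DEBUG=1` before starting the server. This is a hedged sketch in POSIX-shell syntax (e.g. Git Bash); in PowerShell the equivalent is `$env:OLLAMA_DEBUG = "1"`. The restart step is an assumption about how the server picks up its environment.

```shell
# Enable verbose logging so the server emits the backend-selection
# and HIP error details quoted in this report.
export OLLAMA_DEBUG=1
# restart the server with debug logging enabled:
# ollama serve
echo "OLLAMA_DEBUG=$OLLAMA_DEBUG"
```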

Workaround discovered
Setting OLLAMA_VULKAN=1 and HIP_VISIBLE_DEVICES=-1 stabilizes model loading and allows 100% GPU offload, but forfeits the performance benefits of the native ROCm 7.1 path.
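The workaround can be sketched as environment settings for the server process (POSIX-shell syntax; in PowerShell, `$env:OLLAMA_VULKAN = "1"` and `$env:HIP_VISIBLE_DEVICES = "-1"`). Variable names are taken from the report; restarting the server afterwards is an assumption about when Ollama reads its environment.

```shell
# Reporter's workaround: prefer the Vulkan backend and hide all HIP
# devices so the faulty ROCm path is never initialized.
export OLLAMA_VULKAN=1         # enable the Vulkan backend
export HIP_VISIBLE_DEVICES=-1  # hide every HIP/ROCm device
# then restart the server so the runner inherits the new environment:
# ollama serve
echo "OLLAMA_VULKAN=$OLLAMA_VULKAN, HIP_VISIBLE_DEVICES=$HIP_VISIBLE_DEVICES"
```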

Hardware & Software

OS: Windows 11

GPU: AMD Radeon RX 9060 XT (8GB VRAM)

RAM: 32GB

Driver: AMD Software: Adrenalin Edition (Latest)

ROCm/HIP Version: 7.1

Ollama Version: Latest (0.17.0 per the form below)

Relevant log output

time=2026-02-25T18:43:00.828-03:00 level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="8.0 GiB" default_num_ctx=4096

[GIN] 2026/02/25 - 18:45:47 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:45:47 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:45:47 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:45:47 | 200 |      10.906ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:45:48 | 401 |    199.6413ms |       127.0.0.1 | POST     "/api/me"

[GIN] 2026/02/25 - 18:45:48 | 401 |    166.6388ms |       127.0.0.1 | POST     "/api/me"

[GIN] 2026/02/25 - 18:45:48 | 404 |      1.0193ms |       127.0.0.1 | POST     "/api/show"

[GIN] 2026/02/25 - 18:45:52 | 404 |      1.0417ms |       127.0.0.1 | POST     "/api/show"

[GIN] 2026/02/25 - 18:45:52 | 200 |      1.0347ms |       127.0.0.1 | GET      "/api/tags"

time=2026-02-25T18:45:53.377-03:00 level=INFO source=download.go:179 msg="downloading e7b273f96360 in 16 862 MB part(s)"

[GIN] 2026/02/25 - 18:46:22 | 200 |       1.537ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:46:52 | 200 |      1.0207ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:47:22 | 200 |      2.5673ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:47:52 | 200 |       518.9µs |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:48:22 | 200 |      1.0197ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:48:52 | 200 |      1.5398ms |       127.0.0.1 | GET      "/api/tags"

time=2026-02-25T18:49:05.530-03:00 level=INFO source=download.go:179 msg="downloading fa6710a93d78 in 1 7.2 KB part(s)"

time=2026-02-25T18:49:06.903-03:00 level=INFO source=download.go:179 msg="downloading f60356777647 in 1 11 KB part(s)"

time=2026-02-25T18:49:08.285-03:00 level=INFO source=download.go:179 msg="downloading d8ba2f9a17b3 in 1 18 B part(s)"

time=2026-02-25T18:49:09.685-03:00 level=INFO source=download.go:179 msg="downloading 776beb3adb23 in 1 489 B part(s)"

[GIN] 2026/02/25 - 18:49:22 | 200 |       516.8µs |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:49:26 | 200 |         3m33s |       127.0.0.1 | POST     "/api/pull"

[GIN] 2026/02/25 - 18:49:26 | 200 |      7.2178ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:49:26 | 200 |    213.0554ms |       127.0.0.1 | POST     "/api/show"

[GIN] 2026/02/25 - 18:49:26 | 200 |    230.9763ms |       127.0.0.1 | POST     "/api/show"

time=2026-02-25T18:49:26.467-03:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\pc765xt\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 53122"

time=2026-02-25T18:49:26.978-03:00 level=INFO source=cpu_windows.go:148 msg=packages count=1

time=2026-02-25T18:49:26.978-03:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=6 efficiency=0 threads=12

time=2026-02-25T18:49:27.105-03:00 level=INFO source=server.go:247 msg="enabling flash attention"

time=2026-02-25T18:49:27.107-03:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\pc765xt\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\pc765xt\\.ollama\\models\\blobs\\sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 53132"

time=2026-02-25T18:49:27.110-03:00 level=INFO source=sched.go:491 msg="system memory" total="31.9 GiB" free="22.8 GiB" free_swap="21.0 GiB"

time=2026-02-25T18:49:27.110-03:00 level=INFO source=sched.go:498 msg="gpu memory" id=0 library=ROCm available="5.9 GiB" free="6.3 GiB" minimum="457.0 MiB" overhead="0 B"

time=2026-02-25T18:49:27.110-03:00 level=INFO source=server.go:757 msg="loading model" "model layers"=25 requested=-1

time=2026-02-25T18:49:27.139-03:00 level=INFO source=runner.go:1411 msg="starting ollama engine"

time=2026-02-25T18:49:27.147-03:00 level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:53132"

time=2026-02-25T18:49:27.153-03:00 level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"

time=2026-02-25T18:49:27.201-03:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32

load_backend: loaded CPU backend from C:\Users\pc765xt\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll

ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no

ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no

ggml_cuda_init: found 1 ROCm devices:

  Device 0: AMD Radeon RX 9060 XT, gfx1200 (0x1200), VMM: no, Wave Size: 32, ID: 0

load_backend: loaded ROCm backend from C:\Users\pc765xt\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll

time=2026-02-25T18:49:27.245-03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)

HIP Library Path: C:\WINDOWS\SYSTEM32\amdhip64_7.dll

ROCm error: CUBLAS_STATUS_INTERNAL_ERROR

  current device: 0, in function ggml_cuda_op_mul_mat_cublas at C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:1415

  hipblasGemmEx(ctx.cublas_handle(id), HIPBLAS_OP_T, HIPBLAS_OP_N, row_diff, src1_ncols, ne10, &alpha_f32, src0_ptr, HIPBLAS_R_16B, ne00, src1_ptr, HIPBLAS_R_16B, ne10, &beta_f32, dst_bf16.get(), HIPBLAS_R_16B, ldc, HIPBLAS_R_32F, HIPBLAS_GEMM_DEFAULT)

C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:94: ROCm error

[GIN] 2026/02/25 - 18:49:54 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:49:54 | 200 |      2.2667ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:49:59 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:49:59 | 200 |      2.1903ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:50:00 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:50:00 | 200 |      2.0663ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:50:00 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:50:00 | 200 |      3.2627ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:50:06 | 200 |       516.6µs |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/02/25 - 18:50:06 | 200 |      2.6283ms |       127.0.0.1 | GET      "/api/tags"

[GIN] 2026/02/25 - 18:50:36 | 200 |      1.0187ms |       127.0.0.1 | GET      "/api/tags"

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.17.0

GiteaMirror added the bug label 2026-04-22 19:25:01 -05:00

@diegoabreu15 commented on GitHub (Feb 26, 2026):

[rocm.gfx1200.for.rocm.6.2.4-no-optimized.zip](https://github.com/user-attachments/files/25585247/rocm.gfx1200.for.rocm.6.2.4-no-optimized.zip)

Simply replace these files in the Ollama lib and rocblas folders.
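The file swap the commenter describes might look like the following hedged sketch. The install location is an assumption (the default per-user Windows path seen in the log), shown in POSIX-shell syntax as if running under Git Bash; back up the shipped libraries first so the change is reversible.

```shell
# Assumed default install location (falls back to a demo dir if
# LOCALAPPDATA is unset, e.g. outside Windows).
OLLAMA_LIB="${LOCALAPPDATA:-/tmp/demo}/Programs/Ollama/lib/ollama"
mkdir -p "$OLLAMA_LIB" "${OLLAMA_LIB}.bak"
# 1. back up the shipped ROCm libraries:
# cp -r "$OLLAMA_LIB"/rocm "${OLLAMA_LIB}.bak"/
# 2. unpack the replacement gfx1200 files from the attached zip over the
#    lib\ollama and rocblas folders, then restart Ollama:
# unzip rocm.gfx1200.for.rocm.6.2.4-no-optimized.zip -d "$OLLAMA_LIB"
echo "backup dir ready: ${OLLAMA_LIB}.bak"
```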


Reference: github-starred/ollama#35136