[GH-ISSUE #11975] Presence of AMD iGPUs causes discrete GPUs to stop working with ROCm error: invalid device function #7951

Closed
opened 2026-04-12 20:07:19 -05:00 by GiteaMirror · 16 comments

Originally created by @atomskz on GitHub (Aug 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11975

Originally assigned to: @dhiltgen on GitHub.

Issue Description

After a fresh native Windows install of Ollama, two large models (qwen3:30b and gpt-oss:20b) loaded and ran successfully on the GPU. I then tested the integration with AnythingLLM (a RAG application) via the Ollama API. At this stage, the API was responsive, and AnythingLLM could successfully query the running models.
After a short period of testing, the models disappeared from the list (ollama list showed nothing). Subsequent attempts to pull and run any model now fail with the error: "Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error. Check Ollama server logs for details."

The server logs show a critical ROCm error: invalid device function during the model inference step, specifically in the ggml_cuda_mul_mat_q function.
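
To capture the log output below, the server log on a default Windows install can be tailed from %LOCALAPPDATA%\Ollama (the location given in Ollama's troubleshooting docs; adjust if yours is elsewhere):

# Tail the Ollama server log on Windows (PowerShell)
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 50 -Wait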

Additional Context

  • The issue began after successful use and after testing API integration with AnythingLLM.
  • The problem persists across multiple reboots and a full uninstall/reinstall of Ollama (including deleting the C:\Users\<user>\.ollama directory).
  • The log message "one or more GPUs detected that are unable to accurately report free memory" appears, but this was also present during the initial successful runs.
  • The integrated GPU (AMD Radeon(TM) Graphics, gfx1036) is correctly detected and skipped as unsupported. The discrete GPU (gfx1100) is the target.

Relevant log output (server.log)

... [Previous log lines showing successful GPU detection] ...
time=2025-08-20T10:42:00.261+05:00 level=INFO source=sched.go:192 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"
... [Model metadata loaded successfully] ...
ggml_cuda_init: found 2 ROCm devices:
  Device 0: AMD Radeon(TM) Graphics, gfx1036 (0x1036), VMM: no, Wave Size: 32, ID: 0
  Device 1: AMD Radeon RX 7900 XTX, gfx1100 (0x1100), VMM: no, Wave Size: 32, ID: 1
... [Model layers offloaded to GPU 1 (7900 XTX) successfully] ...
llama_context: graph splits = 2
time=2025-08-20T10:42:03.908+05:00 level=INFO source=server.go:1272 msg="llama runner started in 3.18 seconds"
ROCm error: invalid device function
  current device: 1, in function ggml_cuda_mul_mat_q at C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu:129
  hipGetLastError()
C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:84: ROCm error
time=2025-08-20T10:42:04.001+05:00 level=ERROR source=server.go:1442 msg="post predict" error="Post \"http://127.0.0.1:59471/completion\": read tcp 127.0.0.1:59472->127.0.0.1:59471: wsarecv: An existing connection was forcibly closed by the remote host."
time=2025-08-20T10:42:04.182+05:00 level=ERROR source=server.go:409 msg="llama runner terminated" error="exit status 0xc0000409"

Environment

  • Ollama Version: 0.11.5
  • AMD Video Driver Version: 25.8.1
  • Operating System: Windows 11 (native)
  • GPU: AMD Radeon RX 7900 XTX (gfx1100) with 24 GB VRAM
  • System RAM: 64 GB
  • Relevant ollama run command: The issue occurs with any model, e.g., ollama run qwen:4b.
GiteaMirror added the amd and bug labels 2026-04-12 20:07:19 -05:00

@atomskz commented on GitHub (Aug 20, 2025):

Update: Clean OS Test Results

To further isolate the issue, I performed a clean installation of Windows 11 on the same hardware. The results confirm that the problem is consistent and not related to any prior software conflicts on my main system.

Steps taken on the fresh OS:

  1. Installed the latest AMD Adrenalin drivers (version 25.8.1).
  2. Installed the latest version of Ollama (0.11.5) from the official website.
  3. Attempted to run a model immediately after installation (ollama run qwen:4b).

Result:
The model downloads successfully, but the runner crashes with the same ROCm error: invalid device function during initialization. The models never start.

This confirms that the issue is 100% reproducible on a clean system with an AMD RX 7900 XTX (gfx1100) and is not caused by third-party software, previous Ollama installations, or OS corruption. The problem appears to be a fundamental incompatibility between the current Ollama build for Windows and this specific GPU architecture.


@necpa commented on GitHub (Aug 20, 2025):

Additional Confirmation with AMD Radeon RX 7900 XT (gfx1100)

I can confirm a similar issue on a system using an AMD Radeon RX 7900 XT (gfx1100):

  • Ollama version: 0.11.5

  • Driver version: 25.8.1

  • OS: Windows 11 (clean install)

  • GPU: AMD Radeon RX 7900 XT (gfx1100)

  • Symptoms:

    • Clean installation of Ollama

    • Pulled the mistral model successfully

    • On first attempt to run the model, it crashes immediately

    • Model still appears in ollama list

    • Logs show the following error:

      ROCm error: invalid device function
        current device: 1, in function ggml_cuda_op_mul_mat at C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:1758
        hipGetLastError()
      C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:84: ROCm error
      

This confirms that the issue affects other gfx1100 GPUs beyond the RX 7900 XTX, and happens immediately after pulling a model, even before any extended use or API integration. Likely a low-level compatibility issue with ROCm and Ollama’s current CUDA backend on Windows for this GPU architecture.


@dhiltgen commented on GitHub (Aug 20, 2025):

It's possible this may be an interaction between ROCm v6.2 and the latest driver 25.8.1. You might want to try downgrading to 24.8.1 to see if that gets your system back to a working state while we investigate...

https://drivers.amd.com/drivers/whql-amd-software-adrenalin-edition-24.8.1-win10-win11-aug-rdna.exe
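
To double-check which driver version Windows actually ends up running after the swap, a generic PowerShell query works (not Ollama-specific):

# List GPUs and the driver version Windows reports for each
Get-CimInstance Win32_VideoController | Select-Object Name, DriverVersion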


@atomskz commented on GitHub (Aug 20, 2025):

Thank you for the suggestion. I have performed a clean uninstallation of the previous driver (25.8.1) and installed version 24.8.1 as recommended.

Result: Unfortunately, the issue persists. The model still fails to load with the same type of ROCm error: invalid device function.

System Info with downgraded driver:

  • Driver Version: 24.8.1 (as shown by driver=6.1 in the log)
  • GPU: AMD Radeon RX 7900 XTX (gfx1100)
  • Ollama Version: 0.11.5

Thank you for your continued investigation. Please let me know if there is any other information I can provide from my system to help diagnose this compatibility issue.

Screenshot: [image attachment]

server.log after downgrade:
time=2025-08-20T22:27:38.218+03:00 level=INFO source=routes.go:1318 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\ollama-models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-08-20T22:27:38.241+03:00 level=INFO source=images.go:477 msg="total blobs: 5"
time=2025-08-20T22:27:38.241+03:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-20T22:27:38.243+03:00 level=INFO source=routes.go:1371 msg="Listening on 127.0.0.1:11434 (version 0.11.5)"
time=2025-08-20T22:27:38.243+03:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-20T22:27:38.244+03:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-08-20T22:27:38.244+03:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=0 threads=24
time=2025-08-20T22:27:38.789+03:00 level=INFO source=amd_windows.go:127 msg="unsupported Radeon iGPU detected skipping" id=0 total="24.0 GiB"
time=2025-08-20T22:27:39.145+03:00 level=INFO source=types.go:130 msg="inference compute" id=1 library=rocm variant="" compute=gfx1100 driver=6.1 name="AMD Radeon RX 7900 XTX" total="24.0 GiB" available="23.8 GiB"
[GIN] 2025/08/20 - 22:27:39 | 200 |       509.3µs |       127.0.0.1 | GET      "/"
[GIN] 2025/08/20 - 22:27:39 | 200 |      4.6628ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/08/20 - 22:27:39 | 200 |    104.7624ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/08/20 - 22:27:56 | 200 |       523.7µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/08/20 - 22:27:56 | 200 |     70.8627ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/08/20 - 22:27:56 | 200 |      62.653ms |       127.0.0.1 | POST     "/api/show"
time=2025-08-20T22:27:56.903+03:00 level=INFO source=sched.go:192 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"
time=2025-08-20T22:27:57.361+03:00 level=INFO source=server.go:383 msg="starting runner" cmd="C:\\Users\\fedorovn\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model D:\\ollama-models\\blobs\\sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --port 49786"
time=2025-08-20T22:27:57.387+03:00 level=INFO source=runner.go:1006 msg="starting ollama engine"
time=2025-08-20T22:27:57.400+03:00 level=INFO source=runner.go:1043 msg="Server listening on 127.0.0.1:49786"
time=2025-08-20T22:27:57.703+03:00 level=INFO source=server.go:488 msg="system memory" total="63.2 GiB" free="57.3 GiB" free_swap="63.3 GiB"
time=2025-08-20T22:27:57.704+03:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=D:\ollama-models\blobs\sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 library=rocm parallel=1 required="13.9 GiB" gpus=1
time=2025-08-20T22:27:57.704+03:00 level=INFO source=server.go:531 msg=offload library=rocm layers.requested=-1 layers.model=25 layers.offload=25 layers.split=[25] memory.available="[23.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="13.9 GiB" memory.required.partial="13.9 GiB" memory.required.kv="300.0 MiB" memory.required.allocations="[13.9 GiB]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB"
time=2025-08-20T22:27:57.706+03:00 level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:8192 KvCacheType: NumThreads:12 GPULayers:25[ID:1 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-20T22:27:57.744+03:00 level=INFO source=ggml.go:130 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 ROCm devices:
  Device 0: AMD Radeon(TM) Graphics, gfx1036 (0x1036), VMM: no, Wave Size: 32, ID: 0
  Device 1: AMD Radeon RX 7900 XTX, gfx1100 (0x1100), VMM: no, Wave Size: 32, ID: 1
load_backend: loaded ROCm backend from C:\Users\fedorovn\AppData\Local\Programs\Ollama\lib\ollama\ggml-hip.dll
load_backend: loaded CPU backend from C:\Users\fedorovn\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-08-20T22:27:59.267+03:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 ROCm.1.NO_VMM=1 ROCm.1.NO_PEER_COPY=1 ROCm.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-08-20T22:28:00.280+03:00 level=INFO source=ggml.go:486 msg="offloading 24 repeating layers to GPU"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=ggml.go:492 msg="offloading output layer to GPU"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=ggml.go:497 msg="offloaded 25/25 layers to GPU"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=backend.go:310 msg="model weights" device=ROCm1 size="11.8 GiB"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=backend.go:321 msg="kv cache" device=ROCm1 size="300.0 MiB"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=backend.go:332 msg="compute graph" device=ROCm1 size="1.1 GiB"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=backend.go:342 msg="total memory" size="14.2 GiB"
time=2025-08-20T22:28:00.280+03:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-20T22:28:00.280+03:00 level=INFO source=server.go:1234 msg="waiting for llama runner to start responding"
time=2025-08-20T22:28:00.281+03:00 level=INFO source=server.go:1268 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-20T22:28:07.797+03:00 level=INFO source=server.go:1272 msg="llama runner started in 10.44 seconds"
ROCm error: invalid device function
  current device: 1, in function ggml_cuda_op_mul_mat at C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:1758
  hipGetLastError()
C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:84: ROCm error
time=2025-08-20T22:28:08.185+03:00 level=ERROR source=server.go:1442 msg="post predict" error="Post \"http://127.0.0.1:49786/completion\": read tcp 127.0.0.1:49787->127.0.0.1:49786: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/08/20 - 22:28:08 | 200 |   11.7016405s |       127.0.0.1 | POST     "/api/chat"
time=2025-08-20T22:28:08.212+03:00 level=ERROR source=server.go:409 msg="llama runner terminated" error="exit status 0xc0000409"

@DanielKluev commented on GitHub (Aug 21, 2025):

I've had exactly the same problem. It happened in the middle of a long agentic session, when Ollama suddenly decided on its own that it was fine to update to 0.11.5. After that, everything stopped working on gfx1100 GPUs.

Once I reverted Ollama to 0.11.4, everything worked again, so it's certainly something in 0.11.5 that causes the error.
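
For anyone else needing to roll back on Windows, the v0.11.4 installer can be pulled straight from the GitHub release page (assuming the usual OllamaSetup.exe asset name) and reinstalled over the current version:

# Download and run the v0.11.4 Windows installer (asset name assumed from the usual release layout)
curl -L -o OllamaSetup.exe https://github.com/ollama/ollama/releases/download/v0.11.4/OllamaSetup.exe
.\OllamaSetup.exe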


@atomskz commented on GitHub (Aug 21, 2025):

Update: Rollback to v0.11.4 Resolves the Issue

I can confirm that rolling back to Ollama version 0.11.4 (with AMD driver 25.8.1) has successfully resolved the problem.

The model (qwen:4b) now loads and runs correctly on Ollama v0.11.4. Chat functionality via the command line and the API is fully operational.

To provide a complete test case, I also upgraded to the latest version, Ollama v0.11.6. The ROCm error: invalid device function crash still occurs on this version, confirming that the regression was introduced after v0.11.4 and has not yet been fixed.


@gqchen-dz commented on GitHub (Aug 21, 2025):

The same error on a Tesla T4 with Ollama versions 0.11.5 and 0.11.6; version 0.11.4 works fine.


@MStefan99 commented on GitHub (Aug 21, 2025):

Similar error with a fresh Ollama 0.11.6 reinstall and AMD driver 25.8.1 on an AMD Radeon RX 7900 XTX.

Relevant Ollama logs:

time=2025-08-21T11:02:02.130+02:00 level=INFO source=server.go:1268 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-21T11:02:10.668+02:00 level=INFO source=server.go:1272 msg="llama runner started in 17.83 seconds"
ggml_cuda_compute_forward: SCALE failed
ROCm error: invalid device function
  current device: 1, in function ggml_cuda_compute_forward at C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:2563
  err
C:/a/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:84: ROCm error
time=2025-08-21T11:02:10.999+02:00 level=ERROR source=server.go:1442 msg="post predict" error="Post \"http://127.0.0.1:50997/completion\": read tcp 127.0.0.1:50998->127.0.0.1:50997: wsarecv: An existing connection was forcibly closed by the remote host."


@haraldhotzbehofsits commented on GitHub (Aug 21, 2025):

Same observation on my side: 0.11.4 is fine, while 0.11.5 and 0.11.6 fail, at least with ROCm.


@isoscelesjones commented on GitHub (Aug 25, 2025):

Same here: 0.11.4 works, and 0.11.5 and 0.11.6 fail. And I can't prevent 0.11.4 from automatically updating to 0.11.6.


@PRSJMC commented on GitHub (Aug 26, 2025):

Same on a 7800 XT (gfx1101).


@awball0 commented on GitHub (Aug 27, 2025):

Same on a 7900 XTX.


@HaroldStash commented on GitHub (Aug 27, 2025):

Set the environment variables ROCR_VISIBLE_DEVICES=1 and HIP_VISIBLE_DEVICES=1 so the bypass actually skips the iGPU when Ollama/llama.cpp runs. The 7900 XTX works fine on ROCm 6.4. The value is the index of the card you want to run on (0 = iGPU, 1 = the dedicated card on my machine):

Device 0: AMD Radeon(TM) Graphics, gfx1036 (0x1036), VMM: no, Wave Size: 32, ID: 0
Device 1: AMD Radeon RX 7900 XTX, gfx1100 (0x1100), VMM: no, Wave Size: 32, ID: 1

You can confirm the iGPU is being skipped by running hipinfo.
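
For example, on Windows the filter can be persisted for your user account before restarting Ollama (index 1 matches the device list above; verify yours with hipinfo first):

# Persist the device filter (Windows); the new value applies to newly started processes
setx HIP_VISIBLE_DEVICES 1
setx ROCR_VISIBLE_DEVICES 1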


@dhiltgen commented on GitHub (Aug 28, 2025):

We'll get this regression fixed soon to filter out the unsupported iGPUs again. Until then, using HIP_VISIBLE_DEVICES on Windows or ROCR_VISIBLE_DEVICES on Linux as described above will work around the regression.
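
On a standard Linux systemd install, for example, the variable can be added with a drop-in override (a sketch; adjust the unit name if your install differs):

# Add ROCR_VISIBLE_DEVICES=1 to the ollama service via a drop-in, then restart it
sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="ROCR_VISIBLE_DEVICES=1"\n' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama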


@awball0 commented on GitHub (Aug 28, 2025):

Thank you, the HIP_VISIBLE_DEVICES environment variable on Windows worked for me :)


@jzila commented on GitHub (Sep 17, 2025):

FWIW, I get this same crash on Ollama 0.11.10 with the Ryzen AI Max 395+, which has the supported gfx1151 Strix Halo GPU. I've manually set HSA_OVERRIDE_GFX_VERSION to 11.5.1 to get it to recognize the GPU, but then the logs say it doesn't offload any model layers, and the runner crashes as soon as I submit an inference call. I'm trying to use qwen3-coder:30b.
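
For reference, a one-off way to apply that kind of override (a sketch assuming Ollama is run directly from a Linux shell rather than as a service):

# Run the server once with the gfx version override applied
HSA_OVERRIDE_GFX_VERSION=11.5.1 ollama serve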
