[GH-ISSUE #12893] GPU Crashes with "killed entity" errors on AMD RX 7900 XTX with newer kernels #70603

Closed
opened 2026-05-04 22:13:52 -05:00 by GiteaMirror · 13 comments

Originally created by @DRRDietrich on GitHub (Oct 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12893

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Problem Description
Ollama causes repeated GPU crashes with amdgpu driver errors when running LLM inference. The GPU enters a "killed entity" state, preventing further job submissions. The system remains functional but GPU acceleration fails.

Steps to Reproduce

  1. Start ollama service
  2. Run LLM inference: ollama run any_model (for example ollama run qwen3:30b)
  3. Start conversation
  4. GPU crashes occur after 3 to 10 messages
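Condensed into commands, the reproduction looks roughly like this (a sketch; qwen3:30b stands in for any model, and a systemd install is assumed):

sudo systemctl start ollama
sudo dmesg --follow | grep -i amdgpu   # in a second terminal: watch for amdgpu errors
ollama run qwen3:30b                   # chat; crashes appear after 3 to 10 messages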

Expected Behavior
Ollama should utilize GPU acceleration without causing driver-level crashes or "killed entity" states.

Actual Behavior
GPU repeatedly enters killed state, requiring system reboot to restore GPU functionality. Ollama cannot continue inference after the first crash.

Workaround
Downgrade to Ollama version 0.12.3

Relevant log output

amdgpu: Freeing queue vital buffer 0x7f9bcc200000, queue evicted
[drm:amdgpu_job_submit [amdgpu]] *ERROR* Trying to push to a killed entity

OS

Linux Fedora 42, Kernel 6.16.10-200.fc42.x86_64

GPU

AMD Radeon RX 7900 XTX (Navi 31, RDNA3, gfx1100)
Driver: amdgpu (Mesa)

CPU

AMD Ryzen 5 2600X

Ollama version

0.12.5, 0.12.6, 0.12.7, 0.12.8

GiteaMirror added the amd, bug, gpu labels 2026-05-04 22:13:53 -05:00

@rick-github commented on GitHub (Oct 31, 2025):

Server log (https://docs.ollama.com/troubleshooting) may help debugging.
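For a systemd install, one hedged way to capture that (assuming the default unit name):

sudo systemctl edit ollama        # add: Environment="OLLAMA_DEBUG=1"
sudo systemctl restart ollama
journalctl -u ollama -f           # follow the server log live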


@DRRDietrich commented on GitHub (Nov 2, 2025):

I've switched from the systemd service to a Docker container for a more controlled testing environment.

Working Version: ollama/ollama:0.12.3-rocm
Broken Version: all versions from ollama/ollama:0.12.4-rocm to ollama/ollama:0.12.9-rocm are affected.
(the first ERRORs appear immediately on container start; model inference only makes it worse, especially image processing with qwen3-vl)

Docker Setup

services:
  ollama:
    container_name: ollama
    image: ollama/ollama:0.12.3-rocm
    volumes:
      - ./ollama/models:/root/.ollama/models
    devices:
      - /dev/kfd
      - /dev/dri
    group_add:
      - video
    environment:
      - HSA_OVERRIDE_GFX_VERSION=11.0.0
      - OLLAMA_HOST=0.0.0.0
      - OLLAMA_DEBUG=1
      - HSA_ENABLE_DEBUG=1
      - GPU_DEBUG=1
      - AMD_LOG_LEVEL=4
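
To correlate the container output with the kernel-side errors, a hedged approach is to tail both streams at once:

docker compose up -d
docker logs -f ollama &                # ollama debug output
sudo dmesg --follow --level=err,warn   # amdgpu errors as they land in the kernel log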

Behavior with 0.12.4-rocm to 0.12.9-rocm

  1. Container starts
  2. Kernel errors appear immediately (even before any model inference)
  3. Initial errors allow system to remain responsive
  4. Model inference is possible for a few times
  5. Once multiple [drm:amdgpu_job_submit [amdgpu]] *ERROR* Trying to push to a killed entity messages have accumulated, the entire system freezes
  6. System becomes completely unresponsive; only a reboot via SysRq REISUB works
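
For step 6, the magic SysRq key is restricted by default on many distros; enabling it ahead of time (a sketch, non-persistent across reboots unless written to sysctl.d) keeps the emergency reboot available:

sudo sysctl kernel.sysrq=1   # allow all SysRq functions for this boot

# On a frozen system, hold Alt+SysRq and press, with pauses:
# R (raw keyboard), E (SIGTERM all), I (SIGKILL all),
# S (sync), U (remount read-only), B (reboot)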

The first kernel ERROR message appeared simultaneously with this line in the Docker logs:

ollama  | time=2025-11-02T09:13:37.146Z level=DEBUG source=server.go:401 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_HOST=0.0.0.0 GPU_DEBUG=1 HSA_OVERRIDE_GFX_VERSION=11.0.0 HSA_ENABLE_DEBUG=1 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=600 OLLAMA_NUM_THREAD=12 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm ROCR_VISIBLE_DEVICES=GPU-8db8156041e8c322 GGML_CUDA_INIT=1

System Impact

  • 0.12.3-rocm: Stable, no crashes
  • 0.12.4-rocm: Immediate crashes → multiple errors → complete system freeze requiring hard reboot

This suggests something changed in the ROCm integration or GPU initialization between these two versions that is incompatible with RDNA3 (gfx1100) on recent kernels.
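
The version comparison above was done by hand; a small loop makes the tag bisection repeatable (a sketch reusing the compose settings; the sleep and tail are crude heuristics):

for v in 0.12.3 0.12.4 0.12.5 0.12.6 0.12.7 0.12.8 0.12.9; do
  echo "=== ollama/ollama:$v-rocm ==="
  docker run -d --rm --name ollama-bisect \
    --device /dev/kfd --device /dev/dri --group-add video \
    -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
    ollama/ollama:$v-rocm
  sleep 15
  sudo dmesg | tail -n 5    # look for the first tag that logs amdgpu errors
  docker stop ollama-bisect
done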

Logs

  • dmesg
[drm:amdgpu_job_submit [amdgpu]] *ERROR* Trying to push to a killed entity
  • journalctl
home kernel: [drm:amdgpu_job_submit [amdgpu]] *ERROR* Trying to push to a killed entity
  • docker compose up output with full debug logging
[+] Running 1/1
 ✔ Container ollama  Created                                                                                                                                                                                  0.1s 
Attaching to ollama
ollama  | Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
ollama  | Your new public key is: 
ollama  | 
ollama  | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL3+CoJ2XXKyFqusyYsYnGzkrrtCIMeza7foGpUz8GJ0
ollama  | 
ollama  | time=2025-11-02T09:13:34.548Z level=INFO source=routes.go:1524 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION:11.0.0 HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:10m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama  | time=2025-11-02T09:13:34.548Z level=INFO source=images.go:522 msg="total blobs: 0"
ollama  | time=2025-11-02T09:13:34.549Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
ollama  | time=2025-11-02T09:13:34.549Z level=INFO source=routes.go:1577 msg="Listening on [::]:11434 (version 0.12.9)"
ollama  | time=2025-11-02T09:13:34.549Z level=DEBUG source=sched.go:120 msg="starting llm scheduler"
ollama  | time=2025-11-02T09:13:34.549Z level=INFO source=runner.go:76 msg="discovering available GPUs..."
ollama  | time=2025-11-02T09:13:34.550Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38567"
ollama  | time=2025-11-02T09:13:34.550Z level=DEBUG source=server.go:401 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_HOST=0.0.0.0 GPU_DEBUG=1 HSA_OVERRIDE_GFX_VERSION=11.0.0 HSA_ENABLE_DEBUG=1 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=600 OLLAMA_NUM_THREAD=12 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm
ollama  | time=2025-11-02T09:13:37.146Z level=DEBUG source=runner.go:471 msg="bootstrap discovery took" duration=2.596544229s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs=map[]
ollama  | time=2025-11-02T09:13:37.146Z level=DEBUG source=runner.go:120 msg="evluating which if any devices to filter out" initial_count=1
ollama  | time=2025-11-02T09:13:37.146Z level=DEBUG source=runner.go:132 msg="verifying GPU is supported" library=/usr/lib/ollama/rocm description="AMD Radeon Graphics" compute=gfx1100 id=GPU-8db8156041e8c322 pci_id=0000:10:00.0
ollama  | time=2025-11-02T09:13:37.146Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41407"
ollama  | time=2025-11-02T09:13:37.146Z level=DEBUG source=server.go:401 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_HOST=0.0.0.0 GPU_DEBUG=1 HSA_OVERRIDE_GFX_VERSION=11.0.0 HSA_ENABLE_DEBUG=1 OLLAMA_DEBUG=1 OLLAMA_KEEP_ALIVE=600 OLLAMA_NUM_THREAD=12 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm ROCR_VISIBLE_DEVICES=GPU-8db8156041e8c322 GGML_CUDA_INIT=1
ollama  | time=2025-11-02T09:13:40.970Z level=DEBUG source=runner.go:471 msg="bootstrap discovery took" duration=3.823662125s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:GPU-8db8156041e8c322]"
ollama  | time=2025-11-02T09:13:40.970Z level=DEBUG source=runner.go:41 msg="GPU bootstrap discovery took" duration=6.420638613s
ollama  | time=2025-11-02T09:13:40.970Z level=INFO source=types.go:42 msg="inference compute" id=GPU-8db8156041e8c322 filtered_id="" library=ROCm compute=gfx1100 name=ROCm0 description="AMD Radeon Graphics" libdirs=ollama,rocm driver=60342.13 pci_id=0000:10:00.0 type=discrete total="24.0 GiB" available="23.9 GiB"
[Image attachment: https://github.com/user-attachments/assets/e3f3305a-9d56-4cfe-a600-c44c5b80d701]

@DRRDietrich commented on GitHub (Nov 2, 2025):

After further testing, I've determined this is related to the kernel's amdgpu driver.

Working vs. Broken Kernels

Kernel Version   Status    Notes
6.16.0           Working   Stable, no crashes
6.16.10          Broken    GPU crashes
6.17.5           Broken    GPU crashes
6.18.0           Broken    GPU crashes

The regression was introduced somewhere between kernel 6.16.0 and 6.16.10 in the amdgpu driver.
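
For anyone wanting to pin the exact commit, a kernel bisect limited to the amdgpu tree would be the standard next step (a sketch against the stable tree; each step requires building and booting the candidate kernel):

git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
git bisect start -- drivers/gpu/drm/amd   # restrict to the amdgpu driver
git bisect good v6.16                     # last known-good
git bisect bad v6.16.10                   # first known-bad
# build, boot, test, then repeat:
git bisect good   # or: git bisect bad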

What This Means

  • All Ollama versions work correctly with kernel 6.16.0

Workaround

Downgrade to kernel 6.16.0
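
On Fedora, one hedged way to do that (the NVR below matches the fc43 build cited later in this thread; check what is still archived on Koji):

koji download-build --arch=x86_64 kernel-6.16.0-65.fc43
sudo dnf install ./kernel-*6.16.0*.rpm
sudo grubby --set-default /boot/vmlinuz-6.16.0-65.fc43.x86_64   # boot it by default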

I'm not sure whether this is an Ollama-specific incompatibility with newer kernels or a general kernel/amdgpu driver regression that affects multiple applications.


@AndreiNarv commented on GitHub (Nov 2, 2025):

I am also on Fedora; kernel 6.16.3 was the last one that worked well for me with recent versions of ollama. After that, I have similar problems.


@mahlersand commented on GitHub (Nov 6, 2025):

Also happens with gfx1010 (AMD Radeon RX 5700)


@anemyte commented on GitHub (Nov 6, 2025):

Accidentally stumbled upon this issue while googling the error message. Not sure if it is helpful, but I'll leave this here regardless.
I use a different model (stable diffusion) and software (sdnext), and for me this error happens when the software runs out of RAM and gets killed by the OOM reaper, hence the error message "Trying to push to a killed entity". In my case as well, the GPU recovers only after a reboot, but I didn't really try other options (such as resetting via the /sys driver interface). Kernel 6.17.2-arch1-1.
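
Two quick checks along those lines (hedged; the debugfs node requires root, a mounted debugfs, and may sit under a different card index):

journalctl -k | grep -iE 'out of memory|oom'          # was the runner OOM-killed?
sudo cat /sys/kernel/debug/dri/0/amdgpu_gpu_recover   # trigger amdgpu's reset hook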


@afrolino02 commented on GitHub (Nov 13, 2025):

> Also happens with gfx1010 (AMD Radeon RX 5700)

and RX 6600


@josephtingiris commented on GitHub (Nov 13, 2025):

and RX 7600 XT (gfx1102)

f42
Linux t0 6.17.7-200.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Nov 2 17:43:34 UTC 2025 x86_64 GNU/Linux

ollama:latest
ollama version is 0.12.10-29-g6286d9a

I built it myself, though I did try various versions of prebuilt binaries. It felt like it started after a kernel upgrade, but I'm not sure (yet). This github issue gives me hope & I'm going to try and go back to 6.16.0 next.


@javalsai commented on GitHub (Nov 14, 2025):

I'll add that running it like:

GPU_DEBUG=1 HSA_ENABLE_DEBUG=1 AMD_LOG_LEVEL=4 HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
and
GPU_DEBUG=1 HSA_ENABLE_DEBUG=1 AMD_LOG_LEVEL=4 HSA_OVERRIDE_GFX_VERSION=11.0.0 ollama serve

on an RX 6600 XT not only crashes ollama but causes a complete GPU reset, while HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve "just" causes ollama to break.
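
For context, the RX 6600 XT is Navi 23 (gfx1032), so the usual spoof target for it is 10.3.0 (RDNA2); 11.0.0 is the RDNA3 target and would likely make the runtime emit code the card cannot execute, which fits the harder crash. Worth double-checking what the runtime reports (rocminfo ships with ROCm):

rocminfo | grep -i gfx   # shows the ISA target the runtime actually detected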

Artix Linux 6.17.7-zen1-1-zen.

Several gfxhub page faults in dmesg. Not sure if it's completely ollama's fault, as I've been facing GPU hangs since the last system update, but IIRC ollama was also broken prior to that update.

Near the end of the logs (which, oddly enough, seem partially corrupted) I found:

:3:hip_device_runtime.cpp   :657 : 2362346583 us: [pid:26641 tid: 0x7f8a190196c0]  hipGetDevice ( 0x7f8a1901291c ) 
:3:hip_device_runtime.cpp   :669 : 2362346593 us: [pid:26641 tid: 0x7f8a190196c0] hipGetDevice: Returned hipSuccess : 1
:3:hip_device_runtime.cpp   :688 : 2362346599 us: [pid:26641 tid: 0x7f8a190196c0]  hipSetDevice ( 0 ) 
:3:hip_device_runtime.cpp   :698 : 2362346605 us: [pid:26641 tid: 0x7f8a190196c0] hipSetDevice: Returned hipSuccess : 
:3:hip_stream.cpp           :271 : 2362346611 us: [pid:26641 tid: 0x7f8a190196c0]  hipStreamCreateWithFlags ( 0x7f87a04af380, 1 ) 
:3:rocdevice.cpp            :3048: 2362346620 us: [pid:26641 tid: 0x7f8a190196c0] Number of allocated hardware queues with low priority: 0, with normal priority: 0, with high priority: 0, maximum per priority is: 4
:3:rocdevice.cpp            :3129: 2362352287 us: [pid:26641 tid: 0x7f8a190196c0] Created SWq=0x7f8a18610000 to map on HWq=0x7f8a10200000 with size 16384 with priority 1, cooperative: 0
:3:rocdevice.cpp            :3222: 2362352303 us: [pid:26641 tid: 0x7f8a190196c0] acquireQueue refCount: 0x7f8a10200000 (1)
:4:rocdevice.cpp            :2210: 2362352562 us: [pid:26641 tid: 0x7f8a190196c0] Allocate hsa host memory 0x7f87a9e00000, size 0x100000, numa_node = 0
:3:devprogram.cpp           :2658: 2362611506 us: [pid:26641 tid: 0x7f8a190196c0] Using Code Object V5.
Memory access fault by GPU node-1 (Agent handle: 0x564d35fcec60) on address 0x7f8a18633000. Reason: Page not present or supervisor privilege.

Forgot to mention: ollama version 0.12.10.

I'll add that I randomly got it working with no crashes, and it seemed stable after several runs; it seems to fail randomly when loading the model.

As DRRDietrich suggests it might be the kernel, I'll add that my kernel version before the system update I was talking about was 6.16.9.


@RockingRolli commented on GitHub (Nov 14, 2025):

I have (had) the same issue with my Radeon Pro R9700. I tried different Linux Distributions, all with the same error that you described. So I went back and installed trusty Fedora (43 Server) and tinkered with the Kernel.
It seems like kernel 6.17.8 fixes this issue. I installed this one from Koji: https://koji.fedoraproject.org/koji/buildinfo?buildID=2860212
With that kernel Ollama works again without crashing. dmesg shows a lot of these lines though:

[  407.534311] amdgpu: init_user_pages: Failed to get user pages: -1
[  407.534343] amdgpu: init_user_pages: Failed to get user pages: -1
[  407.535291] amdgpu: init_user_pages: Failed to get user pages: -1
[  407.535311] amdgpu: init_user_pages: Failed to get user pages: -1
...

Welcome to the bleeding edge I guess ;)


@josephtingiris commented on GitHub (Nov 15, 2025):

> I have (had) the same issue with my Radeon Pro R9700. I tried different Linux Distributions, all with the same error that you described. So I went back and installed trusty Fedora (43 Server) and tinkered with the Kernel. It seems like kernel 6.17.8 fixes this issue. I installed this one from Koji: https://koji.fedoraproject.org/koji/buildinfo?buildID=2860212 With that kernel Ollama works again without crashing. dmesg shows a lot of these lines though:
>
> [  407.534311] amdgpu: init_user_pages: Failed to get user pages: -1
> [  407.534343] amdgpu: init_user_pages: Failed to get user pages: -1
> [  407.535291] amdgpu: init_user_pages: Failed to get user pages: -1
> [  407.535311] amdgpu: init_user_pages: Failed to get user pages: -1
> ...
>
> Welcome to the bleeding edge I guess ;)

Yep. I had to go forward before I could go back. I wanted to try 6.16.0 with fc42 but someone had deleted it from koji. It was there for fc43, so I upgraded everything first. Of course, ollama hung and the upgrade didn't finish. Good fun.

On the bright side, I got fc43 working and confirmed 6.16.0-65 works significantly better than every kernel version I've tried after it. I can now also say definitively that whatever started my 'killed entity' messages (and ollama zombies) began after 6.16.0 and before 6.16.3 (the earliest later kernel I've personally tested), so this bug was introduced in 6.16.1 or 6.16.2.

With 6.16.0-65, ollama still core dumps under certain conditions, but at least it's behaving normally in that it's not becoming a defunct/zombie process requiring a reboot. I've already verified that everything up to the official 6.17.7-300 is broken.

This is the koji 6.16.0-65 version that I'm using - https://koji.fedoraproject.org/koji/buildinfo?buildID=2780449

Those 'Failed to get user pages' lines also start appearing only after 6.16.0. With 6.16.0, these messages get emitted at the same point in the workflow instead:

Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f5ae2800000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f5ae3e00000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f5b74400000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f5b9c400000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f6cb1600000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f6cbaa00000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f6cec800000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f6d0a400000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f7044600000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f7045200000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f704e200000, queue evicted
Nov 15 08:18:17 t0 kernel: amdgpu: Freeing queue vital buffer 0x7f704ee00000, queue evicted
...

I'll try 6.17.8 next, though I'm contemplating waiting until it's officially released and just staying on 6.16.0 until then.


@DRRDietrich commented on GitHub (Nov 16, 2025):

I've now also switched to Kernel 6.17.8-300.fc43.x86_64. It seems to be working fine.


@dhiltgen commented on GitHub (Dec 5, 2025):

It sounds like the newer kernel has resolved this, so I'll go ahead and close the issue.

Reference: github-starred/ollama#70603