[GH-ISSUE #14927] AMD RDNA4 (gfx1201 / RX 9060 XT) not detected – Ollama falls back to CPU (0 VRAM) #71666

Open
opened 2026-05-05 02:17:51 -05:00 by GiteaMirror · 35 comments
Owner

Originally created by @OTAKUWeBer on GitHub (Mar 18, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14927

What is the issue?

Running Ollama on Arch Linux with an AMD RX 9060 XT (gfx1201 / RDNA4). The system correctly detects the GPU via ROCm and Vulkan, but Ollama does not use it for inference and falls back to CPU.


🔍 Relevant Debug Info

ROCm detection:

/opt/rocm/bin/rocminfo | grep Name
Name: gfx1201
Marketing Name: AMD Radeon RX 9060 XT

Vulkan detection:

vulkaninfo | grep deviceName
deviceName = AMD Radeon RX 9060 XT (RADV GFX1200)

Ollama logs:

discovering available GPUs...
inference compute id=cpu library=cpu
total_vram="0 B"

⚙️ Environment Details

  • OS: Arch Linux
  • CPU: AMD Ryzen 5 9600X
  • GPU: AMD Radeon RX 9060 XT (gfx1201 / RDNA4)
  • Ollama version: 0.17.7
  • ROCm installed and functional
  • Vulkan (RADV) working

❗ Issue

Despite proper GPU detection at the system level, Ollama does not initialize or utilize the GPU backend. No VRAM is detected, and inference defaults to CPU.
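One general ROCm troubleshooting step worth ruling out (not confirmed as the cause here): when desktop tools like rocminfo and vulkaninfo see the GPU but a systemd service does not, the service user often lacks access to the GPU device nodes. The user name "ollama" below is an assumption based on typical Arch packaging.

```shell
# ROCm needs read/write access to /dev/kfd and /dev/dri/renderD*.
# The systemd service runs as its own user; check that user's groups
# against the group ownership of the device nodes.
svc_user=ollama                               # assumed service account name
ls -l /dev/kfd /dev/dri/renderD* 2>/dev/null || true
id "$svc_user" 2>/dev/null || echo "user $svc_user not found"
# If render/video group membership is missing:
#   sudo usermod -aG render,video "$svc_user" && sudo systemctl restart ollama
```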


🔁 Notes / Attempts

  • Verified ROCm and Vulkan functionality independently
  • Restarted Ollama service after environment changes
  • Attempted forcing GPU/Vulkan via environment variables (no effect)
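The "server config" line in the logs shows Ollama reading HSA_OVERRIDE_GFX_VERSION and OLLAMA_VULKAN, among others. A hedged sketch of overrides to try; the values are guesses based on the gfx1201 target, and whether any of them helps on RDNA4 is untested:

```shell
# Verbose discovery logging plus two speculative overrides.
export OLLAMA_DEBUG=2                    # log every discovery step
export OLLAMA_VULKAN=1                   # opt into the Vulkan backend, if built in
export HSA_OVERRIDE_GFX_VERSION=12.0.1   # ask ROCm to treat the GPU as gfx1201
# then restart the server, e.g.: sudo systemctl restart ollama
```

For a systemd-managed install these must be set in the unit's environment, not an interactive shell, or the service will never see them.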

❓ Question

Is RDNA4 (gfx12 / gfx1201) currently unsupported?
Are there experimental builds or flags to enable GPU acceleration on newer AMD architectures?

Relevant log output

~
❯ journalctl -u ollama
Mar 17 02:48:47 archlinux systemd[1]: Started Ollama Service.
Mar 17 02:48:47 archlinux ollama[34385]: Couldn't find '/var/lib/ollama/.ollama/id_ed25519'. Generating new private key.
Mar 17 02:48:47 archlinux ollama[34385]: Your new public key is:
Mar 17 02:48:47 archlinux ollama[34385]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO1H1riqQ79f9zQ0je0WsjYQVNwrWixOtF+RewdoBLF4
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.866+06:00 level=INFO source=routes.go:1658 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HI>
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.866+06:00 level=INFO source=routes.go:1660 msg="Ollama cloud disabled: false"
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.866+06:00 level=INFO source=images.go:477 msg="total blobs: 0"
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.866+06:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.866+06:00 level=INFO source=routes.go:1713 msg="Listening on 127.0.0.1:11434 (version 0.17.7)"
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.866+06:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.867+06:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39375"
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.881+06:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver=>
Mar 17 02:48:47 archlinux ollama[34385]: time=2026-03-17T02:48:47.881+06:00 level=INFO source=routes.go:1763 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
Mar 17 02:49:09 archlinux ollama[34385]: [GIN] 2026/03/17 - 02:49:09 | 404 |    3.011987ms |       127.0.0.1 | POST     "/v1/messages?beta=true"
Mar 17 02:49:09 archlinux ollama[34385]: [GIN] 2026/03/17 - 02:49:09 | 404 |    2.930571ms |       127.0.0.1 | POST     "/v1/messages?beta=true"
Mar 17 02:49:16 archlinux ollama[34385]: [GIN] 2026/03/17 - 02:49:16 | 200 |      23.265µs |       127.0.0.1 | HEAD     "/"
Mar 17 02:49:16 archlinux ollama[34385]: [GIN] 2026/03/17 - 02:49:16 | 404 |      74.924µs |       127.0.0.1 | POST     "/api/show"
Mar 17 02:49:18 archlinux ollama[34385]: time=2026-03-17T02:49:18.525+06:00 level=INFO source=download.go:179 msg="downloading ac9bc7a69dab in 16 561 MB part(s)"
Mar 17 03:02:19 archlinux ollama[34385]: time=2026-03-17T03:02:19.406+06:00 level=INFO source=download.go:179 msg="downloading 66b9ea09bd5b in 1 68 B part(s)"
Mar 17 03:02:21 archlinux ollama[34385]: time=2026-03-17T03:02:21.932+06:00 level=INFO source=download.go:179 msg="downloading 1e65450c3067 in 1 1.6 KB part(s)"
Mar 17 03:02:23 archlinux ollama[34385]: time=2026-03-17T03:02:23.653+06:00 level=INFO source=download.go:179 msg="downloading 832dd9e00a68 in 1 11 KB part(s)"
Mar 17 03:02:25 archlinux ollama[34385]: time=2026-03-17T03:02:25.447+06:00 level=INFO source=download.go:179 msg="downloading 0578f229f23a in 1 488 B part(s)"
Mar 17 03:02:30 archlinux ollama[34385]: [GIN] 2026/03/17 - 03:02:30 | 200 |        13m13s |       127.0.0.1 | POST     "/api/pull"
Mar 17 03:02:30 archlinux ollama[34385]: [GIN] 2026/03/17 - 03:02:30 | 200 |   48.775831ms |       127.0.0.1 | POST     "/api/show"
Mar 17 03:02:30 archlinux ollama[34385]: [GIN] 2026/03/17 - 03:02:30 | 200 |   48.776051ms |       127.0.0.1 | POST     "/api/show"
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /var/lib/ollama/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1>
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   1:                               general.type str              = model
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 Coder 14B Instruct
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Coder
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   5:                         general.size_label str              = 14B
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-C...
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 Coder 14B
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-C...
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  12:                               general.tags arr[str,6]       = ["code", "codeqwen", "chat", "qwen", ...
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  22:                          general.file_type u32              = 15
Mar 17 03:02:30 archlinux ollama[34385]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.17.7

GiteaMirror added the bug label 2026-05-05 02:17:51 -05:00

@rick-github commented on GitHub (Mar 18, 2026):

Is ollama-rocm installed?


@OTAKUWeBer commented on GitHub (Mar 18, 2026):

> Is ollama-rocm installed?

Yes.
❯ pacman -Q ollama ollama-rocm
ollama 0.17.7-1
ollama-rocm 0.17.7-1
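Since both packages are installed, a reasonable next check is whether the HIP backend library the runner tries to load actually resolves. The path below is taken from debug logs later in this thread (/usr/lib/ollama/rocm) and may differ per distro; this is a diagnostic sketch, not a confirmed fix.

```shell
# Confirm the HIP backend exists and its ROCm dependencies resolve;
# any "not found" lines from ldd point at a missing or mismatched ROCm runtime.
libdir=/usr/lib/ollama/rocm
if [ -e "$libdir/libggml-hip.so" ]; then
    ldd "$libdir/libggml-hip.so" | grep "not found" || echo "HIP backend deps resolve"
else
    echo "no HIP backend in $libdir"
fi
```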


@boomam commented on GitHub (Mar 20, 2026):

I'm seeing something similar with Strix Halo: the 0.18.x releases don't detect the 8060S or whatever VRAM allocation it has.
Dropping back to 0.17.7 fixes it.
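On Arch, the downgrade the commenter describes can be done from the local package cache. The filenames below are assumptions about what pacman has cached, so list them before acting:

```shell
# List the cached packages for the last working release; the pacman -U
# downgrade and restart are left commented out since filenames may vary.
ver=0.17.7-1
ls /var/cache/pacman/pkg/ollama-${ver}-x86_64.pkg.tar.zst \
   /var/cache/pacman/pkg/ollama-rocm-${ver}-x86_64.pkg.tar.zst 2>/dev/null || true
# sudo pacman -U /var/cache/pacman/pkg/ollama-${ver}-x86_64.pkg.tar.zst \
#                /var/cache/pacman/pkg/ollama-rocm-${ver}-x86_64.pkg.tar.zst
# sudo systemctl restart ollama
```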


@rick-github commented on GitHub (Mar 20, 2026):

0.18.0+ needs ROCm 7+.


@boomam commented on GitHub (Mar 20, 2026):

@rick-github - I can't speak for @OTAKUWeBer of course, but I'm running the latest ROCm release as of today, which from what I see installed is 7.2.70200.


@rick-github commented on GitHub (Mar 20, 2026):

Server logs (https://docs.ollama.com/troubleshooting) will aid in debugging.


@boomam commented on GitHub (Mar 20, 2026):

Exactly the same logs as @OTAKUWeBer showed above, i.e. no GPU detection.

How does Ollama detect the GPU? Via rocm-smi?


@rick-github commented on GitHub (Mar 20, 2026):

Set OLLAMA_DEBUG=2 in the server environment and post the log from start through to the line with inference compute.
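For a systemd-managed install, exporting the variable in a shell is not enough; the usual route is a drop-in for the unit (unit name taken from the journalctl command earlier in the thread). A sketch:

```ini
# /etc/systemd/system/ollama.service.d/debug.conf
# create via: sudo systemctl edit ollama
# then:       sudo systemctl restart ollama
[Service]
Environment=OLLAMA_DEBUG=2
```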


@boomam commented on GitHub (Mar 20, 2026):

time=2026-03-20T17:38:50.314Z level=INFO source=routes.go:1727 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-20T17:38:50.314Z level=INFO source=routes.go:1729 msg="Ollama cloud disabled: false"
time=2026-03-20T17:38:50.320Z level=INFO source=images.go:477 msg="total blobs: 127"
time=2026-03-20T17:38:50.321Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-20T17:38:50.322Z level=INFO source=routes.go:1782 msg="Listening on [::]:11434 (version 0.18.2)"
time=2026-03-20T17:38:50.322Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-03-20T17:38:50.323Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-20T17:38:50.323Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/rocm]" extraEnvs=map[]
time=2026-03-20T17:38:50.324Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37557"
time=2026-03-20T17:38:50.324Z level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_NUM_PARALLEL=1 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=24h LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_DEBUG=2 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm
time=2026-03-20T17:38:50.333Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-03-20T17:38:50.333Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:37557"
time=2026-03-20T17:38:50.340Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-20T17:38:50.340Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-20T17:38:50.340Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.340Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-20T17:38:50.341Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-03-20T17:38:50.349Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/rocm
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
time=2026-03-20T17:38:50.507Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-03-20T17:38:50.507Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=172.024233ms
ggml_hip_get_device_memory searching for device 0000:c2:00.0
ggml_backend_cuda_device_get_memory device 0000:c2:00.0 utilizing AMD specific memory reporting free: 119347310592 total: 119521259520
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=287.461µs
time=2026-03-20T17:38:50.508Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" devices="[{DeviceID:{ID:0 Library:ROCm} Name:ROCm0 Description:Radeon 8060S Graphics FilterID: Integrated:true PCIID:0000:c2:00.0 TotalMemory:119521259520 FreeMemory:119347310592 ComputeMajor:17 ComputeMinor:81 DriverMajor:70226 DriverMinor:1 LibraryPath:[/usr/lib/ollama /usr/lib/ollama/rocm]}]"
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=185.442191ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs=map[]
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/rocm description="Radeon 8060S Graphics" compute=gfx1151 id=0 pci_id=0000:c2:00.0
time=2026-03-20T17:38:50.508Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/rocm]" extraEnvs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:0]"
time=2026-03-20T17:38:50.509Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45923"
time=2026-03-20T17:38:50.509Z level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_NUM_PARALLEL=1 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=24h LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_DEBUG=2 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm GGML_CUDA_INIT=1 ROCR_VISIBLE_DEVICES=0
time=2026-03-20T17:38:50.519Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-03-20T17:38:50.519Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:45923"
time=2026-03-20T17:38:50.520Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-20T17:38:50.520Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-20T17:38:50.520Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-20T17:38:50.521Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-03-20T17:38:50.524Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/rocm
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
ggml_cuda_init: initializing rocBLAS on device 0
SIGSEGV: segmentation violation
PC=0x7772c5cf9170 m=10 sigcode=1 addr=0x34
signal arrived during cgo execution

goroutine 14 gp=0xc000166380 m=10 mp=0xc000100808 [syscall]:
runtime.cgocall(0x57929ca34800, 0xc000048988)
        runtime/cgocall.go:167 +0x4b fp=0xc000048960 sp=0xc000048928 pc=0x57929baa7a6b
github.com/ollama/ollama/ml/backend/ggml/ggml/src._Cfunc_ggml_backend_load_all_from_path(0x5792aaf11990)
        _cgo_gotypes.go:195 +0x3e fp=0xc000048988 sp=0xc000048960 pc=0x57929bf2fa7e
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.func1.1({0xc000042024, 0x14})
        github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml.go:97 +0xf5 fp=0xc000048a20 sp=0xc000048988 pc=0x57929bf2f515
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.func1()
        github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml.go:98 +0x545 fp=0xc000048c98 sp=0xc000048a20 pc=0x57929bf2f365
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.OnceFunc.func2()
        sync/oncefunc.go:27 +0x62 fp=0xc000048ce0 sp=0xc000048c98 pc=0x57929bf2ed42
sync.(*Once).doSlow(0x57929d355660?, 0x57929de67d60?)
        sync/once.go:78 +0xab fp=0xc000048d38 sp=0xc000048ce0 pc=0x57929babd48b
sync.(*Once).Do(0x0?, 0xc000048de0?)
        sync/once.go:69 +0x19 fp=0xc000048d58 sp=0xc000048d38 pc=0x57929babd3b9
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.OnceFunc.func3()
        sync/oncefunc.go:32 +0x2d fp=0xc000048d88 sp=0xc000048d58 pc=0x57929bf2ecad
github.com/ollama/ollama/ml/backend/ggml.init.func1()
        github.com/ollama/ollama/ml/backend/ggml/ggml.go:48 +0x23 fp=0xc000048e18 sp=0xc000048d88 pc=0x57929bf8aaa3
github.com/ollama/ollama/ml/backend/ggml.init.OnceFunc.func2()
        sync/oncefunc.go:27 +0x62 fp=0xc000048e60 sp=0xc000048e18 pc=0x57929bf8a9a2
sync.(*Once).doSlow(0x157929d349cc8?, 0xc00013c728?)
        sync/once.go:78 +0xab fp=0xc000048eb8 sp=0xc000048e60 pc=0x57929babd48b
sync.(*Once).Do(0x57929babd540?, 0x57929de683c4?)
        sync/once.go:69 +0x19 fp=0xc000048ed8 sp=0xc000048eb8 pc=0x57929babd3b9
github.com/ollama/ollama/ml/backend/ggml.init.OnceFunc.func3()
        sync/oncefunc.go:32 +0x2d fp=0xc000048f08 sp=0xc000048ed8 pc=0x57929bf8a90d
github.com/ollama/ollama/ml/backend/ggml.New({0xc000134408, 0x13}, {0x0, 0x20, {0xc000359b00, 0x1, 0x1}, 0x0})
        github.com/ollama/ollama/ml/backend/ggml/ggml.go:147 +0x124 fp=0xc0000497a0 sp=0xc000048f08 pc=0x57929bf949c4
github.com/ollama/ollama/ml.NewBackend({0xc000134408, 0x13}, {0x0, 0x20, {0xc000359b00, 0x1, 0x1}, 0x0})
        github.com/ollama/ollama/ml/backend.go:88 +0x9b fp=0xc0000497f0 sp=0xc0000497a0 pc=0x57929bf3173b
github.com/ollama/ollama/model.New({0xc000134408?, 0x57929d35edd0?}, {0x0, 0x20, {0xc000359b00, 0x1, 0x1}, 0x0})
        github.com/ollama/ollama/model/model.go:114 +0x7e fp=0xc0000498c0 sp=0xc0000497f0 pc=0x57929bfda01e
github.com/ollama/ollama/runner/ollamarunner.(*Server).info(0xc0002610e0, {0x57929d352ec0, 0xc0004db500}, 0xc000045d60?)
        github.com/ollama/ollama/runner/ollamarunner/runner.go:1381 +0x4cc fp=0xc000049ac0 sp=0xc0000498c0 pc=0x57929c0cd30c
github.com/ollama/ollama/runner/ollamarunner.(*Server).info-fm({0x57929d352ec0?, 0xc0004db500?}, 0xc000427b38?)
        <autogenerated>:1 +0x36 fp=0xc000049af0 sp=0xc000049ac0 pc=0x57929c0ce7d6
net/http.HandlerFunc.ServeHTTP(0xc0004e8780?, {0x57929d352ec0?, 0xc0004db500?}, 0x57929bdb2ef6?)
        net/http/server.go:2294 +0x29 fp=0xc000049b18 sp=0xc000049af0 pc=0x57929bdbac89
net/http.(*ServeMux).ServeHTTP(0x57929baa7939?, {0x57929d352ec0, 0xc0004db500}, 0xc0001708c0)
        net/http/server.go:2822 +0x1c4 fp=0xc000049b68 sp=0xc000049b18 pc=0x57929bdbcb84
net/http.serverHandler.ServeHTTP({0xc00025f680?}, {0x57929d352ec0?, 0xc0004db500?}, 0x1?)
        net/http/server.go:3301 +0x8e fp=0xc000049b98 sp=0xc000049b68 pc=0x57929bdda60e
net/http.(*conn).serve(0xc000268510, {0x57929d355698, 0xc00025f590})
        net/http/server.go:2102 +0x625 fp=0xc000049fb8 sp=0xc000049b98 pc=0x57929bdb9185
net/http.(*Server).Serve.gowrap3()
        net/http/server.go:3454 +0x28 fp=0xc000049fe0 sp=0xc000049fb8 pc=0x57929bdbea48
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000049fe8 sp=0xc000049fe0 pc=0x57929bab2e61
created by net/http.(*Server).Serve in goroutine 1
        net/http/server.go:3454 +0x485

goroutine 1 gp=0xc000002380 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00012f778 sp=0xc00012f758 pc=0x57929baaaeee
runtime.netpollblock(0xc00012f7c8?, 0x9ba444a6?, 0x92?)
        runtime/netpoll.go:575 +0xf7 fp=0xc00012f7b0 sp=0xc00012f778 pc=0x57929ba70097
internal/poll.runtime_pollWait(0x7772efac86d0, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc00012f7d0 sp=0xc00012f7b0 pc=0x57929baaa105
internal/poll.(*pollDesc).wait(0xc00016ac00?, 0x900000036?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00012f7f8 sp=0xc00012f7d0 pc=0x57929bb32487
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00016ac00)
        internal/poll/fd_unix.go:620 +0x295 fp=0xc00012f8a0 sp=0xc00012f7f8 pc=0x57929bb37855
net.(*netFD).accept(0xc00016ac00)
        net/fd_unix.go:172 +0x29 fp=0xc00012f958 sp=0xc00012f8a0 pc=0x57929bbaad49
net.(*TCPListener).accept(0xc000359940)
        net/tcpsock_posix.go:159 +0x1b fp=0xc00012f9a8 sp=0xc00012f958 pc=0x57929bbc0c5b
net.(*TCPListener).Accept(0xc000359940)
        net/tcpsock.go:380 +0x30 fp=0xc00012f9d8 sp=0xc00012f9a8 pc=0x57929bbbfb10
net/http.(*onceCloseListener).Accept(0xc000268510?)
        <autogenerated>:1 +0x24 fp=0xc00012f9f0 sp=0xc00012f9d8 pc=0x57929bde6d84
net/http.(*Server).Serve(0xc00015d600, {0x57929d352ce0, 0xc000359940})
        net/http/server.go:3424 +0x30c fp=0xc00012fb20 sp=0xc00012f9f0 pc=0x57929bdbe64c
github.com/ollama/ollama/runner/ollamarunner.Execute({0xc000034080, 0x2, 0x2})
        github.com/ollama/ollama/runner/ollamarunner/runner.go:1447 +0x94e fp=0xc00012fcf0 sp=0xc00012fb20 pc=0x57929c0ce1ee
github.com/ollama/ollama/runner.Execute({0xc000034060?, 0x0?, 0x0?})
        github.com/ollama/ollama/runner/runner.go:18 +0x10e fp=0xc00012fd30 sp=0xc00012fcf0 pc=0x57929c170f0e
github.com/ollama/ollama/cmd.NewCLI.func3(0xc00015d300?, {0x57929cd6a244?, 0x4?, 0x57929cd6a248?})
        github.com/ollama/ollama/cmd/cmd.go:2269 +0x45 fp=0xc00012fd58 sp=0xc00012fd30 pc=0x57929c994b05
github.com/spf13/cobra.(*Command).execute(0xc00026db08, {0xc00025f020, 0x3, 0x3})
        github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc00012fe78 sp=0xc00012fd58 pc=0x57929bc24cdc
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004de908)
        github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc00012ff30 sp=0xc00012fe78 pc=0x57929bc25525
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
        github.com/ollama/ollama/main.go:12 +0x4d fp=0xc00012ff50 sp=0xc00012ff30 pc=0x57929c9965ad
runtime.main()
        runtime/proc.go:283 +0x29d fp=0xc00012ffe0 sp=0xc00012ff50 pc=0x57929ba7771d
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00012ffe8 sp=0xc00012ffe0 pc=0x57929bab2e61

goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000aafa8 sp=0xc0000aaf88 pc=0x57929baaaeee
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.forcegchelper()
        runtime/proc.go:348 +0xb8 fp=0xc0000aafe0 sp=0xc0000aafa8 pc=0x57929ba77a58
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aafe8 sp=0xc0000aafe0 pc=0x57929bab2e61
created by runtime.init.7 in goroutine 1
        runtime/proc.go:336 +0x1a

goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ab780 sp=0xc0000ab760 pc=0x57929baaaeee
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.bgsweep(0xc0000d6000)
        runtime/mgcsweep.go:316 +0xdf fp=0xc0000ab7c8 sp=0xc0000ab780 pc=0x57929ba621ff
runtime.gcenable.gowrap1()
        runtime/mgc.go:204 +0x25 fp=0xc0000ab7e0 sp=0xc0000ab7c8 pc=0x57929ba565e5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ab7e8 sp=0xc0000ab7e0 pc=0x57929bab2e61
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x57929cf84a50?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000abf78 sp=0xc0000abf58 pc=0x57929baaaeee
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.(*scavengerState).park(0x57929dd94e40)
        runtime/mgcscavenge.go:425 +0x49 fp=0xc0000abfa8 sp=0xc0000abf78 pc=0x57929ba5fc49
runtime.bgscavenge(0xc0000d6000)
        runtime/mgcscavenge.go:658 +0x59 fp=0xc0000abfc8 sp=0xc0000abfa8 pc=0x57929ba601d9
runtime.gcenable.gowrap2()
        runtime/mgc.go:205 +0x25 fp=0xc0000abfe0 sp=0xc0000abfc8 pc=0x57929ba56585
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000abfe8 sp=0xc0000abfe0 pc=0x57929bab2e61
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
runtime.gopark(0x1b8?, 0xc000002380?, 0x1?, 0x23?, 0xc0000aa688?)
        runtime/proc.go:435 +0xce fp=0xc0000aa630 sp=0xc0000aa610 pc=0x57929baaaeee
runtime.runfinq()
        runtime/mfinal.go:196 +0x107 fp=0xc0000aa7e0 sp=0xc0000aa630 pc=0x57929ba555a7
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aa7e8 sp=0xc0000aa7e0 pc=0x57929bab2e61
created by runtime.createfing in goroutine 1
        runtime/mfinal.go:166 +0x3d

goroutine 6 gp=0xc0001fc8c0 m=nil [chan receive]:
runtime.gopark(0xc00025da40?, 0xc000590018?, 0x60?, 0xc7?, 0x57929bb918a8?)
        runtime/proc.go:435 +0xce fp=0xc0000ac718 sp=0xc0000ac6f8 pc=0x57929baaaeee
runtime.chanrecv(0xc00003e380, 0x0, 0x1)
        runtime/chan.go:664 +0x445 fp=0xc0000ac790 sp=0xc0000ac718 pc=0x57929ba47085
runtime.chanrecv1(0x0?, 0x0?)
        runtime/chan.go:506 +0x12 fp=0xc0000ac7b8 sp=0xc0000ac790 pc=0x57929ba46c12
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
        runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
        runtime/mgc.go:1799 +0x2f fp=0xc0000ac7e0 sp=0xc0000ac7b8 pc=0x57929ba5978f
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ac7e8 sp=0xc0000ac7e0 pc=0x57929bab2e61
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
        runtime/mgc.go:1794 +0x85

goroutine 7 gp=0xc0001fcc40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000acf38 sp=0xc0000acf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000acfc8 sp=0xc0000acf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000acfe0 sp=0xc0000acfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000acfe8 sp=0xc0000acfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 8 gp=0xc0001fce00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ad738 sp=0xc0000ad718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000ad7c8 sp=0xc0000ad738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000ad7e0 sp=0xc0000ad7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ad7e8 sp=0xc0000ad7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 9 gp=0xc0001fcfc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000adf38 sp=0xc0000adf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000adfc8 sp=0xc0000adf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000adfe0 sp=0xc0000adfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 10 gp=0xc0001fd180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6738 sp=0xc0000a6718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a67c8 sp=0xc0000a6738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a67e0 sp=0xc0000a67c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a67e8 sp=0xc0000a67e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 11 gp=0xc0001fd340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6f38 sp=0xc0000a6f18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a6fc8 sp=0xc0000a6f38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a6fe0 sp=0xc0000a6fc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a6fe8 sp=0xc0000a6fe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050a738 sp=0xc00050a718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050a7c8 sp=0xc00050a738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050a7e0 sp=0xc00050a7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050a7e8 sp=0xc00050a7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050af38 sp=0xc00050af18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050afc8 sp=0xc00050af38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050afe0 sp=0xc00050afc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050afe8 sp=0xc00050afe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 20 gp=0xc000504380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050b738 sp=0xc00050b718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050b7c8 sp=0xc00050b738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050b7e0 sp=0xc00050b7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050b7e8 sp=0xc00050b7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 21 gp=0xc000504540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050bf38 sp=0xc00050bf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050bfc8 sp=0xc00050bf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050bfe0 sp=0xc00050bfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050bfe8 sp=0xc00050bfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 22 gp=0xc000504700 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050c738 sp=0xc00050c718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050c7c8 sp=0xc00050c738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050c7e0 sp=0xc00050c7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050c7e8 sp=0xc00050c7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 23 gp=0xc0005048c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050cf38 sp=0xc00050cf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050cfc8 sp=0xc00050cf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050cfe0 sp=0xc00050cfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050cfe8 sp=0xc00050cfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 24 gp=0xc000504a80 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050d738 sp=0xc00050d718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050d7c8 sp=0xc00050d738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050d7e0 sp=0xc00050d7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050d7e8 sp=0xc00050d7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 25 gp=0xc000504c40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050df38 sp=0xc00050df18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050dfc8 sp=0xc00050df38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050dfe0 sp=0xc00050dfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050dfe8 sp=0xc00050dfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 26 gp=0xc000504e00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000506738 sp=0xc000506718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0005067c8 sp=0xc000506738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0005067e0 sp=0xc0005067c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0005067e8 sp=0xc0005067e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 27 gp=0xc000504fc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000506f38 sp=0xc000506f18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000506fc8 sp=0xc000506f38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000506fe0 sp=0xc000506fc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000506fe8 sp=0xc000506fe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 28 gp=0xc000505180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000507738 sp=0xc000507718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0005077c8 sp=0xc000507738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0005077e0 sp=0xc0005077c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0005077e8 sp=0xc0005077e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 36 gp=0xc000102700 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011b738 sp=0xc00011b718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011b7c8 sp=0xc00011b738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011b7e0 sp=0xc00011b7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011b7e8 sp=0xc00011b7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 37 gp=0xc0001028c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011bf38 sp=0xc00011bf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011bfc8 sp=0xc00011bf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011bfe0 sp=0xc00011bfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 38 gp=0xc000102a80 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011c738 sp=0xc00011c718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011c7c8 sp=0xc00011c738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011c7e0 sp=0xc00011c7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011c7e8 sp=0xc00011c7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 39 gp=0xc000102c40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011cf38 sp=0xc00011cf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011cfc8 sp=0xc00011cf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011cfe0 sp=0xc00011cfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011cfe8 sp=0xc00011cfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 40 gp=0xc000102e00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011d738 sp=0xc00011d718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011d7c8 sp=0xc00011d738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011d7e0 sp=0xc00011d7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011d7e8 sp=0xc00011d7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 41 gp=0xc000102fc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011df38 sp=0xc00011df18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011dfc8 sp=0xc00011df38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011dfe0 sp=0xc00011dfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011dfe8 sp=0xc00011dfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 12 gp=0xc0001fd500 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a7738 sp=0xc0000a7718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a77c8 sp=0xc0000a7738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a77e0 sp=0xc0000a77c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a77e8 sp=0xc0000a77e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 42 gp=0xc000103180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000116738 sp=0xc000116718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001167c8 sp=0xc000116738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001167e0 sp=0xc0001167c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001167e8 sp=0xc0001167e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 43 gp=0xc000103340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000116f38 sp=0xc000116f18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000116fc8 sp=0xc000116f38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000116fe0 sp=0xc000116fc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000116fe8 sp=0xc000116fe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 44 gp=0xc000103500 m=nil [GC worker (idle)]:
runtime.gopark(0x9044f81f4d2?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000117738 sp=0xc000117718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001177c8 sp=0xc000117738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001177e0 sp=0xc0001177c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001177e8 sp=0xc0001177e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 45 gp=0xc0001036c0 m=nil [GC worker (idle)]:
runtime.gopark(0x57929de6a1e0?, 0x1?, 0x43?, 0x57?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000117f38 sp=0xc000117f18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000117fc8 sp=0xc000117f38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000117fe0 sp=0xc000117fc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000117fe8 sp=0xc000117fe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 46 gp=0xc000103880 m=nil [GC worker (idle)]:
runtime.gopark(0x57929de6a1e0?, 0x1?, 0xe8?, 0x4a?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000118738 sp=0xc000118718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001187c8 sp=0xc000118738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001187e0 sp=0xc0001187c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 47 gp=0xc000103a40 m=nil [GC worker (idle)]:
runtime.gopark(0x57929de6a1e0?, 0x1?, 0xea?, 0x75?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000118f38 sp=0xc000118f18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000118fc8 sp=0xc000118f38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000118fe0 sp=0xc000118fc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000118fe8 sp=0xc000118fe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 48 gp=0xc000103c00 m=nil [GC worker (idle)]:
runtime.gopark(0x9044f815775?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000119738 sp=0xc000119718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0001197c8 sp=0xc000119738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0001197e0 sp=0xc0001197c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001197e8 sp=0xc0001197e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 13 gp=0xc0001661c0 m=nil [sync.WaitGroup.Wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0xe0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000bda90 sp=0xc0000bda70 pc=0x57929baaaeee
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.semacquire1(0xc000261198, 0x0, 0x1, 0x0, 0x18)
        runtime/sema.go:188 +0x229 fp=0xc0000bdaf8 sp=0xc0000bda90 pc=0x57929ba8ace9
sync.runtime_SemacquireWaitGroup(0x0?)
        runtime/sema.go:110 +0x25 fp=0xc0000bdb30 sp=0xc0000bdaf8 pc=0x57929baac825
sync.(*WaitGroup).Wait(0xc000261190?)
        sync/waitgroup.go:118 +0x48 fp=0xc0000bdb58 sp=0xc0000bdb30 pc=0x57929babe8c8
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0002610e0, {0x57929d3556d0, 0xc0003bb7c0})
        github.com/ollama/ollama/runner/ollamarunner/runner.go:442 +0x45 fp=0xc0000bdfb8 sp=0xc0000bdb58 pc=0x57929c0c4be5
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap1()
        github.com/ollama/ollama/runner/ollamarunner/runner.go:1424 +0x28 fp=0xc0000bdfe0 sp=0xc0000bdfb8 pc=0x57929c0ce468
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000bdfe8 sp=0xc0000bdfe0 pc=0x57929bab2e61
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        github.com/ollama/ollama/runner/ollamarunner/runner.go:1424 +0x4c9

goroutine 15 gp=0xc000166540 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0xb?)
        runtime/proc.go:435 +0xce fp=0xc000509dd8 sp=0xc000509db8 pc=0x57929baaaeee
runtime.netpollblock(0x57929bace798?, 0x9ba444a6?, 0x92?)
        runtime/netpoll.go:575 +0xf7 fp=0xc000509e10 sp=0xc000509dd8 pc=0x57929ba70097
internal/poll.runtime_pollWait(0x7772efac85b8, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc000509e30 sp=0xc000509e10 pc=0x57929baaa105
internal/poll.(*pollDesc).wait(0xc00016ac80?, 0xc00025f691?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000509e58 sp=0xc000509e30 pc=0x57929bb32487
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00016ac80, {0xc00025f691, 0x1, 0x1})
        internal/poll/fd_unix.go:165 +0x27a fp=0xc000509ef0 sp=0xc000509e58 pc=0x57929bb3377a
net.(*netFD).Read(0xc00016ac80, {0xc00025f691?, 0x0?, 0x0?})
        net/fd_posix.go:55 +0x25 fp=0xc000509f38 sp=0xc000509ef0 pc=0x57929bba8da5
net.(*conn).Read(0xc00013c710, {0xc00025f691?, 0x0?, 0x0?})
        net/net.go:194 +0x45 fp=0xc000509f80 sp=0xc000509f38 pc=0x57929bbb7165
net/http.(*connReader).backgroundRead(0xc00025f680)
        net/http/server.go:690 +0x37 fp=0xc000509fc8 sp=0xc000509f80 pc=0x57929bdb3057
net/http.(*connReader).startBackgroundRead.gowrap2()
        net/http/server.go:686 +0x25 fp=0xc000509fe0 sp=0xc000509fc8 pc=0x57929bdb2f85
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000509fe8 sp=0xc000509fe0 pc=0x57929bab2e61
created by net/http.(*connReader).startBackgroundRead in goroutine 14
        net/http/server.go:686 +0xb6

rax    0x7772c5e73948
rbx    0x7772c072cf40
rcx    0x1
rdx    0x7772c08b3
rdi    0x7772c072cf40
rsi    0x7772cfffc320
rbp    0x7772cfffc320
rsp    0x7772cfffc280
r8     0x7772c00008e0
r9     0x7
r10    0x7772c08b3350
r11    0x76a3e6ed59765bfc
r12    0x0
r13    0x7772c072d428
r14    0x7772c08b3580
r15    0x7772c08b3350
rip    0x7772c5cf9170
rflags 0x10206
cs     0x33
fs     0x0
gs     0x0
time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:462 msg="runner exited" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:0]" code=2
time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" devices=[]
time=2026-03-20T17:38:50.599Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=91.117243ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:0]"
time=2026-03-20T17:38:50.599Z level=DEBUG source=runner.go:153 msg="filtering device which didn't fully initialize" id=0 libdir=/usr/lib/ollama/rocm pci_id=0000:c2:00.0 library=ROCm
time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:183 msg="removing unsupported or overlapping GPU combination" libDir=/usr/lib/ollama/rocm description="Radeon 8060S Graphics" compute=gfx1151 pci_id=0000:c2:00.0
time=2026-03-20T17:38:50.599Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=277.232691ms
time=2026-03-20T17:38:50.600Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="30.6 GiB" available="30.5 GiB"
time=2026-03-20T17:38:50.600Z level=INFO source=routes.go:1832 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096

Looks like it's seeing the GPU at least, but not the VRAM.

Do we know how it detects VRAM/the GPU?
rocm-smi doesn't exist in a v7.x build as far as I can tell, so the host system is stuck on v6.1.2 for just that component.
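One way to cross-check what the kernel driver itself reports, independent of the rocm-smi version, is the amdgpu sysfs interface (a hypothetical diagnostic sketch; the `mem_info_vram_*` files only exist on hosts where a card is driven by amdgpu):

```shell
# Print the VRAM total the amdgpu kernel driver exposes for each card,
# bypassing rocm-smi entirely. Counts how many entries were found.
vram_files=0
for f in /sys/class/drm/card*/device/mem_info_vram_total; do
  [ -e "$f" ] || continue   # glob stays literal when no amdgpu card exists
  echo "$f: $(cat "$f") bytes"
  vram_files=$((vram_files + 1))
done
echo "amdgpu VRAM sysfs entries found: $vram_files"
```

If this reports a sane total while Ollama still logs `total_vram="0 B"`, the gap is in Ollama's discovery path rather than the driver.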

<!-- gh-comment-id:4099868247 --> @boomam commented on GitHub (Mar 20, 2026):

```sh
time=2026-03-20T17:38:50.314Z level=INFO source=routes.go:1727 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-20T17:38:50.314Z level=INFO source=routes.go:1729 msg="Ollama cloud disabled: false"
time=2026-03-20T17:38:50.320Z level=INFO source=images.go:477 msg="total blobs: 127"
time=2026-03-20T17:38:50.321Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-20T17:38:50.322Z level=INFO source=routes.go:1782 msg="Listening on [::]:11434 (version 0.18.2)"
time=2026-03-20T17:38:50.322Z level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-03-20T17:38:50.323Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-20T17:38:50.323Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/rocm]" extraEnvs=map[]
time=2026-03-20T17:38:50.324Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37557"
time=2026-03-20T17:38:50.324Z level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_NUM_PARALLEL=1 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=24h LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_DEBUG=2 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm
time=2026-03-20T17:38:50.333Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-03-20T17:38:50.333Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:37557"
time=2026-03-20T17:38:50.340Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-20T17:38:50.340Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-20T17:38:50.340Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.340Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-20T17:38:50.341Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-20T17:38:50.341Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-03-20T17:38:50.349Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/rocm
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
time=2026-03-20T17:38:50.507Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
time=2026-03-20T17:38:50.507Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
time=2026-03-20T17:38:50.507Z level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=172.024233ms
ggml_hip_get_device_memory searching for device 0000:c2:00.0
ggml_backend_cuda_device_get_memory device 0000:c2:00.0 utilizing AMD specific memory reporting free: 119347310592 total: 119521259520
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=287.461µs
time=2026-03-20T17:38:50.508Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" devices="[{DeviceID:{ID:0 Library:ROCm} Name:ROCm0 Description:Radeon 8060S Graphics FilterID: Integrated:true PCIID:0000:c2:00.0 TotalMemory:119521259520 FreeMemory:119347310592 ComputeMajor:17 ComputeMinor:81 DriverMajor:70226 DriverMinor:1 LibraryPath:[/usr/lib/ollama /usr/lib/ollama/rocm]}]"
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=185.442191ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs=map[]
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-03-20T17:38:50.508Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/rocm description="Radeon 8060S Graphics" compute=gfx1151 id=0 pci_id=0000:c2:00.0
time=2026-03-20T17:38:50.508Z level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs="[/usr/lib/ollama /usr/lib/ollama/rocm]" extraEnvs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:0]"
time=2026-03-20T17:38:50.509Z level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45923"
time=2026-03-20T17:38:50.509Z level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_NUM_PARALLEL=1 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_KEEP_ALIVE=24h LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_DEBUG=2 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm GGML_CUDA_INIT=1 ROCR_VISIBLE_DEVICES=0
time=2026-03-20T17:38:50.519Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-03-20T17:38:50.519Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:45923"
time=2026-03-20T17:38:50.520Z level=DEBUG source=gguf.go:604 msg=general.architecture type=string
time=2026-03-20T17:38:50.520Z level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
time=2026-03-20T17:38:50.520Z level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
time=2026-03-20T17:38:50.520Z level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
time=2026-03-20T17:38:50.521Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2026-03-20T17:38:50.524Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/rocm
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
ggml_cuda_init: initializing rocBLAS on device 0
SIGSEGV: segmentation violation
PC=0x7772c5cf9170 m=10 sigcode=1 addr=0x34
signal arrived during cgo execution

goroutine 14 gp=0xc000166380 m=10 mp=0xc000100808 [syscall]:
runtime.cgocall(0x57929ca34800, 0xc000048988)
        runtime/cgocall.go:167 +0x4b fp=0xc000048960 sp=0xc000048928 pc=0x57929baa7a6b
github.com/ollama/ollama/ml/backend/ggml/ggml/src._Cfunc_ggml_backend_load_all_from_path(0x5792aaf11990)
        _cgo_gotypes.go:195 +0x3e fp=0xc000048988 sp=0xc000048960 pc=0x57929bf2fa7e
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.func1.1({0xc000042024, 0x14})
        github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml.go:97 +0xf5 fp=0xc000048a20 sp=0xc000048988 pc=0x57929bf2f515
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.func1()
        github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml.go:98 +0x545 fp=0xc000048c98 sp=0xc000048a20 pc=0x57929bf2f365
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.OnceFunc.func2()
        sync/oncefunc.go:27 +0x62 fp=0xc000048ce0 sp=0xc000048c98 pc=0x57929bf2ed42
sync.(*Once).doSlow(0x57929d355660?, 0x57929de67d60?)
        sync/once.go:78 +0xab fp=0xc000048d38 sp=0xc000048ce0 pc=0x57929babd48b
sync.(*Once).Do(0x0?, 0xc000048de0?)
        sync/once.go:69 +0x19 fp=0xc000048d58 sp=0xc000048d38 pc=0x57929babd3b9
github.com/ollama/ollama/ml/backend/ggml/ggml/src.init.OnceFunc.func3()
        sync/oncefunc.go:32 +0x2d fp=0xc000048d88 sp=0xc000048d58 pc=0x57929bf2ecad
github.com/ollama/ollama/ml/backend/ggml.init.func1()
        github.com/ollama/ollama/ml/backend/ggml/ggml.go:48 +0x23 fp=0xc000048e18 sp=0xc000048d88 pc=0x57929bf8aaa3
github.com/ollama/ollama/ml/backend/ggml.init.OnceFunc.func2()
        sync/oncefunc.go:27 +0x62 fp=0xc000048e60 sp=0xc000048e18 pc=0x57929bf8a9a2
sync.(*Once).doSlow(0x157929d349cc8?, 0xc00013c728?)
        sync/once.go:78 +0xab fp=0xc000048eb8 sp=0xc000048e60 pc=0x57929babd48b
sync.(*Once).Do(0x57929babd540?, 0x57929de683c4?)
        sync/once.go:69 +0x19 fp=0xc000048ed8 sp=0xc000048eb8 pc=0x57929babd3b9
github.com/ollama/ollama/ml/backend/ggml.init.OnceFunc.func3()
        sync/oncefunc.go:32 +0x2d fp=0xc000048f08 sp=0xc000048ed8 pc=0x57929bf8a90d
github.com/ollama/ollama/ml/backend/ggml.New({0xc000134408, 0x13}, {0x0, 0x20, {0xc000359b00, 0x1, 0x1}, 0x0})
        github.com/ollama/ollama/ml/backend/ggml/ggml.go:147 +0x124 fp=0xc0000497a0 sp=0xc000048f08 pc=0x57929bf949c4
github.com/ollama/ollama/ml.NewBackend({0xc000134408, 0x13}, {0x0, 0x20, {0xc000359b00, 0x1, 0x1}, 0x0})
        github.com/ollama/ollama/ml/backend.go:88 +0x9b fp=0xc0000497f0 sp=0xc0000497a0 pc=0x57929bf3173b
github.com/ollama/ollama/model.New({0xc000134408?, 0x57929d35edd0?}, {0x0, 0x20, {0xc000359b00, 0x1, 0x1}, 0x0})
        github.com/ollama/ollama/model/model.go:114 +0x7e fp=0xc0000498c0 sp=0xc0000497f0 pc=0x57929bfda01e
github.com/ollama/ollama/runner/ollamarunner.(*Server).info(0xc0002610e0, {0x57929d352ec0, 0xc0004db500}, 0xc000045d60?)
        github.com/ollama/ollama/runner/ollamarunner/runner.go:1381 +0x4cc fp=0xc000049ac0 sp=0xc0000498c0 pc=0x57929c0cd30c
github.com/ollama/ollama/runner/ollamarunner.(*Server).info-fm({0x57929d352ec0?, 0xc0004db500?}, 0xc000427b38?)
        <autogenerated>:1 +0x36 fp=0xc000049af0 sp=0xc000049ac0 pc=0x57929c0ce7d6
net/http.HandlerFunc.ServeHTTP(0xc0004e8780?, {0x57929d352ec0?, 0xc0004db500?}, 0x57929bdb2ef6?)
        net/http/server.go:2294 +0x29 fp=0xc000049b18 sp=0xc000049af0 pc=0x57929bdbac89
net/http.(*ServeMux).ServeHTTP(0x57929baa7939?, {0x57929d352ec0, 0xc0004db500}, 0xc0001708c0)
        net/http/server.go:2822 +0x1c4 fp=0xc000049b68 sp=0xc000049b18 pc=0x57929bdbcb84
net/http.serverHandler.ServeHTTP({0xc00025f680?}, {0x57929d352ec0?, 0xc0004db500?}, 0x1?)
        net/http/server.go:3301 +0x8e fp=0xc000049b98 sp=0xc000049b68 pc=0x57929bdda60e
net/http.(*conn).serve(0xc000268510, {0x57929d355698, 0xc00025f590})
        net/http/server.go:2102 +0x625 fp=0xc000049fb8 sp=0xc000049b98 pc=0x57929bdb9185
net/http.(*Server).Serve.gowrap3()
        net/http/server.go:3454 +0x28 fp=0xc000049fe0 sp=0xc000049fb8 pc=0x57929bdbea48
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000049fe8 sp=0xc000049fe0 pc=0x57929bab2e61
created by net/http.(*Server).Serve in goroutine 1
        net/http/server.go:3454 +0x485

goroutine 1 gp=0xc000002380 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00012f778 sp=0xc00012f758 pc=0x57929baaaeee
runtime.netpollblock(0xc00012f7c8?, 0x9ba444a6?, 0x92?)
        runtime/netpoll.go:575 +0xf7 fp=0xc00012f7b0 sp=0xc00012f778 pc=0x57929ba70097
internal/poll.runtime_pollWait(0x7772efac86d0, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc00012f7d0 sp=0xc00012f7b0 pc=0x57929baaa105
internal/poll.(*pollDesc).wait(0xc00016ac00?, 0x900000036?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00012f7f8 sp=0xc00012f7d0 pc=0x57929bb32487
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00016ac00)
        internal/poll/fd_unix.go:620 +0x295 fp=0xc00012f8a0 sp=0xc00012f7f8 pc=0x57929bb37855
net.(*netFD).accept(0xc00016ac00)
        net/fd_unix.go:172 +0x29 fp=0xc00012f958 sp=0xc00012f8a0 pc=0x57929bbaad49
net.(*TCPListener).accept(0xc000359940)
        net/tcpsock_posix.go:159 +0x1b fp=0xc00012f9a8 sp=0xc00012f958 pc=0x57929bbc0c5b
net.(*TCPListener).Accept(0xc000359940)
        net/tcpsock.go:380 +0x30 fp=0xc00012f9d8 sp=0xc00012f9a8 pc=0x57929bbbfb10
net/http.(*onceCloseListener).Accept(0xc000268510?)
        <autogenerated>:1 +0x24 fp=0xc00012f9f0 sp=0xc00012f9d8 pc=0x57929bde6d84
net/http.(*Server).Serve(0xc00015d600, {0x57929d352ce0, 0xc000359940})
        net/http/server.go:3424 +0x30c fp=0xc00012fb20 sp=0xc00012f9f0 pc=0x57929bdbe64c
github.com/ollama/ollama/runner/ollamarunner.Execute({0xc000034080, 0x2, 0x2})
        github.com/ollama/ollama/runner/ollamarunner/runner.go:1447 +0x94e fp=0xc00012fcf0 sp=0xc00012fb20 pc=0x57929c0ce1ee
github.com/ollama/ollama/runner.Execute({0xc000034060?, 0x0?, 0x0?})
        github.com/ollama/ollama/runner/runner.go:18 +0x10e fp=0xc00012fd30 sp=0xc00012fcf0 pc=0x57929c170f0e
github.com/ollama/ollama/cmd.NewCLI.func3(0xc00015d300?, {0x57929cd6a244?, 0x4?, 0x57929cd6a248?})
        github.com/ollama/ollama/cmd/cmd.go:2269 +0x45 fp=0xc00012fd58 sp=0xc00012fd30 pc=0x57929c994b05
github.com/spf13/cobra.(*Command).execute(0xc00026db08, {0xc00025f020, 0x3, 0x3})
        github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc00012fe78 sp=0xc00012fd58 pc=0x57929bc24cdc
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004de908)
        github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc00012ff30 sp=0xc00012fe78 pc=0x57929bc25525
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
        github.com/ollama/ollama/main.go:12 +0x4d fp=0xc00012ff50 sp=0xc00012ff30 pc=0x57929c9965ad
runtime.main()
        runtime/proc.go:283 +0x29d fp=0xc00012ffe0 sp=0xc00012ff50 pc=0x57929ba7771d
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00012ffe8 sp=0xc00012ffe0 pc=0x57929bab2e61

goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000aafa8 sp=0xc0000aaf88 pc=0x57929baaaeee
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.forcegchelper()
        runtime/proc.go:348 +0xb8 fp=0xc0000aafe0 sp=0xc0000aafa8 pc=0x57929ba77a58
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aafe8 sp=0xc0000aafe0 pc=0x57929bab2e61
created by runtime.init.7 in goroutine 1
        runtime/proc.go:336 +0x1a

goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ab780 sp=0xc0000ab760 pc=0x57929baaaeee
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.bgsweep(0xc0000d6000)
        runtime/mgcsweep.go:316 +0xdf fp=0xc0000ab7c8 sp=0xc0000ab780 pc=0x57929ba621ff
runtime.gcenable.gowrap1()
        runtime/mgc.go:204 +0x25 fp=0xc0000ab7e0 sp=0xc0000ab7c8 pc=0x57929ba565e5
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ab7e8 sp=0xc0000ab7e0 pc=0x57929bab2e61
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x57929cf84a50?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000abf78 sp=0xc0000abf58 pc=0x57929baaaeee
runtime.goparkunlock(...)
        runtime/proc.go:441
runtime.(*scavengerState).park(0x57929dd94e40)
        runtime/mgcscavenge.go:425 +0x49 fp=0xc0000abfa8 sp=0xc0000abf78 pc=0x57929ba5fc49
runtime.bgscavenge(0xc0000d6000)
        runtime/mgcscavenge.go:658 +0x59 fp=0xc0000abfc8 sp=0xc0000abfa8 pc=0x57929ba601d9
runtime.gcenable.gowrap2()
        runtime/mgc.go:205 +0x25 fp=0xc0000abfe0 sp=0xc0000abfc8 pc=0x57929ba56585
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000abfe8 sp=0xc0000abfe0 pc=0x57929bab2e61
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
runtime.gopark(0x1b8?, 0xc000002380?, 0x1?, 0x23?, 0xc0000aa688?)
        runtime/proc.go:435 +0xce fp=0xc0000aa630 sp=0xc0000aa610 pc=0x57929baaaeee
runtime.runfinq()
        runtime/mfinal.go:196 +0x107 fp=0xc0000aa7e0 sp=0xc0000aa630 pc=0x57929ba555a7
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000aa7e8 sp=0xc0000aa7e0 pc=0x57929bab2e61
created by runtime.createfing in goroutine 1
        runtime/mfinal.go:166 +0x3d

goroutine 6 gp=0xc0001fc8c0 m=nil [chan receive]:
runtime.gopark(0xc00025da40?, 0xc000590018?, 0x60?, 0xc7?, 0x57929bb918a8?)
        runtime/proc.go:435 +0xce fp=0xc0000ac718 sp=0xc0000ac6f8 pc=0x57929baaaeee
runtime.chanrecv(0xc00003e380, 0x0, 0x1)
        runtime/chan.go:664 +0x445 fp=0xc0000ac790 sp=0xc0000ac718 pc=0x57929ba47085
runtime.chanrecv1(0x0?, 0x0?)
        runtime/chan.go:506 +0x12 fp=0xc0000ac7b8 sp=0xc0000ac790 pc=0x57929ba46c12
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
        runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
        runtime/mgc.go:1799 +0x2f fp=0xc0000ac7e0 sp=0xc0000ac7b8 pc=0x57929ba5978f
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ac7e8 sp=0xc0000ac7e0 pc=0x57929bab2e61
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
        runtime/mgc.go:1794 +0x85

goroutine 7 gp=0xc0001fcc40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000acf38 sp=0xc0000acf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000acfc8 sp=0xc0000acf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000acfe0 sp=0xc0000acfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000acfe8 sp=0xc0000acfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 8 gp=0xc0001fce00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000ad738 sp=0xc0000ad718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000ad7c8 sp=0xc0000ad738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000ad7e0 sp=0xc0000ad7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ad7e8 sp=0xc0000ad7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 9 gp=0xc0001fcfc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000adf38 sp=0xc0000adf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000adfc8 sp=0xc0000adf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000adfe0 sp=0xc0000adfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 10 gp=0xc0001fd180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6738 sp=0xc0000a6718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a67c8 sp=0xc0000a6738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a67e0 sp=0xc0000a67c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a67e8 sp=0xc0000a67e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 11 gp=0xc0001fd340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc0000a6f38 sp=0xc0000a6f18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0000a6fc8 sp=0xc0000a6f38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0000a6fe0 sp=0xc0000a6fc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a6fe8 sp=0xc0000a6fe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050a738 sp=0xc00050a718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050a7c8 sp=0xc00050a738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050a7e0 sp=0xc00050a7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050a7e8 sp=0xc00050a7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050af38 sp=0xc00050af18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050afc8 sp=0xc00050af38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050afe0 sp=0xc00050afc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050afe8 sp=0xc00050afe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 20 gp=0xc000504380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050b738 sp=0xc00050b718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050b7c8 sp=0xc00050b738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050b7e0 sp=0xc00050b7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050b7e8 sp=0xc00050b7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 21 gp=0xc000504540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050bf38 sp=0xc00050bf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050bfc8 sp=0xc00050bf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050bfe0 sp=0xc00050bfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050bfe8 sp=0xc00050bfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 22 gp=0xc000504700 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050c738 sp=0xc00050c718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050c7c8 sp=0xc00050c738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050c7e0 sp=0xc00050c7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050c7e8 sp=0xc00050c7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 23 gp=0xc0005048c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050cf38 sp=0xc00050cf18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050cfc8 sp=0xc00050cf38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050cfe0 sp=0xc00050cfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050cfe8 sp=0xc00050cfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 24 gp=0xc000504a80 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050d738 sp=0xc00050d718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050d7c8 sp=0xc00050d738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050d7e0 sp=0xc00050d7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050d7e8 sp=0xc00050d7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 25 gp=0xc000504c40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00050df38 sp=0xc00050df18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00050dfc8 sp=0xc00050df38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00050dfe0 sp=0xc00050dfc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00050dfe8 sp=0xc00050dfe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 26 gp=0xc000504e00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000506738 sp=0xc000506718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0005067c8 sp=0xc000506738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0005067e0 sp=0xc0005067c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0005067e8 sp=0xc0005067e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 27 gp=0xc000504fc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000506f38 sp=0xc000506f18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc000506fc8 sp=0xc000506f38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc000506fe0 sp=0xc000506fc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000506fe8 sp=0xc000506fe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 28 gp=0xc000505180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc000507738 sp=0xc000507718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc0005077c8 sp=0xc000507738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc0005077e0 sp=0xc0005077c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0005077e8 sp=0xc0005077e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 36 gp=0xc000102700 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:435 +0xce fp=0xc00011b738 sp=0xc00011b718 pc=0x57929baaaeee
runtime.gcBgMarkWorker(0xc00003f5e0)
        runtime/mgc.go:1423 +0xe9 fp=0xc00011b7c8 sp=0xc00011b738 pc=0x57929ba58aa9
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1339 +0x25 fp=0xc00011b7e0 sp=0xc00011b7c8 pc=0x57929ba58985
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00011b7e8 sp=0xc00011b7e0 pc=0x57929bab2e61
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1339 +0x105

goroutine 37 gp=0xc0001028c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00011bf38 sp=0xc00011bf18 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc00011bfc8 sp=0xc00011bf38 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011bfe0 sp=0xc00011bfc8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 38 gp=0xc000102a80 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011c738 sp=0xc00011c718 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc00011c7c8 sp=0xc00011c738 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011c7e0 sp=0xc00011c7c8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011c7e8 sp=0xc00011c7e0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 39 gp=0xc000102c40 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011cf38 sp=0xc00011cf18 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc00011cfc8 sp=0xc00011cf38 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011cfe0 sp=0xc00011cfc8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011cfe8 sp=0xc00011cfe0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 40 gp=0xc000102e00 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc00011d738 sp=0xc00011d718 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc00011d7c8 sp=0xc00011d738 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011d7e0 sp=0xc00011d7c8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011d7e8 sp=0xc00011d7e0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 41 gp=0xc000102fc0 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011df38 sp=0xc00011df18 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc00011dfc8 sp=0xc00011df38 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011dfe0 sp=0xc00011dfc8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011dfe8 sp=0xc00011dfe0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 12 gp=0xc0001fd500 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0000a7738 sp=0xc0000a7718 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc0000a77c8 sp=0xc0000a7738 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0000a77e0 sp=0xc0000a77c8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a77e8 sp=0xc0000a77e0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 42 gp=0xc000103180 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc000116738 sp=0xc000116718 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc0001167c8 sp=0xc000116738 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0001167e0 sp=0xc0001167c8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001167e8 sp=0xc0001167e0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 43 gp=0xc000103340 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000116f38 sp=0xc000116f18 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc000116fc8 sp=0xc000116f38 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000116fe0 sp=0xc000116fc8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000116fe8 sp=0xc000116fe0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 44 gp=0xc000103500 m=nil [GC worker (idle)]: runtime.gopark(0x9044f81f4d2?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000117738 sp=0xc000117718 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc0001177c8 sp=0xc000117738 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0001177e0 sp=0xc0001177c8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001177e8 sp=0xc0001177e0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 45 gp=0xc0001036c0 m=nil [GC worker (idle)]: runtime.gopark(0x57929de6a1e0?, 0x1?, 0x43?, 0x57?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc000117f38 sp=0xc000117f18 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc000117fc8 sp=0xc000117f38 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000117fe0 sp=0xc000117fc8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000117fe8 sp=0xc000117fe0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 46 gp=0xc000103880 m=nil [GC worker (idle)]: runtime.gopark(0x57929de6a1e0?, 0x1?, 0xe8?, 0x4a?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000118738 sp=0xc000118718 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc0001187c8 sp=0xc000118738 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0001187e0 sp=0xc0001187c8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001187e8 sp=0xc0001187e0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 47 gp=0xc000103a40 m=nil [GC worker (idle)]: runtime.gopark(0x57929de6a1e0?, 0x1?, 0xea?, 0x75?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000118f38 sp=0xc000118f18 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc000118fc8 sp=0xc000118f38 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000118fe0 sp=0xc000118fc8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000118fe8 sp=0xc000118fe0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 48 gp=0xc000103c00 m=nil [GC worker (idle)]: runtime.gopark(0x9044f815775?, 0x0?, 0x0?, 0x0?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc000119738 sp=0xc000119718 pc=0x57929baaaeee runtime.gcBgMarkWorker(0xc00003f5e0) runtime/mgc.go:1423 +0xe9 fp=0xc0001197c8 sp=0xc000119738 pc=0x57929ba58aa9 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0001197e0 sp=0xc0001197c8 pc=0x57929ba58985 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0001197e8 sp=0xc0001197e0 pc=0x57929bab2e61 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 13 gp=0xc0001661c0 m=nil [sync.WaitGroup.Wait]: runtime.gopark(0x0?, 0x0?, 0x0?, 0xe0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0000bda90 sp=0xc0000bda70 pc=0x57929baaaeee runtime.goparkunlock(...) runtime/proc.go:441 runtime.semacquire1(0xc000261198, 0x0, 0x1, 0x0, 0x18) runtime/sema.go:188 +0x229 fp=0xc0000bdaf8 sp=0xc0000bda90 pc=0x57929ba8ace9 sync.runtime_SemacquireWaitGroup(0x0?) runtime/sema.go:110 +0x25 fp=0xc0000bdb30 sp=0xc0000bdaf8 pc=0x57929baac825 sync.(*WaitGroup).Wait(0xc000261190?) sync/waitgroup.go:118 +0x48 fp=0xc0000bdb58 sp=0xc0000bdb30 pc=0x57929babe8c8 github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0002610e0, {0x57929d3556d0, 0xc0003bb7c0}) github.com/ollama/ollama/runner/ollamarunner/runner.go:442 +0x45 fp=0xc0000bdfb8 sp=0xc0000bdb58 pc=0x57929c0c4be5 github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap1() github.com/ollama/ollama/runner/ollamarunner/runner.go:1424 +0x28 fp=0xc0000bdfe0 sp=0xc0000bdfb8 pc=0x57929c0ce468 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000bdfe8 sp=0xc0000bdfe0 pc=0x57929bab2e61 created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1 github.com/ollama/ollama/runner/ollamarunner/runner.go:1424 +0x4c9 goroutine 15 gp=0xc000166540 m=nil [IO wait]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0xb?) runtime/proc.go:435 +0xce fp=0xc000509dd8 sp=0xc000509db8 pc=0x57929baaaeee runtime.netpollblock(0x57929bace798?, 0x9ba444a6?, 0x92?) 
runtime/netpoll.go:575 +0xf7 fp=0xc000509e10 sp=0xc000509dd8 pc=0x57929ba70097 internal/poll.runtime_pollWait(0x7772efac85b8, 0x72) runtime/netpoll.go:351 +0x85 fp=0xc000509e30 sp=0xc000509e10 pc=0x57929baaa105 internal/poll.(*pollDesc).wait(0xc00016ac80?, 0xc00025f691?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000509e58 sp=0xc000509e30 pc=0x57929bb32487 internal/poll.(*pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc00016ac80, {0xc00025f691, 0x1, 0x1}) internal/poll/fd_unix.go:165 +0x27a fp=0xc000509ef0 sp=0xc000509e58 pc=0x57929bb3377a net.(*netFD).Read(0xc00016ac80, {0xc00025f691?, 0x0?, 0x0?}) net/fd_posix.go:55 +0x25 fp=0xc000509f38 sp=0xc000509ef0 pc=0x57929bba8da5 net.(*conn).Read(0xc00013c710, {0xc00025f691?, 0x0?, 0x0?}) net/net.go:194 +0x45 fp=0xc000509f80 sp=0xc000509f38 pc=0x57929bbb7165 net/http.(*connReader).backgroundRead(0xc00025f680) net/http/server.go:690 +0x37 fp=0xc000509fc8 sp=0xc000509f80 pc=0x57929bdb3057 net/http.(*connReader).startBackgroundRead.gowrap2() net/http/server.go:686 +0x25 fp=0xc000509fe0 sp=0xc000509fc8 pc=0x57929bdb2f85 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000509fe8 sp=0xc000509fe0 pc=0x57929bab2e61 created by net/http.(*connReader).startBackgroundRead in goroutine 14 net/http/server.go:686 +0xb6 rax 0x7772c5e73948 rbx 0x7772c072cf40 rcx 0x1 rdx 0x7772c08b3 rdi 0x7772c072cf40 rsi 0x7772cfffc320 rbp 0x7772cfffc320 rsp 0x7772cfffc280 r8 0x7772c00008e0 r9 0x7 r10 0x7772c08b3350 r11 0x76a3e6ed59765bfc r12 0x0 r13 0x7772c072d428 r14 0x7772c08b3580 r15 0x7772c08b3350 rip 0x7772c5cf9170 rflags 0x10206 cs 0x33 fs 0x0 gs 0x0 time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:462 msg="runner exited" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:0]" code=2 time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama 
/usr/lib/ollama/rocm]" devices=[]
time=2026-03-20T17:38:50.599Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=91.117243ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="map[GGML_CUDA_INIT:1 ROCR_VISIBLE_DEVICES:0]"
time=2026-03-20T17:38:50.599Z level=DEBUG source=runner.go:153 msg="filtering device which didn't fully initialize" id=0 libdir=/usr/lib/ollama/rocm pci_id=0000:c2:00.0 library=ROCm
time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
time=2026-03-20T17:38:50.599Z level=TRACE source=runner.go:183 msg="removing unsupported or overlapping GPU combination" libDir=/usr/lib/ollama/rocm description="Radeon 8060S Graphics" compute=gfx1151 pci_id=0000:c2:00.0
time=2026-03-20T17:38:50.599Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=277.232691ms
time=2026-03-20T17:38:50.600Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="30.6 GiB" available="30.5 GiB"
time=2026-03-20T17:38:50.600Z level=INFO source=routes.go:1832 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
```

Looks like it's seeing the GPU at least, but not the VRAM.

Do we know how it detects VRAM/the GPU? As far as I can tell, rocm-smi doesn't exist in a v7.x form, so the host system is stuck on v6.1.2 for *just* that component.
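On the "how is VRAM detected" question: one way to cross-check what the kernel driver itself reports, independently of rocm-smi or Ollama's discovery, is to read the amdgpu sysfs counters directly. A minimal sketch, assuming an amdgpu device under `/sys/class/drm` — the sysfs file names are standard amdgpu attributes, but the helper functions are mine:

```python
from pathlib import Path


def read_mem_totals(drm_root="/sys/class/drm"):
    """Collect amdgpu VRAM/GTT totals (bytes) per card from sysfs, if exposed."""
    totals = {}
    for card in Path(drm_root).glob("card[0-9]*"):
        entry = {}
        for kind in ("vram", "gtt"):
            f = card / "device" / f"mem_info_{kind}_total"
            try:
                entry[kind] = int(f.read_text().strip())
            except (OSError, ValueError):
                # File missing or unreadable: not an amdgpu card node.
                continue
        if entry:
            totals[card.name] = entry
    return totals


def human_gib(n):
    """Format a byte count as GiB, matching the style of ollama's log lines."""
    return f"{n / 2**30:.1f} GiB"


if __name__ == "__main__":
    for card, entry in sorted(read_mem_totals().items()):
        for kind, total in sorted(entry.items()):
            print(f"{card} {kind}: {human_gib(total)}")
```

On an APU like Strix Halo, `mem_info_vram_total` may report only the small BIOS-carved VRAM slice while most usable memory shows up under GTT, which could be consistent with a `total_vram="0 B"` reading.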

@rick-github commented on GitHub (Mar 20, 2026):

How did you install ollama?

<!-- gh-comment-id:4099937232 -->

@boomam commented on GitHub (Mar 20, 2026):

It's in a container, `ollama/ollama:0.18.2-rocm`:

```yaml
name: ollama
services:
  ollama:
    image: ollama/ollama:0.18.2-rocm
    container_name: ollama
    restart: always
    ports:
      - "11434:11434"
    volumes:
      - /mnt/llm/ollama:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_NUM_PARALLEL=1
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    networks:
      - lan
```
The stack auto-updates whenever a new release is published; the above config works as-is on the 0.17.x releases.
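Until the regression is resolved, pinning the image tag keeps the auto-updater from pulling a broken release. A minimal sketch — `0.17.7-rocm` here stands in for whatever the last-known-good tag is on a given setup:

```yaml
services:
  ollama:
    # Pin to a known-good release instead of tracking the latest -rocm tag.
    image: ollama/ollama:0.17.7-rocm
```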

<!-- gh-comment-id:4099943039 -->

@rick-github commented on GitHub (Mar 20, 2026):

What do the following output:

```shell
radeontop -d - -l 1
grep . /opt/rocm*/.info/version
sudo update-alternatives --display rocm-smi
rocm-smi -a
uname -rv
docker run --rm --device=/dev/kfd --device=/dev/dri -e OLLAMA_DEBUG=2 ollama/ollama:0.18.2-rocm
```
<!-- gh-comment-id:4100110071 -->

@boomam commented on GitHub (Mar 20, 2026):

`radeontop -d - -l 1`

```
Unknown Radeon card. <= R500 won't work, new cards might.
Dumping to -, line limit 1.
1774030843.205592: bus c2, gpu 0.00%, ee 0.00%, vgt 0.00%, ta 0.00%, tc 0.00%, sx 0.00%, sh 0.00%, spi 0.00%, smx 0.00%, cr 0.00%, sc 0.00%, pa 0.00%, db 0.00%, cb 0.00%, vram 0.15% 147.73mb, gtt 0.09% 14.09mb, mclk 100.00% 1.000ghz, sclk 20.74% 0.601ghz
```

 

`grep . /opt/rocm*/.info/version`

```
/opt/rocm-7.2.0/.info/version:7.2.0
/opt/rocm/.info/version:7.2.0
```

 

`update-alternatives --display rocm-smi`

```
update-alternatives: error: no alternatives for rocm-smi
```

 

`rocm-smi -a`

============================ ROCm System Management Interface ============================
============================== Version of System Component ===============================
Driver version: 6.16.6
==========================================================================================
=========================================== ID ===========================================
GPU[0]          : Device Name:          Strix Halo [Radeon Graphics / Radeon 8050S Graphics / Radeon 8060S Graphics]
GPU[0]          : Device ID:            0x1586
GPU[0]          : Device Rev:           0xc1
GPU[0]          : Subsystem ID:         0x000a
GPU[0]          : GUID:                 3750
==========================================================================================
======================================= Unique ID ========================================
GPU[0]          : Unique ID: N/A
==========================================================================================
========================================= VBIOS ==========================================
GPU[0]          : VBIOS version: 113-STRXLGEN-001
==========================================================================================
====================================== Temperature =======================================
GPU[0]          : Temperature (Sensor edge) (C): 27.0
==========================================================================================
=============================== Current clock frequencies ================================
Exception caught: map::at
GPU[0]          : mclk clock level: 2: (1000Mhz)
GPU[0]          : sclk clock level: 0: (600Mhz)
GPU[0]          : socclk clock level: 0: (600Mhz)
==========================================================================================
=================================== Current Fan Metric ===================================
GPU[0]          : Not supported
==========================================================================================
================================= Show Performance Level =================================
GPU[0]          : Performance Level: auto
==========================================================================================
==================================== OverDrive Level =====================================
GPU[0]          : get_overdrive_level_sclk, Not supported on the given system
==========================================================================================
==================================== OverDrive Level =====================================
GPU[0]          : get_mem_overdrive_level_mclk, Not supported on the given system
==========================================================================================
======================================= Power Cap ========================================
GPU[0]          : get_power_cap, Not supported on the given system
GPU[0]          : Max Graphics Package Power Unsupported
==========================================================================================
================================== Show Power Profiles ===================================
GPU[0]          : get_power_profiles, Not supported on the given system
==========================================================================================
=================================== Power Consumption ====================================
GPU[0]          : Current Socket Graphics Package Power (W): 6.02
==========================================================================================
============================== Supported clock frequencies ===============================
GPU[0]          : Clock [dcefclk] on device [0] exists but EMPTY! Likely driver error!
GPU[0]          : Clock [fclk] on device [0] exists but EMPTY! Likely driver error!
GPU[0]          : Supported mclk frequencies on GPU0
GPU[0]          : 0: 400Mhz
GPU[0]          : 1: 800Mhz
GPU[0]          : 2: 1000Mhz *
GPU[0]          : 
GPU[0]          : Supported sclk frequencies on GPU0
GPU[0]          : 0: 600Mhz *
GPU[0]          : 1: 1100Mhz
GPU[0]          : 2: 2900Mhz
GPU[0]          : 
GPU[0]          : Supported socclk frequencies on GPU0
GPU[0]          : 0: 600Mhz *
GPU[0]          : 1: 736Mhz
GPU[0]          : 2: 883Mhz
GPU[0]          : 3: 981Mhz
GPU[0]          : 4: 1104Mhz
GPU[0]          : 5: 1261Mhz
GPU[0]          : 6: 1472Mhz
GPU[0]          : 7: 1472Mhz
GPU[0]          : 
------------------------------------------------------------------------------------------
==========================================================================================
=================================== % time GPU is busy ===================================
GPU[0]          : GPU use (%): 0
==========================================================================================
=================================== Current Memory Use ===================================
GPU[0]          : GPU Memory Allocated (VRAM%): 0
GPU[0]          : % memory use, Not supported on the given system
GPU[0]          : Memory Activity: N/A
GPU[0]          : Not supported on the given system
==========================================================================================
===================================== Memory Vendor ======================================
GPU[0]          : get_vram_vendor, Not supported on the given system
==========================================================================================
================================== PCIe Replay Counter ===================================
GPU[0]          : PCIe Replay Count, Not supported on the given system
==========================================================================================
===================================== Serial Number ======================================
GPU[0]          : get_serial_number, Not supported on the given system
GPU[0]          : Serial Number: N/A
==========================================================================================
===================================== KFD Processes ======================================
No KFD PIDs currently running
==========================================================================================
================================== GPUs Indexed by PID ===================================
No KFD PIDs currently running
==========================================================================================
======================= GPU Memory clock frequencies and voltages ========================
GPU[0]          : OD_SCLK:
GPU[0]          : 0: 600Mhz
GPU[0]          : 1: 2900Mhz
GPU[0]          : OD_MCLK:
GPU[0]          : 0: 0Mhz
GPU[0]          : 1: 0Mhz
==========================================================================================
==================================== Current voltage =====================================
GPU[0]          : Voltage (mV): 0
==========================================================================================
======================================= PCI Bus ID =======================================
GPU[0]          : PCI Bus: 0000:C2:00.0
==========================================================================================
================================== Firmware Information ==================================
GPU[0]          : ASD firmware version:         0x210000fc
GPU[0]          : get_firmware_version_CE, Not supported on the given system
GPU[0]          : get_firmware_version_DMCU, Not supported on the given system
GPU[0]          : get_firmware_version_MC, Not supported on the given system
GPU[0]          : ME firmware version:          32
GPU[0]          : MEC firmware version:         32
GPU[0]          : get_firmware_version_MEC2, Not supported on the given system
GPU[0]          : MES firmware version:         0x00000080
GPU[0]          : MES KIQ firmware version:     0x0000006f
GPU[0]          : PFP firmware version:         46
GPU[0]          : RLC firmware version:         290653446
GPU[0]          : get_firmware_version_RLC SRLC, Not supported on the given system
GPU[0]          : get_firmware_version_RLC SRLG, Not supported on the given system
GPU[0]          : get_firmware_version_RLC SRLS, Not supported on the given system
GPU[0]          : SDMA firmware version:        17
GPU[0]          : get_firmware_version_SDMA2, Not supported on the given system
GPU[0]          : SMC firmware version:         10.100.06.00
GPU[0]          : get_firmware_version_SOS, Not supported on the given system
GPU[0]          : get_firmware_version_TA RAS, Not supported on the given system
GPU[0]          : get_firmware_version_TA XGMI, Not supported on the given system
GPU[0]          : get_firmware_version_UVD, Not supported on the given system
GPU[0]          : get_firmware_version_VCE, Not supported on the given system
GPU[0]          : VCN firmware version:         0x09118019
==========================================================================================
====================================== Product Info ======================================
GPU[0]          : Card Series:          Strix Halo [Radeon Graphics / Radeon 8050S Graphics / Radeon 8060S Graphics]
GPU[0]          : Card Model:           0x1586
GPU[0]          : Card Vendor:          Advanced Micro Devices, Inc. [AMD/ATI]
GPU[0]          : Card SKU:             STRXLGEN
GPU[0]          : Subsystem ID:         0x000a
GPU[0]          : Device Rev:           0xc1
GPU[0]          : Node ID:              1
GPU[0]          : GUID:                 3750
GPU[0]          : GFX Version:          gfx1151
==========================================================================================
======================================= Pages Info =======================================
GPU[0]          : ras, Not supported on the given system
================================= Show Valid sclk Range ==================================
GPU[0]          : Valid sclk range: 600Mhz - 2900Mhz
==========================================================================================
================================= Show Valid mclk Range ==================================
GPU[0]          : Valid mclk range: 0Mhz - 0Mhz
==========================================================================================
================================ Show Valid voltage Range ================================
ERROR: GPU[0]   : Voltage curve regions unsupported.
==========================================================================================
================================== Voltage Curve Points ==================================
ERROR: GPU[0]   : Voltage curve Points unsupported.
==========================================================================================
==================================== Consumed Energy =====================================
GPU[0]          : % Energy Counter, Unexpected data received
==========================================================================================
=============================== Current Compute Partition ================================
GPU[0]          : Not supported on the given system
==========================================================================================
================================ Current Memory Partition ================================
GPU[0]          : Not supported on the given system
==========================================================================================
================================== End of ROCm SMI Log ===================================

Note that a v7.x rocm-smi is no longer available (it is deprecated), so the 6.16.6 driver version at the top is entirely normal for a 7.2.x release installed with amdgpu-install.
 

`uname -rv`

```
6.17.0-19-generic #19-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar  6 14:02:58 UTC 2026
```

 

`docker run --rm --device=/dev/kfd --device=/dev/dri -e OLLAMA_DEBUG=2 ollama/ollama:0.18.2-rocm`

See the log provided above; it's exactly the same apart from the container name.

Author
Owner

@rick-github commented on GitHub (Mar 20, 2026):

> -- see log provided above, its the exact same other than the name of the container --

Probably. However, I would like the output of a minimal configuration.

Author
Owner

@boomam commented on GitHub (Mar 20, 2026):

> > -- see log provided above, its the exact same other than the name of the container --
>
> Probably. However, I would like the output of a minimal configuration.

It's the same, I did a differential comparison with my editor.

Author
Owner

@OTAKUWeBer commented on GitHub (Mar 27, 2026):

> How did you install ollama?

Via the Linux package installer, paru.

Author
Owner

@rick-github commented on GitHub (Mar 27, 2026):

Set `OLLAMA_DEBUG=2` in the server environment, restart the server, and post the output of:

```
journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"
```
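On a systemd-managed install, one common way to set that variable is a drop-in override (a sketch; the unit name `ollama` matches the logs in this thread, and the path below is the default location `systemctl edit` writes to):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# (created with: sudo systemctl edit ollama)
[Service]
Environment="OLLAMA_DEBUG=2"
```

After saving, `sudo systemctl restart ollama` picks up the variable; the `server config` log line should then no longer show `OLLAMA_DEBUG:INFO`.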
Author
Owner

@OTAKUWeBer commented on GitHub (Mar 27, 2026):

> Set `OLLAMA_DEBUG=2` in the server environment, restart the server, and post the output of:
>
> ```
> journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"
> ```
~
❯ journalctl -u ollama --since "today" --no-pager
Mar 27 23:15:11 archlinux systemd[1]: Started Ollama Service.
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.733+06:00 level=INFO source=routes.go:1727 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.733+06:00 level=INFO source=routes.go:1729 msg="Ollama cloud disabled: false"
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.735+06:00 level=INFO source=images.go:477 msg="total blobs: 5"
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.735+06:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.735+06:00 level=INFO source=routes.go:1782 msg="Listening on 127.0.0.1:11434 (version 0.18.2)"
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.735+06:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.735+06:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 46691"
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.750+06:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.0 GiB" available="24.6 GiB"
Mar 27 23:15:11 archlinux ollama[32269]: time=2026-03-27T23:15:11.750+06:00 level=INFO source=routes.go:1832 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
Mar 27 23:15:28 archlinux systemd[1]: Stopping Ollama Service...
Mar 27 23:15:28 archlinux systemd[1]: ollama.service: Deactivated successfully.
Mar 27 23:15:28 archlinux systemd[1]: Stopped Ollama Service.
-- Boot e4799a755ee740dbbe2c94dd55736137 --
Mar 27 23:16:23 archlinux systemd[1]: Started Ollama Service.
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.128+06:00 level=INFO source=routes.go:1727 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.128+06:00 level=INFO source=routes.go:1729 msg="Ollama cloud disabled: false"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.130+06:00 level=INFO source=images.go:477 msg="total blobs: 5"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.130+06:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.130+06:00 level=INFO source=routes.go:1782 msg="Listening on 127.0.0.1:11434 (version 0.18.2)"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.131+06:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.133+06:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34605"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.159+06:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.0 GiB" available="29.8 GiB"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.159+06:00 level=INFO source=routes.go:1832 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096

~
❯ journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"
Mar 27 23:16:23 archlinux systemd[1]: Started Ollama Service.
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.128+06:00 level=INFO source=routes.go:1727 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.128+06:00 level=INFO source=routes.go:1729 msg="Ollama cloud disabled: false"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.130+06:00 level=INFO source=images.go:477 msg="total blobs: 5"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.130+06:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.130+06:00 level=INFO source=routes.go:1782 msg="Listening on 127.0.0.1:11434 (version 0.18.2)"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.131+06:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.133+06:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34605"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.159+06:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.0 GiB" available="29.8 GiB"
Mar 27 23:16:23 archlinux ollama[612]: time=2026-03-27T23:16:23.159+06:00 level=INFO source=routes.go:1832 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096
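The `inference compute` record above is the line that shows which backend the server picked (here `library=cpu` with no VRAM). When comparing runs, a small sketch for pulling the key/value fields out of such a log line can help; `parse_kv` is a hypothetical helper, and the sample line is abridged from the log above:

```python
import re

def parse_kv(line: str) -> dict:
    # Parse slog-style key=value pairs; values may be double-quoted.
    return {k: v.strip('"') for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', line)}

line = ('time=2026-03-27T23:16:23.159+06:00 level=INFO source=types.go:60 '
        'msg="inference compute" id=cpu library=cpu total="31.0 GiB" available="29.8 GiB"')
info = parse_kv(line)
print(info["library"], info["available"])  # cpu 29.8 GiB
```

A GPU-backed run would instead report something like `library=ROCm` or `library=Vulkan` with a non-zero `total`.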
Author
Owner

@rick-github commented on GitHub (Mar 27, 2026):

Set -->**`OLLAMA_DEBUG=2`**<-- in the server environment, restart the server, and post the output of:

```
journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"
```
Author
Owner

@OTAKUWeBer commented on GitHub (Mar 27, 2026):

> Set -->**`OLLAMA_DEBUG=2`**<-- in the server environment, restart the server, and post the output of:
>
> ```
> journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"
> ```

I did?


❯ journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"
Mar 27 23:58:10 archlinux systemd[1]: Stopping Ollama Service...
Mar 27 23:58:10 archlinux systemd[1]: ollama.service: Deactivated successfully.
Mar 27 23:58:10 archlinux systemd[1]: Stopped Ollama Service.
Mar 27 23:58:10 archlinux systemd[1]: Started Ollama Service.
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=INFO source=routes.go:1727 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG-4 OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/var/lib/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=INFO source=routes.go:1729 msg="Ollama cloud disabled: false"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=INFO source=images.go:477 msg="total blobs: 5"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=INFO source=routes.go:1782 msg="Listening on 127.0.0.1:11434 (version 0.18.2)"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=DEBUG source=sched.go:145 msg="starting llm scheduler"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.232+06:00 level=TRACE source=runner.go:440 msg="starting runner for device discovery" libDirs=[/usr/lib/ollama] extraEnvs=map[]
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.233+06:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 32975"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.233+06:00 level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/bin OLLAMA_MODELS=/var/lib/ollama OLLAMA_DEBUG=2 OLLAMA_VULKAN=true OLLAMA_NEW_ENGINE=true LD_LIBRARY_PATH=/usr/lib/ollama OLLAMA_LIBRARY_PATH=/usr/lib/ollama
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.240+06:00 level=INFO source=runner.go:1411 msg="starting ollama engine"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.240+06:00 level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:32975"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=gguf.go:604 msg=general.architecture type=string
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=gguf.go:604 msg=tokenizer.ggml.model type=string
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.file_type default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.name default=""
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.description default=""
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=unknown name="" description="" num_tensors=0 num_key_values=3
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.243+06:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
Mar 27 23:58:10 archlinux ollama[20791]: operator() double registration of ggml_uncaught_exception
Mar 27 23:58:10 archlinux ollama[20791]: operator() double registration of ggml_uncaught_exception
Mar 27 23:58:10 archlinux ollama[20791]: operator() double registration of ggml_uncaught_exception
Mar 27 23:58:10 archlinux ollama[20791]: operator() double registration of ggml_uncaught_exception
Mar 27 23:58:10 archlinux ollama[20791]: operator() double registration of ggml_uncaught_exception
Mar 27 23:58:10 archlinux ollama[20791]: operator() double registration of ggml_uncaught_exception
Mar 27 23:58:10 archlinux ollama[20791]: operator() double registration of ggml_uncaught_exception
Mar 27 23:58:10 archlinux ollama[20791]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.pooling_type default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.expert_count default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.tokens default="&{size:0 values:[]}"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.scores default="&{size:0 values:[]}"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.token_type default="&{size:0 values:[]}"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.merges default="&{size:0 values:[]}"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_bos_token default=true
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.bos_token_id default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.add_eos_token default=false
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_id default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.eos_token_ids default="&{size:0 values:[]}"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=tokenizer.ggml.pre default=""
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.block_count default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.embedding_length default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.head_count_kv default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.key_length default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.dimension_count default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.attention.layer_norm_rms_epsilon default=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.freq_base default=100000
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=llama.rope.scaling.factor default=1
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=runner.go:1386 msg="dummy model load took" duration=4.807283ms
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=runner.go:1391 msg="gathering device infos took" duration=340ns
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=TRACE source=runner.go:467 msg="runner enumerated devices" OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] devices=[]
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=15.954791ms OLLAMA_LIBRARY_PATH=[/usr/lib/ollama] extra_envs=map[]
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=TRACE source=runner.go:174 msg="supported GPU library combinations before filtering" supported=map[]
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=16.1634ms
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.0 GiB" available="24.6 GiB"
Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=INFO source=routes.go:1832 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096

~
<!-- gh-comment-id:4144310401 -->
@OTAKUWeBer commented on GitHub (Mar 27, 2026), replying to the request to set `OLLAMA_DEBUG=2` in the server environment and post the output of `journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"`:

I did; the full `journalctl` output is the log shown above.

@rick-github commented on GitHub (Mar 27, 2026):

Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)

No GPU accelerator backends found. What's the output of:

ls /usr/lib/ollama/
<!-- gh-comment-id:4144346941 -->

@OTAKUWeBer commented on GitHub (Mar 27, 2026):

Mar 27 23:58:10 archlinux ollama[20791]: time=2026-03-27T23:58:10.248+06:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)

No GPU accelerator backends found. What's the output of:

ls /usr/lib/ollama/

~
❯ ls /usr/lib/ollama/
libggml-base.so libggml-base.so.0.0.0 libggml-cpu-haswell.so libggml-cpu-sandybridge.so libggml-cpu-sse42.so libggml-hip.so
libggml-base.so.0 libggml-cpu-alderlake.so libggml-cpu-icelake.so libggml-cpu-skylakex.so libggml-cpu-x64.so rocblas

<!-- gh-comment-id:4144405873 -->

@rick-github commented on GitHub (Mar 27, 2026):

sudo pacman -S ollama-vulkan
<!-- gh-comment-id:4144431093 -->

@OTAKUWeBer commented on GitHub (Mar 27, 2026):

sudo pacman -S ollama-vulkan

It seems to be working now. Thanks, mate. Do you also know how to use Claude with Ollama? I set everything up, but Claude takes forever for some reason.

<!-- gh-comment-id:4145008990 -->

@rick-github commented on GitHub (Mar 27, 2026):

Have you [increased](https://docs.ollama.com/integrations/claude-code#manual-setup) the size of the context window?

<!-- gh-comment-id:4145025034 -->

@OTAKUWeBer commented on GitHub (Mar 29, 2026):

Have you increased the size of the context window?

~
❯ ollama ps
NAME                 ID              SIZE     PROCESSOR          CONTEXT    UNTIL              
qwen2.5-coder:14b    9ec8897f747e    18 GB    16%/84% CPU/GPU    32768      4 minutes from now    

~
❯ 

I can't change it to 64k.

<!-- gh-comment-id:4149748757 -->

@rick-github commented on GitHub (Mar 29, 2026):

Claude has an instruction/tool message of about 22k tokens so it doesn't take much to fill the buffer and cause truncation or shifting. Context shifting can cause a model to lose coherence and loop, resulting in long delays in output. Offloading 16% of the model to system RAM will also cause slowness. Also note that qwen2.5-coder is not a great tool user. The options are to use a better/smaller model, upgrade the GPU, or use a cloud model.

<!-- gh-comment-id:4150006199 -->
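For a systemd install like the one in this thread, one way to raise the server-wide default context is a drop-in setting `OLLAMA_CONTEXT_LENGTH` (the variable appears in the server-config log above). A minimal sketch: the drop-in is staged under `$PWD` so it runs without root; on a real system `DEST` would be `/etc/systemd/system/ollama.service.d`.

```shell
# Stage a systemd drop-in raising Ollama's default context window.
# DEST is ./ollama.service.d here so the sketch needs no root; on a real
# install it would be /etc/systemd/system/ollama.service.d.
DEST=${DEST:-./ollama.service.d}
mkdir -p "$DEST"
cat > "$DEST/context.conf" <<'EOF'
[Service]
Environment="OLLAMA_CONTEXT_LENGTH=65536"
EOF
echo "wrote $DEST/context.conf"
# then: sudo systemctl daemon-reload && sudo systemctl restart ollama
```

After restarting, `ollama ps` should show the larger context on the next model load.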

@boomam commented on GitHub (Mar 29, 2026):

@OTAKUWeBer probably worth reopening, considering I'm having a very similar issue above that's still being diagnosed.

<!-- gh-comment-id:4150169462 -->

@OTAKUWeBer commented on GitHub (Mar 29, 2026):

Alright

<!-- gh-comment-id:4150415789 -->

@OTAKUWeBer commented on GitHub (Mar 29, 2026):

Thanks for the help, @rick-github.

<!-- gh-comment-id:4150416860 -->

@Jasdfgh commented on GitHub (Mar 30, 2026):

fyi PR #14979 (from slojosic-amd) fixes the missing hipBLASLt install in ollama's build — it hasn't been mentioned in this thread yet. once merged, it may help with RDNA4 ROCm init issues, though the actual fix here might depend on your specific setup.

for Arch Linux users: if ROCm isn't picking up your GPU, installing the ollama-vulkan package (sudo pacman -S ollama-vulkan) has been the reliable path — just setting OLLAMA_VULKAN=1 doesn't work unless the Vulkan backend library is actually installed.

<!-- gh-comment-id:4153096063 -->
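The point about the backend library can be spot-checked directly, a hypothetical snippet assuming the Arch package layout (`/usr/lib/ollama`) and the `libggml-<backend>.so` naming seen in the `ls` output earlier in this thread; override `OLLAMA_LIB_DIR` for other installs.

```shell
# Check whether a Vulkan ggml backend library is actually installed.
# Directory and library-name pattern are assumptions based on the
# /usr/lib/ollama listing earlier in this thread.
dir=${OLLAMA_LIB_DIR:-/usr/lib/ollama}
if ls "$dir"/libggml-vulkan*.so >/dev/null 2>&1; then
  status="vulkan backend present in $dir"
else
  status="no vulkan backend in $dir (OLLAMA_VULKAN=1 alone will not help)"
fi
echo "$status"
```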

@boomam commented on GitHub (Mar 31, 2026):

fyi PR #14979 (from slojosic-amd) fixes the missing hipBLASLt install in ollama's build — it hasn't been mentioned in this thread yet. once merged, it may help with RDNA4 ROCm init issues, though the actual fix here might depend on your specific setup.

for Arch Linux users: if ROCm isn't picking up your GPU, installing the ollama-vulkan package (sudo pacman -S ollama-vulkan) has been the reliable path — just setting OLLAMA_VULKAN=1 doesn't work unless the Vulkan backend library is actually installed.

Hopefully it'll help with the GPU detection here, too.

<!-- gh-comment-id:4162191842 -->

@boomam commented on GitHub (Mar 31, 2026):

Just tested 0.19 and the problem persists; I'll keep an eye out for the PR mentioned above to merge, then retest.

<!-- gh-comment-id:4163257692 -->

@DerekCochran commented on GitHub (Apr 8, 2026):

I tried the fix in the PR and it did not work for me. Working with Claude Haiku, I arrived at the solution below for building from the main branch.

# AMD RDNA4 (gfx1200) GPU Build Fix

## Problem

When building Ollama on systems with AMD RDNA4 GPUs (gfx1200), the build process was failing to enable HIP GPU acceleration, resulting in `total_vram="0 B"` being reported during GPU detection.

### Root Cause

CMake's `find_package(hip)` was not properly auto-detecting available AMDGPU targets on certain ROCm 7.0.0 configurations. As a result:

1. The `AMDGPU_TARGETS` variable remained empty
2. The HIP backend (`ggml-hip`) was not compiled
3. At runtime, only CPU acceleration was available
4. GPU detection showed 0 VRAM despite ROCm driver being properly installed

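This failure mode can be spot-checked from the build tree; a small sketch, where `BUILD_DIR` is an assumption about the default in-tree CMake layout described later in this comment.

```shell
# If AMDGPU_TARGETS ended up empty, libggml-hip.so is never produced.
# BUILD_DIR assumes the default in-tree CMake layout; adjust as needed.
BUILD_DIR=${BUILD_DIR:-build/lib/ollama}
if [ -e "$BUILD_DIR/libggml-hip.so" ]; then
  msg="HIP backend built: $BUILD_DIR/libggml-hip.so"
else
  msg="HIP backend missing from $BUILD_DIR (CPU-only build)"
fi
echo "$msg"
```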
## Solution

When building Ollama on Linux with ROCm, explicitly specify the AMDGPU target via CMake:

    export LD_LIBRARY_PATH=/opt/rocm-7.0.0/lib:/home/djc/lib/sqlite
    export CMAKE_PREFIX_PATH=/opt/rocm-7.0.0

    rm -rf build
    cmake -B build -DAMDGPU_TARGETS=gfx1200
    cmake --build build -j10

### Verification

After building, verify GPU detection works:

    timeout 10 go run . serve 2>&1 | grep "total_vram"

Expected output:

    time=... msg="vram-based default context" total_vram="15.9 GiB" default_num_ctx=4096

## Technical Details

### What Gets Built

With `-DAMDGPU_TARGETS=gfx1200` specified:

    build/lib/ollama/libggml-hip.so              # GPU acceleration backend
    build/lib/ollama/libggml-base.so.0.0.0       # Base ggml library
    build/lib/ollama/libggml-cpu-*.so            # CPU acceleration variants

Without this flag, only CPU variants are built.

### GPU Library Discovery

Ollama's runtime discovery mechanism (in `discover/runner.go`) looks for GPU libraries in development builds at:

    build/lib/ollama/

When `-DAMDGPU_TARGETS=gfx1200` is set, CMake compiles the HIP backend and places it in this location, where the runner can find it at startup.
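Based on the subprocess environment visible in the server logs earlier in this thread (`OLLAMA_LIBRARY_PATH`, `LD_LIBRARY_PATH`), a development build can be pointed at freshly built backends explicitly. This is an assumption drawn from those logs, not a documented interface.

```shell
# Point the runner at freshly built backend libraries, mirroring the
# variables the server passes to its runner subprocess in the logs above.
LIB_DIR=${LIB_DIR:-$PWD/build/lib/ollama}
export OLLAMA_LIBRARY_PATH="$LIB_DIR"
export LD_LIBRARY_PATH="$LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "runner library path: $OLLAMA_LIBRARY_PATH"
# then: go run . serve
```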

### Environment Variables

Setting these environment variables during build ensures CMake can find ROCm dependencies:

| Variable | Purpose |
|----------|---------|
| `CMAKE_PREFIX_PATH=/opt/rocm-7.0.0` | Tells CMake where to find HIP configuration and libraries |
| `LD_LIBRARY_PATH` | Runtime library loader path for building with ROCm libraries |

## Build System Notes

The fix is applied explicitly at build time rather than by modifying `CMakeLists.txt` because:

1. **Portability**: Not all systems with ROCm have GPUs requiring gfx1200
2. **Flexibility**: Users can build for multiple targets if needed: `-DAMDGPU_TARGETS="gfx1200;gfx1036"`
3. **Upstream compatibility**: No changes to the main `CMakeLists.txt` are required

For multi-GPU systems, you can specify multiple targets:

```bash
cmake -B build -DAMDGPU_TARGETS="gfx1200;gfx1036;gfx1100"
```
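CMake treats that value as a semicolon-separated list. A quick sanity check that each entry has the expected `gfx<N>` shape; plain POSIX shell, illustrative only:

```shell
# Split a CMake-style semicolon list and verify each entry looks like a gfx target.
targets="gfx1200;gfx1036;gfx1100"
count=0
for t in $(printf '%s\n' "$targets" | tr ';' ' '); do
  case "$t" in
    gfx[0-9]*) echo "target ok: $t"; count=$((count + 1)) ;;
    *)         echo "unrecognized target: $t" ;;
  esac
done
echo "$count valid targets"
```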

## Supported AMDGPU Targets

`CMakeLists.txt` line 142 filters to supported targets:

```
gfx940, gfx941, gfx942       # CDNA3 (Instinct MI300 series)
gfx1010, gfx1012             # RDNA1 (RX 5000 series)
gfx1030                      # RDNA2 (RX 6000 series)
gfx1100, gfx1101, gfx1102    # RDNA3 (RX 7000 series)
gfx1200, gfx1201             # RDNA4 (RX 9000 series)
```

## References

- ROCm 7.0.0: `/opt/rocm-7.0.0/lib/cmake/hip/hip-config.cmake`
- Ollama `CMakeLists.txt`: lines 138-175 (HIP backend configuration)
- GPU Discovery: `discover/runner.go` (runtime GPU detection)
<!-- gh-comment-id:4203477257 -->

@LukeLamb commented on GitHub (Apr 25, 2026):

I hit this exact symptom on an AMD Radeon AI PRO R9700 (gfx1201) on Ubuntu 24.04 with the official `ollama-linux-amd64` binary (`ollama 0.20.6`). Confirmed CPU-only fallback even with `HIP_VISIBLE_DEVICES=0` set: `/usr/local/lib/ollama/` has `cuda_v12/`, `cuda_v13/`, and CPU variants, but no `rocm/` directory at all. The official Linux build does not include a ROCm backend in 0.20.6.

The source DOES support gfx1201 (`CMakeLists.txt` L141 and the `ROCm 7` preset in `CMakePresets.json` both list it). Building from source against ROCm 7.2.1 works end-to-end:

```bash
git clone --depth 1 https://github.com/ollama/ollama.git
cd ollama
PATH=/opt/rocm/lib/llvm/bin:$PATH cmake --preset "ROCm 7" \
    -DAMDGPU_TARGETS=gfx1201 \
    -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
    -DCMAKE_PREFIX_PATH=/opt/rocm
cmake --build build --config Release -j$(nproc)
go build .
cmake --install build --prefix dist
```

After dropping `dist/bin/ollama` into `/usr/local/bin/` and `dist/lib/ollama/` into `/usr/local/lib/`, the journal shows:

```
library=ROCm compute=gfx1201 name=ROCm0 description="AMD Radeon Graphics"
pci_id=0000:0d:00.0 type=discrete total="31.9 GiB" available="31.8 GiB"
```

Throughput on `llama3.1:8b-q4_K_M`: 8.77 tok/s (CPU baseline) → 88.59 tok/s (Vulkan via Mesa RADV) → 92.99 tok/s (ROCm-native), with full 33/33 layer offload in all GPU cases.
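For scale, those figures work out to roughly a tenfold speedup over CPU; computed here directly from the numbers quoted above:

```shell
# Speedup factors implied by the throughput figures above (tok/s).
cpu=8.77; vulkan=88.59; rocm=92.99
awk -v c="$cpu" -v v="$vulkan" -v r="$rocm" \
  'BEGIN { printf "Vulkan: %.1fx over CPU, ROCm: %.1fx over CPU\n", v/c, r/c }'
```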

This should be tracked back to #10676 — once ROCm 6.4+ is in the official Linux build pipeline, the install script will give RDNA4 users this experience by default.

<!-- gh-comment-id:4319096745 -->
Reference: github-starred/ollama#71666