[GH-ISSUE #15338] Gemma4 + ROCm gfx1201 + high context size results in "llama runner process has terminated: %!w(<nil>)" #9812

Closed
opened 2026-04-12 22:41:05 -05:00 by GiteaMirror · 9 comments

Originally created by @FibreFoX on GitHub (Apr 5, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15338

What is the issue?

When running Ollama with Docker via the image ollama/ollama:0.20.2-rocm and using the latest gemma4:31b model, the ollama process crashes and is not able to load the model.

I am running an AMD Radeon AI PRO R9700 passed through to the Docker container. The setup works with qwen3.5, so it is not a general problem, only in combination with this model. Maybe it is because of the missing TensileLibrary_lazy_gfx1201.dat file?
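For completeness, the container is started roughly along these lines (a minimal sketch, not the exact command; the device flags follow the standard Ollama ROCm Docker instructions, and the context length matches the server config in the log below):

docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  -e OLLAMA_CONTEXT_LENGTH=192000 \
  --name ollama ollama/ollama:0.20.2-rocm
docker exec -it ollama ollama run gemma4:31b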

Relevant log output

time=2026-04-05T10:52:19.778Z level=INFO source=routes.go:1744 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:192000 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-04-05T10:52:19.779Z level=INFO source=routes.go:1746 msg="Ollama cloud disabled: false"
time=2026-04-05T10:52:19.787Z level=INFO source=images.go:499 msg="total blobs: 19"
time=2026-04-05T10:52:19.788Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"
time=2026-04-05T10:52:19.789Z level=INFO source=routes.go:1802 msg="Listening on [::]:11434 (version 0.20.2)"
time=2026-04-05T10:52:19.791Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-05T10:52:19.794Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40593"
time=2026-04-05T10:52:20.448Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38473"
time=2026-04-05T10:52:20.664Z level=INFO source=types.go:42 msg="inference compute" id=GPU-c5891436b7f45b65 filter_id="" library=ROCm compute=gfx1201 name=ROCm0 description="AMD Radeon Graphics" libdirs=ollama,rocm driver=70226.1 pci_id=0000:03:00.0 type=discrete total="31.9 GiB" available="31.8 GiB"
time=2026-04-05T10:52:20.664Z level=INFO source=routes.go:1852 msg="vram-based default context" total_vram="31.9 GiB" default_num_ctx=32768
[GIN] 2026/04/05 - 10:52:42 | 200 |    9.938658ms |      172.18.0.3 | GET      "/api/tags"
[GIN] 2026/04/05 - 10:52:42 | 200 |     198.695µs |      172.18.0.3 | GET      "/api/ps"
time=2026-04-05T10:52:45.247Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39279"
time=2026-04-05T10:52:45.726Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-04-05T10:52:45.867Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-05T10:52:45.867Z level=INFO source=server.go:247 msg="enabling flash attention"
time=2026-04-05T10:52:45.868Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-280af6832eca23cb322c4dcc65edfea98a21b8f8ab07dc7553bd6f7e6e7a3313 --port 35383"
time=2026-04-05T10:52:45.868Z level=INFO source=sched.go:484 msg="system memory" total="39.0 GiB" free="38.8 GiB" free_swap="0 B"
time=2026-04-05T10:52:45.868Z level=INFO source=sched.go:491 msg="gpu memory" id=GPU-c5891436b7f45b65 library=ROCm available="31.4 GiB" free="31.8 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-05T10:52:45.868Z level=INFO source=server.go:759 msg="loading model" "model layers"=61 requested=-1
time=2026-04-05T10:52:45.880Z level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-05T10:52:45.880Z level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:35383"
time=2026-04-05T10:52:45.890Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:192000 KvCacheType: NumThreads:8 GPULayers:61[ID:GPU-c5891436b7f45b65 Layers:61(0..60)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-05T10:52:45.947Z level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=1189 num_key_values=49
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx1201 (0x1201), VMM: no, Wave Size: 32, ID: GPU-c5891436b7f45b65
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
time=2026-04-05T10:52:46.011Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-04-05T10:52:46.019Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-05T10:52:46.043Z level=INFO source=model.go:138 msg="vision: decode" elapsed=1.866174ms bounds=(0,0)-(2048,2048)
time=2026-04-05T10:52:46.142Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=98.55635ms size="[768 768]"
time=2026-04-05T10:52:46.142Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-05T10:52:46.142Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-05T10:52:46.143Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=101.626828ms shape="[5376 256]"

rocblaslt error: Cannot read "TensileLibrary_lazy_gfx1201.dat": No such file or directory

rocblaslt error: Could not load "TensileLibrary_lazy_gfx1201.dat"
time=2026-04-05T10:52:47.324Z level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:192000 KvCacheType: NumThreads:8 GPULayers:53[ID:GPU-c5891436b7f45b65 Layers:53(7..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-05T10:52:47.383Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-05T10:52:47.395Z level=INFO source=model.go:138 msg="vision: decode" elapsed=645.458µs bounds=(0,0)-(2048,2048)
time=2026-04-05T10:52:47.489Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=93.607556ms size="[768 768]"
time=2026-04-05T10:52:47.489Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-05T10:52:47.489Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-05T10:52:47.490Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=95.147343ms shape="[5376 256]"
time=2026-04-05T10:52:47.913Z level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:192000 KvCacheType: NumThreads:8 GPULayers:53[ID:GPU-c5891436b7f45b65 Layers:53(7..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-05T10:52:47.977Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-05T10:52:47.990Z level=INFO source=model.go:138 msg="vision: decode" elapsed=591.958µs bounds=(0,0)-(2048,2048)
time=2026-04-05T10:52:48.087Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=97.076315ms size="[768 768]"
time=2026-04-05T10:52:48.090Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-05T10:52:48.090Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-05T10:52:48.090Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=101.448ms shape="[5376 256]"
time=2026-04-05T10:52:50.140Z level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:192000 KvCacheType: NumThreads:8 GPULayers:53[ID:GPU-c5891436b7f45b65 Layers:53(7..59)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-05T10:52:50.141Z level=INFO source=device.go:240 msg="model weights" device=ROCm0 size="14.3 GiB"
time=2026-04-05T10:52:50.141Z level=INFO source=device.go:245 msg="model weights" device=CPU size="5.3 GiB"
time=2026-04-05T10:52:50.141Z level=INFO source=device.go:251 msg="kv cache" device=ROCm0 size="16.3 GiB"
time=2026-04-05T10:52:50.141Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.9 GiB"
time=2026-04-05T10:52:50.141Z level=INFO source=device.go:262 msg="compute graph" device=ROCm0 size="719.4 MiB"
time=2026-04-05T10:52:50.141Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="1.7 GiB"
time=2026-04-05T10:52:50.141Z level=INFO source=device.go:272 msg="total memory" size="40.1 GiB"
time=2026-04-05T10:52:50.141Z level=INFO source=ggml.go:482 msg="offloading 53 repeating layers to GPU"
time=2026-04-05T10:52:50.141Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-04-05T10:52:50.141Z level=INFO source=ggml.go:494 msg="offloaded 53/61 layers to GPU"
time=2026-04-05T10:52:50.141Z level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-05T10:52:50.141Z level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-05T10:52:50.142Z level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-05T10:54:15.958Z level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server not responding"
time=2026-04-05T10:54:16.411Z level=ERROR source=server.go:304 msg="llama runner terminated" error="signal: killed"
time=2026-04-05T10:54:16.615Z level=ERROR source=sched.go:567 msg="error loading llama server" error="llama runner process has terminated: %!w(<nil>)"
[GIN] 2026/04/05 - 10:54:16 | 500 |         1m31s |      172.18.0.3 | POST     "/api/chat"
time=2026-04-05T10:54:19.621Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 32905"
time=2026-04-05T10:54:20.350Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43657"
time=2026-04-05T10:54:20.600Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35571"
time=2026-04-05T10:54:20.850Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45591"
time=2026-04-05T10:54:21.100Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39545"
time=2026-04-05T10:54:21.350Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42569"
time=2026-04-05T10:54:21.600Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40421"
time=2026-04-05T10:54:21.850Z level=INFO source=server.go:432 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42455"

OS

Docker

GPU

AMD

CPU

AMD

Ollama version

0.20.2

GiteaMirror added the bug label 2026-04-12 22:41:05 -05:00

@FibreFoX commented on GitHub (Apr 5, 2026):

I have checked the base image rocm/dev-almalinux-8:7.2-complete; the file exists at two locations:

docker > find / -name "TensileLibrary_lazy_gfx1201.dat"
/opt/rocm-7.2.0/lib/rocblas/library/TensileLibrary_lazy_gfx1201.dat
/opt/rocm-7.2.0/lib/hipblaslt/library/TensileLibrary_lazy_gfx1201.dat

Inside the image ollama/ollama:0.20.2-rocm the file exists too:

docker > find / -name "TensileLibrary_lazy_gfx1201.dat"
/usr/lib/ollama/rocm/rocblas/library/TensileLibrary_lazy_gfx1201.dat

So why is it not detected?


@mizxcv commented on GitHub (Apr 7, 2026):

The error says rocblaslt — that's the hipBLASLt library, not rocBLAS. You found the TensileLibrary_lazy_gfx1201.dat file in the ollama image under rocblas/library/, but hipBLASLt looks for it in its own library path. In the base ROCm image you checked, the file exists in both rocblas/library/ and hipblaslt/library/ — the ollama image seems to only include the rocblas copy.
As a quick test inside the container, you could try copying it:

mkdir -p /usr/lib/ollama/rocm/hipblaslt/library/
cp /usr/lib/ollama/rocm/rocblas/library/TensileLibrary_lazy_gfx1201.dat /usr/lib/ollama/rocm/hipblaslt/library/

This might also explain why Qwen3.5 works but Gemma4 doesn't — could be hitting different BLAS code paths.
Also worth checking separately: your context is set to 192K, and the logs show ~40 GiB total memory needed vs 31.9 GiB GPU VRAM. The runner getting signal: killed after 90 seconds could be OOM on top of the library issue. Try OLLAMA_CONTEXT_LENGTH=32768 to isolate.
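
If the copy doesn't change anything, restart the container first so a fresh runner process picks the file up. You can also watch which Tensile library paths the runner actually tries to open, e.g. with strace (a sketch; the pgrep pattern is an assumption based on the "starting runner" lines in your log):

strace -f -e trace=openat -p "$(pgrep -f 'ollama runner' | head -n1)" 2>&1 | grep -i tensile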


@FibreFoX commented on GitHub (Apr 7, 2026):

@mizxcv Ok, that's interesting. I changed the context size and still get the missing-library message, BUT it seems to work now:

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx1201 (0x1201), VMM: no, Wave Size: 32, ID: GPU-c5891436b7f45b65
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
time=2026-04-07T06:53:38.093Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-04-07T06:53:38.100Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-07T06:53:38.120Z level=INFO source=model.go:138 msg="vision: decode" elapsed=1.840955ms bounds=(0,0)-(2048,2048)
time=2026-04-07T06:53:38.217Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=96.835559ms size="[768 768]"
time=2026-04-07T06:53:38.217Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-07T06:53:38.217Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-07T06:53:38.218Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=99.857059ms shape="[5376 256]"

rocblaslt error: Cannot read "TensileLibrary_lazy_gfx1201.dat": No such file or directory

rocblaslt error: Could not load "TensileLibrary_lazy_gfx1201.dat"
time=2026-04-07T06:53:39.020Z level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:8 GPULayers:61[ID:GPU-c5891436b7f45b65 Layers:61(0..60)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-07T06:53:39.095Z level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-07T06:53:39.107Z level=INFO source=model.go:138 msg="vision: decode" elapsed=706.476µs bounds=(0,0)-(2048,2048)
time=2026-04-07T06:53:39.190Z level=INFO source=model.go:145 msg="vision: preprocess" elapsed=82.888182ms size="[768 768]"
time=2026-04-07T06:53:39.193Z level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-07T06:53:39.193Z level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-07T06:53:39.194Z level=INFO source=model.go:156 msg="vision: encoded" elapsed=87.509986ms shape="[5376 256]"
time=2026-04-07T06:53:39.459Z level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:8 GPULayers:61[ID:GPU-c5891436b7f45b65 Layers:61(0..60)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-07T06:53:39.460Z level=INFO source=device.go:240 msg="model weights" device=ROCm0 size="18.4 GiB"
time=2026-04-07T06:53:39.460Z level=INFO source=device.go:245 msg="model weights" device=CPU size="1.2 GiB"
time=2026-04-07T06:53:39.460Z level=INFO source=device.go:251 msg="kv cache" device=ROCm0 size="6.0 GiB"
time=2026-04-07T06:53:39.460Z level=INFO source=ggml.go:482 msg="offloading 60 repeating layers to GPU"
time=2026-04-07T06:53:39.460Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-04-07T06:53:39.460Z level=INFO source=device.go:262 msg="compute graph" device=ROCm0 size="295.2 MiB"
time=2026-04-07T06:53:39.460Z level=INFO source=ggml.go:494 msg="offloaded 61/61 layers to GPU"
time=2026-04-07T06:53:39.460Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="352.0 MiB"
time=2026-04-07T06:53:39.460Z level=INFO source=device.go:272 msg="total memory" size="26.2 GiB"
time=2026-04-07T06:53:39.460Z level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-07T06:53:39.460Z level=INFO source=server.go:1352 msg="waiting for llama runner to start responding"
time=2026-04-07T06:53:39.460Z level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-07T06:54:07.549Z level=INFO source=server.go:1390 msg="llama runner started in 29.60 seconds"

For me the issue is still valid, at least because of the missing lib. When I have a bit more time later, I will look into that missing-library issue and check whether fixing it allows the context window to be bigger again, even when split between VRAM and RAM.


@mizxcv commented on GitHub (Apr 7, 2026):

Good to see it's running with the lower context. The rocblaslt error still shows up but is no longer blocking, which means it's falling back to a different code path for those operations, probably rocBLAS instead of hipBLASLt. It might be worth comparing generation speed before and after copying the library file, since hipBLASLt is generally the faster path for those GEMM ops.
For the large context though, that's likely a separate issue. At 192K the total memory estimate was ~40 GiB vs 31.9 GiB VRAM, so even with the library fix, that context size would still need CPU offload to fit. The title change makes sense.
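
For the numbers, summing the per-device sizes from the commit log above (a quick awk sketch, values copied straight from the log):

awk 'BEGIN {
  gpu = 14.3 + 16.3 + 719.4/1024   # model weights + kv cache + compute graph on ROCm0, GiB
  cpu = 5.3 + 1.9 + 1.7            # model weights + kv cache + compute graph on CPU, GiB
  printf "ROCm0: %.1f GiB of 31.9 GiB VRAM\nCPU: %.1f GiB\ntotal: %.1f GiB\n", gpu, cpu, gpu + cpu
}'

That reproduces the logged 40.1 GiB total up to rounding: the GPU share alone nearly fills the card, and the remaining ~8.9 GiB has to fit into system RAM next to everything else.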


@FibreFoX commented on GitHub (Apr 7, 2026):

After some digging, I realized that my hosting system had too little RAM assigned (after increasing it, the setup worked with my old context window size), so the error message is unrelated. I will file a new issue for the missing library once I have done some more investigation.

Reducing the context window resolved my issue.
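
For anyone else hitting this: the scheduler logs "system memory" total/free at load time, so comparing that against the total memory estimate is a quick check. The same numbers are visible with free (the container name below is an example):

free -h                      # on the host / VM
docker exec ollama free -h   # inside the container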


@mechovation commented on GitHub (Apr 8, 2026):

I used @mizxcv's suggestion and it seemed to fix the problem in my Docker setup.

This issue should not be closed with "just use a smaller context".


@FibreFoX commented on GitHub (Apr 8, 2026):

@mechovation What did you do exactly? Copy over these other files into the ollama container? Did it work for you even with a bigger context, which has to be put in system RAM?

I did close this issue because "smaller context" seemed to work. I get these Cannot read "TensileLibrary_lazy_gfx1201.dat": No such file or directory warnings for other models (like qwen3.5) too.


@mechovation commented on GitHub (Apr 8, 2026):

I ran the suggested commands inside the Docker container. I'm not 100% sure it's fully working, but I think it eliminated one problem. I'm also trying to reserve 32 GB of system memory, allowed to float up to 48 GB.


@mechovation commented on GitHub (Apr 9, 2026):

> @mechovation What did you do exactly? Copy over these other files into the ollama container? Did it work for you even with a bigger context, which has to be put in system RAM?
>
> I did close this issue because "smaller context" seemed to work. I get these Cannot read "TensileLibrary_lazy_gfx1201.dat": No such file or directory warnings for other models (like qwen3.5) too.

A follow-up: @mizxcv's recommendation is not a path forward. It goes from "file not found" to "cannot load". Same file name, wrong format/library.

Instead, I am currently mounting the hipBLASLt library from my host (Ubuntu 24.04 LTS, standard kernel, ROCm stack). Additionally, I backed my context down to 64K for the moment and set HSA_ENABLE_SDMA: 0 and OLLAMA_FLASH_ATTENTION: 0 (the last one seems to be the default anyway).

These changes let my agent process an article summarization job that was previously failing. Is it 100% fixed and super stable? Too early to say. Is it BETTER? Without question. YMMV. Both my host and container run the 6.8.0-107-generic kernel; as soon as host and container start diverging in their underlying foundations, the mount-the-host-library trick is probably not going to hold up. But it keeps things running until the hipBLASLt library is properly included in the Docker image.

I will probably try to bump back up to 128K context at some point; it does fit in my 32GB VRAM.

docker-compose snippet:

  ollama-rocm:
    image: ollama/ollama:rocm
    container_name: ollama-rocm
    devices:
      - /dev/kfd
      - /dev/dri
    group_add:
      - "44"
      - "993"
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
      - /opt/rocm/lib/hipblaslt/library:/usr/lib/ollama/rocm/hipblaslt/library:ro
    environment:
      OLLAMA_CONTEXT_LENGTH: 65536
      OLLAMA_FLASH_ATTENTION: 0
      HSA_ENABLE_SDMA: 0
    restart: unless-stopped
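
Before hammering it, a quick sanity check that the mount is in place and that host and container ROCm are close enough (paths from the snippet above; /opt/rocm/.info/version is where a ROCm install records its release):

cat /opt/rocm/.info/version
docker compose up -d ollama-rocm
docker exec ollama-rocm ls /usr/lib/ollama/rocm/hipblaslt/library/ | grep gfx1201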

Proof of hammering:

time=2026-04-09T14:10:37.292Z level=INFO source=server.go:1390 msg="llama runner started in 6.10 seconds"
[GIN] 2026/04/09 - 14:11:30 | 500 |          1m0s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:12:22 | 200 | 52.029616779s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:12:24 | 200 |  1.915777885s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:12:24 | 200 |     984.172µs |    192.168.1.98 | GET      "/v1/models"
[GIN] 2026/04/09 - 14:12:30 | 200 |  5.679858673s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:13:05 | 404 |       5.962µs |    192.168.1.98 | GET      "/api/v1/models"
[GIN] 2026/04/09 - 14:13:05 | 200 |    1.425974ms |    192.168.1.98 | GET      "/api/tags"
[GIN] 2026/04/09 - 14:13:05 | 200 |  235.204264ms |    192.168.1.98 | POST     "/api/show"
[GIN] 2026/04/09 - 14:14:04 | 200 | 58.941876772s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:14:17 | 200 |  9.003817322s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:14:20 | 200 |  3.258327055s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:14:37 | 200 | 13.321726882s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:14:49 | 200 |  8.703520396s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:15:05 | 200 | 10.167382972s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:15:19 | 200 | 11.933285107s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:15:35 | 200 | 13.794081395s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:15:47 | 200 |  9.839271239s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:16:42 | 200 | 53.630069807s |    192.168.1.98 | POST     "/v1/chat/completions"
[GIN] 2026/04/09 - 14:17:14 | 200 | 28.431382997s |    192.168.1.98 | POST     "/v1/chat/completions"

Memory details at 64K context (sorry, edit: this is gemma4:26b Q4_K_M):

time=2026-04-09T14:10:33.519Z level=INFO source=ggml.go:482 msg="offloading 30 repeating layers to GPU"
time=2026-04-09T14:10:33.519Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-04-09T14:10:33.519Z level=INFO source=ggml.go:494 msg="offloaded 31/31 layers to GPU"
time=2026-04-09T14:10:33.519Z level=INFO source=device.go:240 msg="model weights" device=ROCm0 size="16.6 GiB"
time=2026-04-09T14:10:33.519Z level=INFO source=device.go:245 msg="model weights" device=CPU size="667.5 MiB"
time=2026-04-09T14:10:33.519Z level=INFO source=device.go:251 msg="kv cache" device=ROCm0 size="2.1 GiB"
time=2026-04-09T14:10:33.519Z level=INFO source=device.go:262 msg="compute graph" device=ROCm0 size="3.3 GiB"
time=2026-04-09T14:10:33.519Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.5 MiB"
time=2026-04-09T14:10:33.519Z level=INFO source=device.go:272 msg="total memory" size="22.8 GiB"
time=2026-04-09T14:10:33.519Z level=INFO source=sched.go:561 msg="loaded runners" count=1


Reference: github-starred/ollama#9812