[GH-ISSUE #12752] gpt-oss:120b fails to load on Windows (Ollama 0.12.6) with AMD AI Max+ 395 (96GB VRAM) — works fine on Ubuntu #70515

Open
opened 2026-05-04 21:49:25 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @myoldcat on GitHub (Oct 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12752

What is the issue?

Description:
We are running two identical AMD AI Max+ 395 machines, both with 96GB of VRAM. One is running Ubuntu, and the other Windows. Both have Ollama version 0.12.6 installed.

On the Ubuntu machine, the model gpt-oss:120b runs successfully.
However, on the Windows machine, it fails to start and reports “out of memory” errors.

Expected behavior:
gpt-oss:120b should load and run on both systems under the same hardware and Ollama version.

Actual behavior:
On Windows:

The first attempt tries to load the entire model into GPU memory → VRAM out-of-memory error (unexpected, since the same load succeeds on Ubuntu).

The second attempt offloads only part of the model to the GPU → system RAM out-of-memory error (expected, since the remainder does not fit in system RAM).

Environment:

Hardware: AMD AI Max+ 395 (96GB VRAM)

OS 1: Ubuntu (works fine)

OS 2: Windows (fails)

Ollama version: 0.12.6

Model: gpt-oss:120b

Additional notes:
It seems that Ollama’s GPU memory allocation or model-sharding logic behaves differently on Windows compared to Ubuntu. The Windows version might not properly detect or manage available VRAM on AMD GPUs.
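The Windows log below shows the runner retrying with fewer GPU layers and a doubling backoff factor (0.01 → 1.28) after each failed allocation before giving up. As a rough illustration of that kind of fallback loop (a sketch only, not Ollama's actual implementation; `tryAlloc`, `fitting`, and the layer-reduction rule are assumptions):

```go
package main

import (
	"errors"
	"fmt"
)

// tryAlloc stands in for the real GPU buffer allocation; here it fails
// whenever more layers are requested than the pretend budget allows.
func tryAlloc(gpuLayers, fitting int) error {
	if gpuLayers > fitting {
		return errors.New("cudaMalloc failed: out of memory")
	}
	return nil
}

func main() {
	const totalLayers = 37 // gpt-oss:120b reports 37 model layers
	const fitting = 10     // assumed budget: pretend only 10 layers actually fit

	layers := totalLayers
	backoff := 0.0
	for {
		if err := tryAlloc(layers, fitting); err == nil {
			fmt.Printf("loaded with %d/%d GPU layers\n", layers, totalLayers)
			return
		}
		if backoff == 0 {
			backoff = 0.01
		} else {
			backoff *= 2
		}
		fmt.Printf("model layout did not fit, applying backoff %.2f\n", backoff)
		if backoff >= 1 { // nothing left to drop: give up, as in the log
			fmt.Println("model request too large for system")
			return
		}
		// offload proportionally fewer layers on the next attempt
		layers = int(float64(totalLayers) * (1 - backoff))
	}
}
```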

Relevant log output

time=2025-10-23T15:54:22.480+08:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\admin\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-23T15:54:22.501+08:00 level=INFO source=images.go:522 msg="total blobs: 23"
time=2025-10-23T15:54:22.503+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-23T15:54:22.505+08:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)"
time=2025-10-23T15:54:22.506+08:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-23T15:54:27.259+08:00 level=INFO source=types.go:112 msg="inference compute" id=0 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon(TM) 8060S Graphics" libdirs=ollama,rocm driver=60450.10 pci_id=c3:00.0 type=iGPU total="96.0 GiB" available="94.7 GiB"
[GIN] 2025/10/23 - 15:54:27 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/23 - 15:54:27 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/23 - 15:54:27 | 200 |      2.6128ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/23 - 15:54:27 | 200 |    131.9127ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/23 - 15:54:29 | 200 |     70.9769ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/23 - 15:54:34 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/23 - 15:54:34 | 200 |      2.2207ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/23 - 15:54:34 | 200 |     71.1751ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/23 - 15:54:34 | 200 |     69.8003ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-23T15:54:35.454+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-23T15:54:35.454+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-10-23T15:54:35.455+08:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-23T15:54:35.455+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-6be6d66a3f546d8c19b130dc41dc24b2fc159f84ffbc76a0ee0676205083cf5a --port 49759"
time=2025-10-23T15:54:35.463+08:00 level=INFO source=server.go:676 msg="loading model" "model layers"=37 requested=-1
time=2025-10-23T15:54:35.463+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-23T15:54:35.463+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-10-23T15:54:35.463+08:00 level=INFO source=server.go:682 msg="system memory" total="31.8 GiB" free="20.2 GiB" free_swap="19.3 GiB"
time=2025-10-23T15:54:35.463+08:00 level=INFO source=server.go:690 msg="gpu memory" id=0 library=ROCm available="94.2 GiB" free="94.7 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-23T15:54:35.509+08:00 level=INFO source=runner.go:1332 msg="starting ollama engine"
time=2025-10-23T15:54:35.516+08:00 level=INFO source=runner.go:1367 msg="Server listening on 127.0.0.1:49759"
time=2025-10-23T15:54:35.517+08:00 level=INFO source=runner.go:1205 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-23T15:54:35.561+08:00 level=INFO source=ggml.go:134 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=687 num_key_values=32
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon(TM) 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-10-23T15:54:35.618+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-23T15:54:36.417+08:00 level=INFO source=runner.go:1205 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 61223.74 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 64197741312
time=2025-10-23T15:54:36.451+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=0.01
time=2025-10-23T15:54:36.451+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=0.02
time=2025-10-23T15:54:36.451+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=0.04
time=2025-10-23T15:54:36.451+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=0.08
time=2025-10-23T15:54:36.452+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=0.16
time=2025-10-23T15:54:36.452+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=0.32
time=2025-10-23T15:54:36.452+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=0.64
time=2025-10-23T15:54:36.452+08:00 level=INFO source=runner.go:1205 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:20[ID:0 Layers:20(16..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 33399.51 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 35021923840
time=2025-10-23T15:54:36.537+08:00 level=INFO source=server.go:822 msg="model layout did not fit, applying backoff" backoff=1.28
time=2025-10-23T15:54:36.537+08:00 level=WARN source=server.go:977 msg="model request too large for system" requested="60.9 GiB" available="39.5 GiB" total="31.8 GiB" free="20.2 GiB" swap="19.3 GiB"
time=2025-10-23T15:54:36.537+08:00 level=INFO source=runner.go:1205 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-23T15:54:36.538+08:00 level=INFO source=device.go:206 msg="model weights" device=ROCm0 size="32.6 GiB"
time=2025-10-23T15:54:36.538+08:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="28.3 GiB"
time=2025-10-23T15:54:36.538+08:00 level=INFO source=device.go:238 msg="total memory" size="60.9 GiB"
time=2025-10-23T15:54:36.538+08:00 level=INFO source=sched.go:450 msg="Load failed" model=C:\Users\admin\.ollama\models\blobs\sha256-6be6d66a3f546d8c19b130dc41dc24b2fc159f84ffbc76a0ee0676205083cf5a error="model requires more system memory (60.9 GiB) than is available (39.5 GiB)"
time=2025-10-23T15:54:36.570+08:00 level=ERROR source=server.go:426 msg="llama runner terminated" error="exit status 1"
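The figures in the failure above are internally consistent; a trivial check of the log's own numbers (the variable names are just labels for this sketch):

```go
package main

import "fmt"

func main() {
	// From the Windows log above (all values in GiB):
	gpuWeights := 32.6 // "model weights" device=ROCm0
	cpuWeights := 28.3 // "model weights" device=CPU
	freeRAM := 20.2    // "system memory" free
	freeSwap := 19.3   // "system memory" free_swap

	requested := gpuWeights + cpuWeights // 60.9 GiB, matches "requested"
	available := freeRAM + freeSwap      // 39.5 GiB, matches "available"

	fmt.Printf("requested %.1f GiB, available %.1f GiB\n", requested, available)
	// requested 60.9 GiB, available 39.5 GiB -> the partially offloaded
	// layout cannot fit in system RAM plus swap, so the load fails.
}
```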

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.12.6

GiteaMirror added the bug, memory, needs more info labels 2026-05-04 21:49:26 -05:00
Author
Owner

@dhiltgen commented on GitHub (Oct 23, 2025):

@myoldcat can you clarify your "96G VRAM" comment? This is an iGPU, so that's shared system memory, correct? Have you dedicated system memory to the iGPU at the BIOS or other level? If you have dedicated, what is the total amount of RAM in the system, and how is it split between CPU and GPU? How much swap space do you have on Linux?

Author
Owner

@jessegross commented on GitHub (Oct 23, 2025):

If you can post the log from the successful run on Ubuntu that might be helpful as well.

Author
Owner

@myoldcat commented on GitHub (Oct 24, 2025):

The log from the successful run on Ubuntu:

time=2025-10-23T08:47:33.760Z level=DEBUG source=server.go:1720 msg="stopping llama server" pid=1250
time=2025-10-23T08:47:33.760Z level=DEBUG source=server.go:1726 msg="waiting for llama server to exit" pid=1250
time=2025-10-23T08:47:33.848Z level=DEBUG source=server.go:1730 msg="llama server stopped" pid=1250
time=2025-10-23T08:47:38.242Z level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-23T08:47:38.243Z level=INFO source=images.go:522 msg="total blobs: 39"
time=2025-10-23T08:47:38.243Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-23T08:47:38.244Z level=INFO source=routes.go:1564 msg="Listening on [::]:11434 (version 0.12.6)"
time=2025-10-23T08:47:38.244Z level=DEBUG source=sched.go:123 msg="starting llm scheduler"
time=2025-10-23T08:47:38.244Z level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-23T08:47:38.244Z level=DEBUG source=runner.go:448 msg="spawning runner with" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs=[]
time=2025-10-23T08:47:38.849Z level=DEBUG source=runner.go:451 msg="bootstrap discovery took" duration=605.148615ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs=[]
time=2025-10-23T08:47:38.849Z level=DEBUG source=runner.go:118 msg="filtering out unsupported or overlapping GPU library combinations" count=1
time=2025-10-23T08:47:38.849Z level=DEBUG source=runner.go:130 msg="verifying GPU is supported" library=/usr/lib/ollama/rocm description="AMD Radeon Graphics" compute=gfx1151 pci_id=c5:00.0
time=2025-10-23T08:47:38.849Z level=DEBUG source=runner.go:448 msg="spawning runner with" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="[GGML_CUDA_INIT=1 ROCR_VISIBLE_DEVICES=0]"
time=2025-10-23T08:47:39.517Z level=DEBUG source=runner.go:451 msg="bootstrap discovery took" duration=667.67065ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs="[GGML_CUDA_INIT=1 ROCR_VISIBLE_DEVICES=0]"
time=2025-10-23T08:47:39.517Z level=DEBUG source=runner.go:45 msg="GPU bootstrap discovery took" duration=1.273243027s
time=2025-10-23T08:47:39.517Z level=INFO source=types.go:112 msg="inference compute" id=0 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon Graphics" libdirs=ollama,rocm driver=60342.13 pci_id=c5:00.0 type=iGPU total="96.0 GiB" available="95.8 GiB"
[GIN] 2025/10/23 - 08:48:18 | 200 |       39.59µs |       127.0.0.1 | HEAD     "/"
time=2025-10-23T08:48:18.366Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/10/23 - 08:48:18 | 200 |    76.82256ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-23T08:48:18.514Z level=DEBUG source=runner.go:259 msg="refreshing free memory"
time=2025-10-23T08:48:18.514Z level=DEBUG source=runner.go:323 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2025-10-23T08:48:18.514Z level=DEBUG source=runner.go:448 msg="spawning runner with" OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs=[]
time=2025-10-23T08:48:19.169Z level=DEBUG source=runner.go:451 msg="bootstrap discovery took" duration=655.446551ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/rocm]" extra_envs=[]
time=2025-10-23T08:48:19.169Z level=DEBUG source=runner.go:45 msg="overall device VRAM discovery took" duration=655.613647ms
time=2025-10-23T08:48:19.184Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-23T08:48:19.185Z level=DEBUG source=sched.go:215 msg="loading first model" model=/root/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3
time=2025-10-23T08:48:19.249Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-23T08:48:19.249Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
time=2025-10-23T08:48:19.250Z level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-23T08:48:19.250Z level=DEBUG source=server.go:331 msg="adding gpu dependency paths" paths="[/usr/lib/ollama /usr/lib/ollama/rocm /usr/lib/ollama/rocm]"
time=2025-10-23T08:48:19.250Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 --port 35369"
time=2025-10-23T08:48:19.250Z level=DEBUG source=server.go:401 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_MAX_LOADED_MODELS=2 OLLAMA_NUM_PARALLEL=4 OLLAMA_KEEP_ALIVE=24h OLLAMA_HOST=0.0.0.0 OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/lib/ollama/rocm:/usr/lib/ollama:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama:/usr/lib/ollama/rocm:/usr/lib/ollama/rocm ROCR_VISIBLE_DEVICES=0
time=2025-10-23T08:48:19.251Z level=INFO source=server.go:676 msg="loading model" "model layers"=37 requested=-1
time=2025-10-23T08:48:19.252Z level=INFO source=server.go:682 msg="system memory" total="31.0 GiB" free="23.8 GiB" free_swap="8.0 GiB"
time=2025-10-23T08:48:19.252Z level=INFO source=server.go:690 msg="gpu memory" id=0 library=ROCm available="95.4 GiB" free="95.8 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-23T08:48:19.261Z level=INFO source=runner.go:1332 msg="starting ollama engine"
time=2025-10-23T08:48:19.261Z level=INFO source=runner.go:1367 msg="Server listening on 127.0.0.1:35369"
time=2025-10-23T08:48:19.263Z level=INFO source=runner.go:1205 msg=load request="{Operation:fit LoraPath:[] Parallel:4 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-23T08:48:19.291Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-23T08:48:19.291Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
time=2025-10-23T08:48:19.291Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
time=2025-10-23T08:48:19.291Z level=INFO source=ggml.go:134 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=471 num_key_values=30
time=2025-10-23T08:48:19.291Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-10-23T08:48:19.294Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/rocm
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
time=2025-10-23T08:48:19.868Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-10-23T08:48:19.869Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
time=2025-10-23T08:48:20.038Z level=DEBUG source=ggml.go:837 msg="compute graph" nodes=1985 splits=2
time=2025-10-23T08:48:20.039Z level=DEBUG source=device.go:206 msg="model weights" device=ROCm0 size="59.8 GiB"
time=2025-10-23T08:48:20.039Z level=DEBUG source=device.go:211 msg="model weights" device=CPU size="1.1 GiB"
time=2025-10-23T08:48:20.039Z level=DEBUG source=device.go:217 msg="kv cache" device=ROCm0 size="1.7 GiB"
time=2025-10-23T08:48:20.039Z level=DEBUG source=device.go:228 msg="compute graph" device=ROCm0 size="182.8 MiB"
time=2025-10-23T08:48:20.039Z level=DEBUG source=device.go:233 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-10-23T08:48:20.039Z level=DEBUG source=device.go:238 msg="total memory" size="62.8 GiB"
time=2025-10-23T08:48:20.039Z level=DEBUG source=server.go:721 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.ROCm0.ID=0 required.ROCm0.Weights="[1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1158278400]" required.ROCm0.Cache="[34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 0]" required.ROCm0.Graph=191632768
time=2025-10-23T08:48:20.039Z level=DEBUG source=server.go:915 msg="available gpu" id=0 library=ROCm "available layer vram"="95.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="182.8 MiB"
time=2025-10-23T08:48:20.039Z level=DEBUG source=server.go:732 msg="new layout created" layers="37[ID:0 Layers:37(0..36)]"
time=2025-10-23T08:48:20.039Z level=INFO source=runner.go:1205 msg=load request="{Operation:alloc LoraPath:[] Parallel:4 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-23T08:48:20.067Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
time=2025-10-23T08:48:20.072Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
time=2025-10-23T08:48:20.087Z level=DEBUG source=ggml.go:837 msg="compute graph" nodes=1985 splits=2
time=2025-10-23T08:48:20.087Z level=DEBUG source=device.go:206 msg="model weights" device=ROCm0 size="59.8 GiB"
time=2025-10-23T08:48:20.087Z level=DEBUG source=device.go:211 msg="model weights" device=CPU size="1.1 GiB"
time=2025-10-23T08:48:20.087Z level=DEBUG source=device.go:217 msg="kv cache" device=ROCm0 size="1.7 GiB"
time=2025-10-23T08:48:20.087Z level=DEBUG source=device.go:228 msg="compute graph" device=ROCm0 size="182.8 MiB"
time=2025-10-23T08:48:20.087Z level=DEBUG source=device.go:233 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-10-23T08:48:20.087Z level=DEBUG source=device.go:238 msg="total memory" size="62.8 GiB"
time=2025-10-23T08:48:20.087Z level=DEBUG source=server.go:721 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.ROCm0.ID=0 required.ROCm0.Weights="[1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1751096064 1158278400]" required.ROCm0.Cache="[34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 34603008 67108864 0]" required.ROCm0.Graph=191632768
time=2025-10-23T08:48:20.087Z level=DEBUG source=server.go:915 msg="available gpu" id=0 library=ROCm "available layer vram"="95.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="182.8 MiB"
time=2025-10-23T08:48:20.087Z level=DEBUG source=server.go:732 msg="new layout created" layers="37[ID:0 Layers:37(0..36)]"
time=2025-10-23T08:48:20.087Z level=INFO source=runner.go:1205 msg=load request="{Operation:commit LoraPath:[] Parallel:4 BatchSize:512 FlashAttention:true KvSize:32768 KvCacheType: NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-23T08:48:20.087Z level=INFO source=ggml.go:480 msg="offloading 36 repeating layers to GPU"
time=2025-10-23T08:48:20.087Z level=INFO source=ggml.go:487 msg="offloading output layer to GPU"
time=2025-10-23T08:48:20.087Z level=INFO source=ggml.go:492 msg="offloaded 37/37 layers to GPU"
time=2025-10-23T08:48:20.088Z level=INFO source=device.go:206 msg="model weights" device=ROCm0 size="59.8 GiB"
time=2025-10-23T08:48:20.088Z level=INFO source=device.go:211 msg="model weights" device=CPU size="1.1 GiB"
time=2025-10-23T08:48:20.088Z level=INFO source=device.go:217 msg="kv cache" device=ROCm0 size="1.7 GiB"
time=2025-10-23T08:48:20.088Z level=INFO source=device.go:228 msg="compute graph" device=ROCm0 size="182.8 MiB"
time=2025-10-23T08:48:20.088Z level=INFO source=device.go:233 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-10-23T08:48:20.088Z level=INFO source=device.go:238 msg="total memory" size="62.8 GiB"
time=2025-10-23T08:48:20.088Z level=INFO source=sched.go:482 msg="loaded runners" count=1
time=2025-10-23T08:48:20.088Z level=INFO source=server.go:1272 msg="waiting for llama runner to start responding"
time=2025-10-23T08:48:20.088Z level=INFO source=server.go:1306 msg="waiting for server to become available" status="llm server loading model"
time=2025-10-23T08:48:20.339Z level=DEBUG source=server.go:1316 msg="model load progress 0.02"
time=2025-10-23T08:48:20.590Z level=DEBUG source=server.go:1316 msg="model load progress 0.04"
time=2025-10-23T08:48:20.841Z level=DEBUG source=server.go:1316 msg="model load progress 0.05"
time=2025-10-23T08:48:21.092Z level=DEBUG source=server.go:1316 msg="model load progress 0.07"
time=2025-10-23T08:48:21.342Z level=DEBUG source=server.go:1316 msg="model load progress 0.10"
time=2025-10-23T08:48:21.593Z level=DEBUG source=server.go:1316 msg="model load progress 0.12"
time=2025-10-23T08:48:21.844Z level=DEBUG source=server.go:1316 msg="model load progress 0.14"
time=2025-10-23T08:48:22.095Z level=DEBUG source=server.go:1316 msg="model load progress 0.16"
time=2025-10-23T08:48:22.346Z level=DEBUG source=server.go:1316 msg="model load progress 0.18"
time=2025-10-23T08:48:22.596Z level=DEBUG source=server.go:1316 msg="model load progress 0.20"
time=2025-10-23T08:48:22.847Z level=DEBUG source=server.go:1316 msg="model load progress 0.22"
time=2025-10-23T08:48:23.098Z level=DEBUG source=server.go:1316 msg="model load progress 0.24"
time=2025-10-23T08:48:23.349Z level=DEBUG source=server.go:1316 msg="model load progress 0.27"
time=2025-10-23T08:48:23.600Z level=DEBUG source=server.go:1316 msg="model load progress 0.29"
time=2025-10-23T08:48:23.850Z level=DEBUG source=server.go:1316 msg="model load progress 0.31"
time=2025-10-23T08:48:24.101Z level=DEBUG source=server.go:1316 msg="model load progress 0.33"
time=2025-10-23T08:48:24.352Z level=DEBUG source=server.go:1316 msg="model load progress 0.35"
time=2025-10-23T08:48:24.603Z level=DEBUG source=server.go:1316 msg="model load progress 0.37"
time=2025-10-23T08:48:24.854Z level=DEBUG source=server.go:1316 msg="model load progress 0.40"
time=2025-10-23T08:48:25.105Z level=DEBUG source=server.go:1316 msg="model load progress 0.42"
time=2025-10-23T08:48:25.356Z level=DEBUG source=server.go:1316 msg="model load progress 0.44"
time=2025-10-23T08:48:25.606Z level=DEBUG source=server.go:1316 msg="model load progress 0.46"
time=2025-10-23T08:48:25.857Z level=DEBUG source=server.go:1316 msg="model load progress 0.48"
time=2025-10-23T08:48:26.108Z level=DEBUG source=server.go:1316 msg="model load progress 0.51"
time=2025-10-23T08:48:26.358Z level=DEBUG source=server.go:1316 msg="model load progress 0.53"
time=2025-10-23T08:48:26.609Z level=DEBUG source=server.go:1316 msg="model load progress 0.55"
time=2025-10-23T08:48:26.860Z level=DEBUG source=server.go:1316 msg="model load progress 0.57"
time=2025-10-23T08:48:27.111Z level=DEBUG source=server.go:1316 msg="model load progress 0.59"
time=2025-10-23T08:48:27.362Z level=DEBUG source=server.go:1316 msg="model load progress 0.61"
time=2025-10-23T08:48:27.614Z level=DEBUG source=server.go:1316 msg="model load progress 0.64"
time=2025-10-23T08:48:27.864Z level=DEBUG source=server.go:1316 msg="model load progress 0.66"
time=2025-10-23T08:48:28.115Z level=DEBUG source=server.go:1316 msg="model load progress 0.68"
time=2025-10-23T08:48:28.366Z level=DEBUG source=server.go:1316 msg="model load progress 0.70"
time=2025-10-23T08:48:28.617Z level=DEBUG source=server.go:1316 msg="model load progress 0.72"
time=2025-10-23T08:48:28.867Z level=DEBUG source=server.go:1316 msg="model load progress 0.74"
time=2025-10-23T08:48:29.118Z level=DEBUG source=server.go:1316 msg="model load progress 0.77"
time=2025-10-23T08:48:29.369Z level=DEBUG source=server.go:1316 msg="model load progress 0.79"
time=2025-10-23T08:48:29.620Z level=DEBUG source=server.go:1316 msg="model load progress 0.81"
time=2025-10-23T08:48:29.870Z level=DEBUG source=server.go:1316 msg="model load progress 0.83"
time=2025-10-23T08:48:30.121Z level=DEBUG source=server.go:1316 msg="model load progress 0.85"
time=2025-10-23T08:48:30.372Z level=DEBUG source=server.go:1316 msg="model load progress 0.87"
time=2025-10-23T08:48:30.622Z level=DEBUG source=server.go:1316 msg="model load progress 0.90"
time=2025-10-23T08:48:30.873Z level=DEBUG source=server.go:1316 msg="model load progress 0.92"
time=2025-10-23T08:48:31.124Z level=DEBUG source=server.go:1316 msg="model load progress 0.94"
time=2025-10-23T08:48:31.375Z level=DEBUG source=server.go:1316 msg="model load progress 0.96"
time=2025-10-23T08:48:31.625Z level=DEBUG source=server.go:1316 msg="model load progress 0.98"
time=2025-10-23T08:48:31.876Z level=DEBUG source=server.go:1316 msg="model load progress 0.99"
time=2025-10-23T08:48:32.106Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
time=2025-10-23T08:48:32.127Z level=INFO source=server.go:1310 msg="llama runner started in 12.88 seconds"
time=2025-10-23T08:48:32.127Z level=DEBUG source=sched.go:494 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:120b runner.inference="[{ID:0 Library:ROCm}]" runner.size="62.8 GiB" runner.vram="62.8 GiB" runner.parallel=4 runner.pid=84 runner.model=/root/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 runner.num_ctx=8192
[GIN] 2025/10/23 - 08:48:32 | 200 | 13.758151436s |       127.0.0.1 | POST     "/api/generate"
time=2025-10-23T08:48:32.127Z level=DEBUG source=sched.go:502 msg="context for request finished"
time=2025-10-23T08:48:32.127Z level=DEBUG source=sched.go:294 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:120b runner.inference="[{ID:0 Library:ROCm}]" runner.size="62.8 GiB" runner.vram="62.8 GiB" runner.parallel=4 runner.pid=84 runner.model=/root/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 runner.num_ctx=8192 duration=24h0m0s
time=2025-10-23T08:48:32.127Z level=DEBUG source=sched.go:312 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:120b runner.inference="[{ID:0 Library:ROCm}]" runner.size="62.8 GiB" runner.vram="62.8 GiB" runner.parallel=4 runner.pid=84 runner.model=/root/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 runner.num_ctx=8192 refCount=0

@myoldcat commented on GitHub (Oct 24, 2025):

@dhiltgen Thanks for your reply! Here are more details:

Both machines use AMD AI Max+ 395 with a unified memory architecture (UMA), and the total RAM is 128GB.
In BIOS, 96 GB of system memory is allocated to the GPU as VRAM.
So effectively:

  • GPU memory: 96 GB (reserved in BIOS)
  • CPU memory available: ~32 GB

Here is the system info of Windows:
![System info](https://github.com/user-attachments/assets/858d6501-3189-413a-ada0-87e1fcc6a715)

and the swap space on Ubuntu is 8 GB.
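
As a quick sanity check of these numbers against the logs in this thread: the snippet below is illustrative arithmetic only (approximate values read off the log output), not anything Ollama computes this way.

```go
package main

import "fmt"

// Approximate figures taken from the logs in this thread (GiB).
const (
	vramBudget     = 96.0 // BIOS carve-out, reported as ROCm0 "total"
	systemRAM      = 31.8 // system memory visible to Windows
	modelFootprint = 61.5 // gpt-oss:120b weights + KV cache + graph (approx.)
)

func main() {
	fmt.Println("fits entirely in VRAM:", modelFootprint <= vramBudget) // true
	fmt.Println("fits entirely in RAM: ", modelFootprint <= systemRAM)  // false
	// Any layout that spills most of the weights to the CPU side must fit
	// inside ~31.8 GiB of system RAM, which is why the partial-offload
	// attempt described above runs out of system memory.
}
```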


@jessegross commented on GitHub (Oct 24, 2025):

It's possible that Windows doesn't like the single 60 GB allocation for the weights (a later 32 GB allocation also failed). The upcoming 0.12.7 will try further backoffs - this probably won't fix the issue but it might tell us the largest allocation that does succeed. Once we get a feeling for what generally works, we could potentially set a cap and then do allocations in smaller pieces.
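
A minimal sketch of the probing idea (finding the largest single device allocation that succeeds) could look like the following; `tryAlloc` is a hypothetical stand-in for a hipMalloc/free round trip, not an Ollama or ggml API:

```go
package main

import "fmt"

// tryAlloc is a placeholder: in a real probe it would ask the GPU backend
// for a buffer of the given size and free it again. Here we pretend the
// driver refuses anything above 30 GiB, just to make the sketch runnable.
func tryAlloc(bytes uint64) bool {
	return bytes <= 30<<30
}

// largestAlloc binary-searches for the biggest size tryAlloc accepts.
func largestAlloc(lo, hi uint64) uint64 {
	for lo < hi {
		mid := lo + (hi-lo+1)/2
		if tryAlloc(mid) {
			lo = mid
		} else {
			hi = mid - 1
		}
	}
	return lo
}

func main() {
	max := largestAlloc(1<<30, 96<<30)
	fmt.Printf("largest single allocation: %.1f GiB\n", float64(max)/float64(1<<30))
}
```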


@myoldcat commented on GitHub (Oct 25, 2025):

We also tested in Windows using LM Studio with the same gpt-oss:120b model.
When using Vulkan llama.cpp, it runs successfully.
However, when using ROCm llama.cpp, it fails with similar errors.


@jessegross commented on GitHub (Oct 27, 2025):

The Vulkan backend caps individual allocations at 1G whereas the CUDA/ROCm backends do not, so that could support my comment above. If you can post the log from either the next release (0.12.7) or a build from the current source, that will give us more information on whether that is the cause.
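
For a concrete sense of what a 1 GiB per-buffer cap means for the failing allocation in the logs, the arithmetic works out as follows (illustration only; this is not how ggml's allocator is implemented):

```go
package main

import "fmt"

// How a 1 GiB per-buffer cap (as the Vulkan backend applies) would break up
// the single weight buffer that fails on ROCm in the logs above.
func main() {
	const maxBuf int64 = 1 << 30      // 1 GiB per-buffer cap
	const request int64 = 64197741312 // ROCm0 buffer size that failed (bytes)

	chunks := (request + maxBuf - 1) / maxBuf
	fmt.Printf("%d bytes -> %d buffers of at most 1 GiB each\n", request, chunks)
}
```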


@myoldcat commented on GitHub (Oct 31, 2025):

Using version 0.12.7-rc0, the model can run, but it cannot be fully loaded onto the GPU.

NAME          ID            SIZE   PROCESSOR        CONTEXT  UNTIL
gpt-oss:120b  a951a23b46a1  66 GB  54%/46% CPU/GPU  8192     4 minutes from now

logs

time=2025-10-29T17:08:30.953+08:00 level=INFO source=routes.go:1524 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\admin\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-29T17:08:31.072+08:00 level=INFO source=images.go:522 msg="total blobs: 26"
time=2025-10-29T17:08:31.075+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-29T17:08:31.076+08:00 level=INFO source=routes.go:1577 msg="Listening on 127.0.0.1:11434 (version 0.12.7-rc0)"
time=2025-10-29T17:08:31.079+08:00 level=INFO source=runner.go:76 msg="discovering available GPUs..."
time=2025-10-29T17:08:31.090+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57116"
time=2025-10-29T17:08:31.247+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57122"
time=2025-10-29T17:08:31.747+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57129"
time=2025-10-29T17:08:32.576+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 53925"
time=2025-10-29T17:08:34.075+08:00 level=INFO source=types.go:42 msg="inference compute" id=0 filtered_id=0 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon(TM) 8060S Graphics" libdirs=ollama,rocm driver=60450.10 pci_id=0000:c3:00.0 type=iGPU total="96.0 GiB" available="95.1 GiB"
[GIN] 2025/10/29 - 17:08:34 | 200 |       506.6µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:08:34 | 200 |       506.6µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:08:34 | 200 |      3.0959ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/29 - 17:08:35 | 200 |     89.0344ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/29 - 17:08:57 | 200 |     59.0768ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/29 - 17:09:02 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:09:02 | 200 |       2.617ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/29 - 17:09:02 | 200 |     68.3353ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/29 - 17:09:02 | 200 |     56.6994ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-29T17:09:02.644+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 59285"
time=2025-10-29T17:09:03.051+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-29T17:09:03.051+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-10-29T17:09:03.137+08:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-10-29T17:09:03.138+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-6be6d66a3f546d8c19b130dc41dc24b2fc159f84ffbc76a0ee0676205083cf5a --port 59291"
time=2025-10-29T17:09:03.144+08:00 level=INFO source=server.go:638 msg="loading model" "model layers"=37 requested=-1
time=2025-10-29T17:09:03.144+08:00 level=INFO source=server.go:643 msg="system memory" total="31.8 GiB" free="22.7 GiB" free_swap="108.3 GiB"
time=2025-10-29T17:09:03.144+08:00 level=INFO source=server.go:650 msg="gpu memory" id=0 library=ROCm available="94.6 GiB" free="95.1 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-29T17:09:03.186+08:00 level=INFO source=runner.go:1337 msg="starting ollama engine"
time=2025-10-29T17:09:03.192+08:00 level=INFO source=runner.go:1372 msg="Server listening on 127.0.0.1:59291"
time=2025-10-29T17:09:03.199+08:00 level=INFO source=runner.go:1210 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-29T17:09:03.235+08:00 level=INFO source=ggml.go:135 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=687 num_key_values=32
load_backend: loaded CPU backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon(TM) 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from C:\Users\admin\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-10-29T17:09:03.293+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-29T17:09:03.957+08:00 level=INFO source=runner.go:1210 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
**ggml_backend_cuda_buffer_type_alloc_buffer: allocating 61223.74 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 64197741312**
time=2025-10-29T17:09:03.993+08:00 level=INFO source=server.go:777 msg="model layout did not fit, applying backoff" backoff=0.10
time=2025-10-29T17:09:03.994+08:00 level=INFO source=server.go:777 msg="model layout did not fit, applying backoff" backoff=0.20
time=2025-10-29T17:09:03.994+08:00 level=INFO source=server.go:777 msg="model layout did not fit, applying backoff" backoff=0.30
time=2025-10-29T17:09:03.994+08:00 level=INFO source=server.go:777 msg="model layout did not fit, applying backoff" backoff=0.40
time=2025-10-29T17:09:03.994+08:00 level=INFO source=runner.go:1210 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:34[ID:0 Layers:34(2..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
**ggml_backend_cuda_buffer_type_alloc_buffer: allocating 56779.17 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 59537270528**
time=2025-10-29T17:09:04.029+08:00 level=INFO source=server.go:777 msg="model layout did not fit, applying backoff" backoff=0.50
time=2025-10-29T17:09:04.029+08:00 level=INFO source=runner.go:1210 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:28[ID:0 Layers:28(8..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
**ggml_backend_cuda_buffer_type_alloc_buffer: allocating 46759.31 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 49030693376**
time=2025-10-29T17:09:04.062+08:00 level=INFO source=server.go:777 msg="model layout did not fit, applying backoff" backoff=0.60
time=2025-10-29T17:09:04.062+08:00 level=INFO source=runner.go:1210 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:23[ID:0 Layers:23(13..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
**ggml_backend_cuda_buffer_type_alloc_buffer: allocating 38409.44 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 40275212416**
time=2025-10-29T17:09:04.092+08:00 level=INFO source=server.go:777 msg="model layout did not fit, applying backoff" backoff=0.70
time=2025-10-29T17:09:04.093+08:00 level=INFO source=runner.go:1210 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:17[ID:0 Layers:17(19..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=runner.go:1210 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:17[ID:0 Layers:17(19..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=ggml.go:481 msg="offloading 17 repeating layers to GPU"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=ggml.go:485 msg="offloading output layer to CPU"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=ggml.go:493 msg="offloaded 17/37 layers to GPU"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=device.go:212 msg="model weights" device=ROCm0 size="27.7 GiB"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=device.go:217 msg="model weights" device=CPU size="33.1 GiB"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=device.go:223 msg="kv cache" device=ROCm0 size="216.0 MiB"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=device.go:228 msg="kv cache" device=CPU size="234.0 MiB"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=device.go:234 msg="compute graph" device=ROCm0 size="132.6 MiB"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=device.go:239 msg="compute graph" device=CPU size="99.2 MiB"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=device.go:244 msg="total memory" size="61.5 GiB"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=sched.go:493 msg="loaded runners" count=1
time=2025-10-29T17:09:05.478+08:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding"
time=2025-10-29T17:09:05.478+08:00 level=INFO source=server.go:1270 msg="waiting for server to become available" status="llm server loading model"
[GIN] 2025/10/29 - 17:09:33 | 200 |     41.4312ms |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:09:33 | 200 |    197.2306ms |       127.0.0.1 | GET      "/api/tags"
time=2025-10-29T17:09:55.395+08:00 level=INFO source=server.go:1274 msg="llama runner started in 52.25 seconds"
[GIN] 2025/10/29 - 17:10:03 | 200 |     10.7807ms |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:10:03 | 200 |     40.4906ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/29 - 17:10:31 | 200 |         1m28s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/10/29 - 17:10:34 | 200 |       2.727ms |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:10:34 | 200 |     37.0723ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/29 - 17:10:34 | 200 |      3.6508ms |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/29 - 17:10:34 | 200 |      2.0834ms |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/10/29 - 17:11:04 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:11:04 | 200 |     15.7128ms |       127.0.0.1 | GET      "/api/tags"
time=2025-10-29T17:11:11.876+08:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 1"
time=2025-10-29T17:11:13.021+08:00 level=INFO source=routes.go:1524 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\admin\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-29T17:11:13.035+08:00 level=INFO source=images.go:522 msg="total blobs: 26"
time=2025-10-29T17:11:13.036+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-29T17:11:13.038+08:00 level=INFO source=routes.go:1577 msg="Listening on 127.0.0.1:11434 (version 0.12.7-rc0)"
time=2025-10-29T17:11:13.040+08:00 level=INFO source=runner.go:76 msg="discovering available GPUs..."
time=2025-10-29T17:11:13.052+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 53328"
time=2025-10-29T17:11:13.200+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 53335"
time=2025-10-29T17:11:13.694+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 53341"
time=2025-10-29T17:11:14.503+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\\Users\\admin\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 53348"
time=2025-10-29T17:11:16.106+08:00 level=INFO source=types.go:42 msg="inference compute" id=0 filtered_id=0 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon(TM) 8060S Graphics" libdirs=ollama,rocm driver=60450.10 pci_id=0000:c3:00.0 type=iGPU total="96.0 GiB" available="95.1 GiB"
[GIN] 2025/10/29 - 17:11:16 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:11:16 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:11:16 | 200 |      2.6338ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/29 - 17:11:16 | 200 |     80.0889ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/29 - 17:12:01 | 200 |       529.2µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:12:01 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/29 - 17:12:01 | 200 |      5.8315ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/29 - 17:12:01 | 200 |     71.4009ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/29 - 17:12:20 | 200 |    193.0775ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/29 - 17:12:23 | 200 |     101.924ms |       127.0.0.1 | POST     "/api/show"
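
One plausible reading of the backoff sequence above is that each step trims the VRAM budget by a fraction of the reported free memory and refits layers into the remainder; the sketch below roughly reproduces the observed 34/28/23/17 layer counts from approximate per-layer sizes in the log. The fitting rule here is a guess for illustration, not Ollama's actual logic.

```go
package main

import "fmt"

func main() {
	const freeVRAM = 95.0    // GiB reported available on ROCm0
	const perLayer = 1.63    // GiB, ~1751096064 bytes of weights per layer
	const totalLayers = 37

	for _, backoff := range []float64{0.4, 0.5, 0.6, 0.7} {
		budget := freeVRAM * (1 - backoff) // shrunken VRAM budget
		layers := int(budget / perLayer)   // layers that fit in the budget
		if layers > totalLayers {
			layers = totalLayers
		}
		fmt.Printf("backoff=%.1f -> budget %.1f GiB -> ~%d layers\n",
			backoff, budget, layers)
	}
}
```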

@kingkingyyk commented on GitHub (Dec 13, 2025):

Reproducible with a same-specced HP Mini Z2 G1a (the workstation version of OP's machine) on 0.13.3.

time=2025-12-13T09:56:14.830+08:00 level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:262144 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:30m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\testuser\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2025-12-13T09:56:14.877+08:00 level=INFO source=images.go:522 msg="total blobs: 45"
time=2025-12-13T09:56:14.881+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-12-13T09:56:14.886+08:00 level=INFO source=routes.go:1607 msg="Listening on [::]:11434 (version 0.13.3)"
time=2025-12-13T09:56:14.887+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-13T09:56:14.905+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\testuser\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 53123"
time=2025-12-13T09:56:15.109+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\testuser\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 58217"
time=2025-12-13T09:56:15.533+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\testuser\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57374"
time=2025-12-13T09:56:16.340+08:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2025-12-13T09:56:16.341+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\testuser\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 57386"
time=2025-12-13T09:56:17.444+08:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon(TM) 8060S Graphics" libdirs=ollama,rocm driver=60551.38 pci_id=0000:c4:00.0 type=iGPU total="96.0 GiB" available="95.1 GiB"
[GIN] 2025/12/13 - 09:56:17 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/12/13 - 09:56:17 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/12/13 - 09:56:17 | 200 |      8.2396ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/12/13 - 09:58:19 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/13 - 09:58:19 | 200 |      8.3428ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/12/13 - 10:02:30 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/12/13 - 10:02:30 | 200 |     64.6558ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/12/13 - 10:02:30 | 200 |     56.2054ms |       127.0.0.1 | POST     "/api/show"
time=2025-12-13T10:02:30.507+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\testuser\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 56101"
time=2025-12-13T10:02:31.322+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-12-13T10:02:31.322+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-12-13T10:02:31.403+08:00 level=WARN source=server.go:167 msg="requested context size too large for model" num_ctx=262144 n_ctx_train=131072
time=2025-12-13T10:02:31.403+08:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-13T10:02:31.413+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\testuser\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\testuser\\.ollama\\models\\blobs\\sha256-6be6d66a3f546d8c19b130dc41dc24b2fc159f84ffbc76a0ee0676205083cf5a --port 56112"
time=2025-12-13T10:02:31.422+08:00 level=INFO source=sched.go:443 msg="system memory" total="31.8 GiB" free="24.5 GiB" free_swap="55.5 GiB"
time=2025-12-13T10:02:31.422+08:00 level=INFO source=sched.go:450 msg="gpu memory" id=0 library=ROCm available="95.0 GiB" free="95.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-13T10:02:31.422+08:00 level=INFO source=server.go:709 msg="loading model" "model layers"=37 requested=-1
time=2025-12-13T10:02:31.480+08:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2025-12-13T10:02:31.524+08:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:56112"
time=2025-12-13T10:02:31.530+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-13T10:02:31.563+08:00 level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=687 num_key_values=32
load_backend: loaded CPU backend from C:\Users\testuser\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon(TM) 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from C:\Users\testuser\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-12-13T10:02:31.731+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-12-13T10:02:32.162+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:37[ID:0 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 61223.74 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 64197741312
time=2025-12-13T10:02:42.616+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.10
time=2025-12-13T10:02:42.616+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.20
time=2025-12-13T10:02:42.616+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.30
time=2025-12-13T10:02:42.616+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.40
time=2025-12-13T10:02:42.617+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:34[ID:0 Layers:34(2..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 56779.17 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 59537270528
time=2025-12-13T10:02:42.651+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.50
time=2025-12-13T10:02:42.651+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:28[ID:0 Layers:28(8..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 46759.31 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 49030693376
time=2025-12-13T10:02:42.687+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.60
time=2025-12-13T10:02:42.688+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:23[ID:0 Layers:23(13..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 38409.44 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 40275212416
time=2025-12-13T10:02:42.722+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.70
time=2025-12-13T10:02:42.722+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:17[ID:0 Layers:17(19..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 28389.58 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 29768635264
time=2025-12-13T10:02:42.756+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.80
time=2025-12-13T10:02:42.756+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:11[ID:0 Layers:11(25..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 18369.73 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 19262058112
time=2025-12-13T10:02:42.792+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=0.90
time=2025-12-13T10:02:42.792+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:5[ID:0 Layers:5(31..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8349.88 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate ROCm0 buffer of size 8755480960
time=2025-12-13T10:02:42.826+08:00 level=INFO source=server.go:831 msg="model layout did not fit, applying backoff" backoff=1.00
time=2025-12-13T10:02:42.826+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:131072 KvCacheType:q4_0 NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 65355994368
alloc_tensor_range: failed to allocate CPU buffer of size 65355994368
time=2025-12-13T10:02:43.862+08:00 level=WARN source=server.go:825 msg="memory layout cannot be allocated" memory.InputWeights=1158266880 memory.CPU.Weights="[1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1751095808 1158278400]"
time=2025-12-13T10:02:43.862+08:00 level=INFO source=runner.go:1278 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-13T10:02:43.862+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="60.9 GiB"
time=2025-12-13T10:02:43.862+08:00 level=INFO source=device.go:272 msg="total memory" size="60.9 GiB"
time=2025-12-13T10:02:43.862+08:00 level=INFO source=sched.go:470 msg="Load failed" model=C:\Users\testuser\.ollama\models\blobs\sha256-6be6d66a3f546d8c19b130dc41dc24b2fc159f84ffbc76a0ee0676205083cf5a error="memory layout cannot be allocated"
time=2025-12-13T10:02:43.900+08:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 1"
[GIN] 2025/12/13 - 10:02:43 | 500 |    13.515347s |       127.0.0.1 | POST     "/api/generate"

Running Windows 11 with the 25.12.1 Adrenalin driver. Same RAM configuration: 32 GB for CPU and 96 GB for GPU.

`25.12.1` Adrenalin driver. Same RAM configuration - 32GB for CPU & 96GB for GPU.
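
For anyone triaging this: every failure in the log above is a single large buffer allocation on the gfx1151 iGPU (Ollama asks ROCm for ~61 GiB in one buffer, then backs off through ~57, ~47, ~38, ~28, ~18 and ~8 GiB, all refused). A quick way to separate an Ollama accounting bug from a Windows ROCm driver limit is to attempt the same allocation sizes directly with the HIP runtime, outside Ollama. Below is a minimal, hypothetical probe, not an official reproducer: it assumes the AMD HIP SDK for Windows is installed and `hipcc` is on the PATH, and the sizes are simply copied from the backoff steps in the log.

```cpp
// alloc_probe.cpp - try single hipMalloc allocations at the sizes Ollama
// requested above, smallest first, and report which ones the driver accepts.
// Build (assumption): hipcc alloc_probe.cpp -o alloc_probe
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    hipDeviceProp_t prop{};
    if (hipGetDeviceProperties(&prop, 0) != hipSuccess) {
        std::printf("no HIP device found\n");
        return 1;
    }
    std::printf("device 0: %s, totalGlobalMem = %.1f GiB\n",
                prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));

    // Buffer sizes (MiB) taken from the ggml allocation attempts in the log.
    const double sizes_mib[] = {8349.88, 18369.73, 28389.58, 38409.44,
                                46759.31, 56779.17, 61223.74};
    for (double mib : sizes_mib) {
        size_t bytes = static_cast<size_t>(mib * 1024.0 * 1024.0);
        void* buf = nullptr;
        hipError_t err = hipMalloc(&buf, bytes);
        std::printf("%10.2f MiB -> %s\n", mib,
                    err == hipSuccess ? "OK" : hipGetErrorString(err));
        if (err == hipSuccess) hipFree(buf);
    }
    return 0;
}
```

If the probe fails at roughly the same sizes Ollama does, that would point at the Windows ROCm/WDDM stack capping single allocations on this iGPU (which would also explain why the same hardware works under Ubuntu); if the probe succeeds up to ~61 GiB, the problem is more likely in how Ollama's Windows build sizes its GPU buffers.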