[GH-ISSUE #11793] ollama will not use GPU to run gpt-oss #69881

Closed
opened 2026-05-04 19:42:30 -05:00 by GiteaMirror · 2 comments

Originally created by @galets on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11793

What is the issue?

Hardware: AI Max+ with 128 GB of shared memory, 96 GB of which is allocated to the GPU.

Ollama was updated via the install script:

root@llama30:~# curl -fsSL https://ollama.com/install.sh | sh

>>> Cleaning up old version at /usr/local/lib/ollama
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> Downloading Linux ROCm amd64 bundle
######################################################################## 100.0%
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
>>> AMD GPU ready.

After a system restart, the journal shows:

Aug 07 18:26:44 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:26:44 | 200 |      41.119µs |       127.0.0.1 | GET      "/api/ps"
Aug 07 18:26:48 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:26:48 | 200 |    2.093067ms |       127.0.0.1 | GET      "/api/tags"
Aug 07 18:26:48 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:26:48 | 200 |      63.267µs |       127.0.0.1 | GET      "/api/ps"
Aug 07 18:27:18 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:27:18 | 200 |         1m16s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:28:00 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:28:00 | 200 | 42.232392706s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:29:48 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:29:48 | 200 |         1m47s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:30:02 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:30:02 | 200 |          1m3s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:32:22 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:32:22 | 200 |         2m19s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:42:47 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:42:47 | 200 |         3m12s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:45:04 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:45:04 | 200 |         2m16s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:46:17 llama30 systemd[1]: Stopping ollama.service - Ollama Service...
Aug 07 18:46:17 llama30 ollama[1255]: time=2025-08-07T18:46:17.970Z level=ERROR source=server.go:807 msg="post predict" error="Post \"http://127.0.0.1:43035/completion\": context canceled"
Aug 07 18:46:17 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:46:17 | 500 |         1m13s |       127.0.0.1 | POST     "/api/chat"
Aug 07 18:46:18 llama30 systemd[1]: ollama.service: Deactivated successfully.
Aug 07 18:46:18 llama30 systemd[1]: Stopped ollama.service - Ollama Service.
Aug 07 18:46:18 llama30 systemd[1]: ollama.service: Consumed 5h 3min 24.574s CPU time.
Aug 07 18:46:18 llama30 systemd[1]: Started ollama.service - Ollama Service.
Aug 07 18:46:18 llama30 ollama[117872]: time=2025-08-07T18:46:18.736Z level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:131072 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:3 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 07 18:46:18 llama30 ollama[117872]: time=2025-08-07T18:46:18.739Z level=INFO source=images.go:477 msg="total blobs: 98"
Aug 07 18:46:18 llama30 ollama[117872]: time=2025-08-07T18:46:18.739Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Aug 07 18:46:18 llama30 ollama[117872]: time=2025-08-07T18:46:18.740Z level=INFO source=routes.go:1350 msg="Listening on 127.0.0.1:11434 (version 0.11.3)"
Aug 07 18:46:18 llama30 ollama[117872]: time=2025-08-07T18:46:18.740Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 07 18:46:18 llama30 ollama[117872]: time=2025-08-07T18:46:18.746Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=0 gpu_type=gfx1151
Aug 07 18:46:18 llama30 ollama[117872]: time=2025-08-07T18:46:18.749Z level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1151 driver=6.12 name=1002:1586 total="96.0 GiB" available="95.8 GiB"
Aug 07 18:47:25 llama30 ollama[117872]: [GIN] 2025/08/07 - 18:47:25 | 200 |    1.765323ms |       127.0.0.1 | GET      "/api/tags"
Aug 07 18:47:25 llama30 ollama[117872]: [GIN] 2025/08/07 - 18:47:25 | 200 |      93.127µs |       127.0.0.1 | GET      "/api/ps"
Aug 07 18:47:28 llama30 ollama[117872]: [GIN] 2025/08/07 - 18:47:28 | 200 |      62.308µs |       127.0.0.1 | GET      "/api/version"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.739Z level=INFO source=server.go:135 msg="system memory" total="31.0 GiB" free="24.7 GiB" free_swap="110.4 GiB"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.739Z level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[95.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="21.0 GiB" memory.required.partial="0 B" memory.required.kv="9.3 GiB" memory.required.allocations="[0 B]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="96.0 GiB" memory.graph.partial="96.0 GiB"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.779Z level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 393216 --batch-size 512 --threads 16 --no-mmap --parallel 3 --port 44509"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.779Z level=INFO source=sched.go:481 msg="loaded runners" count=1
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.779Z level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.780Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.788Z level=INFO source=runner.go:925 msg="starting ollama engine"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.788Z level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:44509"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.828Z level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
Aug 07 18:47:49 llama30 ollama[117872]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.849Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.849Z level=INFO source=ggml.go:367 msg="offloading 0 repeating layers to GPU"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.849Z level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.849Z level=INFO source=ggml.go:378 msg="offloaded 0/25 layers to GPU"
Aug 07 18:47:49 llama30 ollama[117872]: time=2025-08-07T18:47:49.849Z level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="12.8 GiB"
Aug 07 18:47:50 llama30 ollama[117872]: time=2025-08-07T18:47:50.031Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 07 18:47:51 llama30 ollama[117872]: time=2025-08-07T18:47:51.369Z level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="96.0 GiB"
Aug 07 18:47:55 llama30 ollama[117872]: time=2025-08-07T18:47:55.078Z level=INFO source=server.go:637 msg="llama runner started in 5.30 seconds"

Loading another model, however, does use the GPU.

Before:

root@llama30:~# ollama ps
NAME              ID              SIZE     PROCESSOR    CONTEXT    UNTIL               
gpt-oss:latest    f2b8351c629c    22 GB    100% CPU     131072     49 minutes from now    

After loading another model:

root@llama30:~# ollama ps
NAME              ID              SIZE     PROCESSOR    CONTEXT    UNTIL               
mistral:latest    6577803aa9a0    24 GB    100% GPU     131072     49 minutes from now    
gpt-oss:latest    f2b8351c629c    22 GB    100% CPU     131072     48 minutes from now    

In another thread I was asked to post a file listing, so here it is:

root@llama30:~# ls -l /usr/local/lib/ollama
total 2703152
lrwxrwxrwx 1 root root         21 Aug  6 05:41 libcublas.so.12 -> libcublas.so.12.8.4.1
-rwxr-xr-x 1 root root  116388640 Jul  8  2015 libcublas.so.12.8.4.1
lrwxrwxrwx 1 root root         23 Aug  6 05:41 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
-rwxr-xr-x 1 root root  751771728 Jul  8  2015 libcublasLt.so.12.8.4.1
lrwxrwxrwx 1 root root         20 Aug  6 05:41 libcudart.so.12 -> libcudart.so.12.8.90
-rwxr-xr-x 1 root root     728800 Jul  8  2015 libcudart.so.12.8.90
-rwxr-xr-x 1 root root     595648 Aug  6 05:32 libggml-base.so
-rwxr-xr-x 1 root root     619280 Aug  6 05:32 libggml-cpu-alderlake.so
-rwxr-xr-x 1 root root     619280 Aug  6 05:32 libggml-cpu-haswell.so
-rwxr-xr-x 1 root root     729872 Aug  6 05:32 libggml-cpu-icelake.so
-rwxr-xr-x 1 root root     606992 Aug  6 05:32 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 root root     729872 Aug  6 05:32 libggml-cpu-skylakex.so
-rwxr-xr-x 1 root root     480048 Aug  6 05:32 libggml-cpu-sse42.so
-rwxr-xr-x 1 root root     480048 Aug  6 05:32 libggml-cpu-x64.so
-rwxr-xr-x 1 root root 1288402984 Aug  6 05:41 libggml-cuda.so
-rwxr-xr-x 1 root root  605826112 Aug  6 05:46 libggml-hip.so
drwxr-xr-x 1 root root        970 Aug  6 05:46 rocm
root@llama30:~# ls -l /usr/local/lib/ollama/rocm/
total 1921028
lrwxrwxrwx 1 root root         25 Aug  6 05:46 libamd_comgr.so.2 -> libamd_comgr.so.2.8.60303
-rwxr-xr-x 1 root root  144125696 Feb 10 19:41 libamd_comgr.so.2.8.60303
lrwxrwxrwx 1 root root         24 Aug  6 05:46 libamdhip64.so.6 -> libamdhip64.so.6.3.60303
-rwxr-xr-x 1 root root   22294280 Feb 10 20:12 libamdhip64.so.6.3.60303
lrwxrwxrwx 1 root root         17 Aug  6 05:46 libdrm.so.2 -> libdrm.so.2.123.0
-rwxr-xr-x 1 root root     106888 Feb  7 19:06 libdrm.so.2.123.0
lrwxrwxrwx 1 root root         24 Aug  6 05:46 libdrm_amdgpu.so.1 -> libdrm_amdgpu.so.1.123.0
-rwxr-xr-x 1 root root      58200 Feb  7 19:06 libdrm_amdgpu.so.1.123.0
-rwxr-xr-x 1 root root     109000 Apr  6  2024 libelf-0.190.so
lrwxrwxrwx 1 root root         15 Aug  6 05:46 libelf.so.1 -> libelf-0.190.so
lrwxrwxrwx 1 root root         23 Aug  6 05:46 libhipblas.so.2 -> libhipblas.so.2.3.60303
-rwxr-xr-x 1 root root    1052288 Feb 11 06:36 libhipblas.so.2.3.60303
lrwxrwxrwx 1 root root         26 Aug  6 05:46 libhipblaslt.so.0 -> libhipblaslt.so.0.10.60303
-rwxr-xr-x 1 root root    7450504 Feb 11 03:27 libhipblaslt.so.0.10.60303
lrwxrwxrwx 1 root root         30 Aug  6 05:46 libhsa-runtime64.so.1 -> libhsa-runtime64.so.1.14.60303
-rwxr-xr-x 1 root root    3259872 Feb 10 19:40 libhsa-runtime64.so.1.14.60303
lrwxrwxrwx 1 root root         16 Aug  6 05:46 libnuma.so.1 -> libnuma.so.1.0.0
-rwxr-xr-x 1 root root      51400 Apr  6  2024 libnuma.so.1.0.0
lrwxrwxrwx 1 root root         23 Aug  6 05:46 librocblas.so.4 -> librocblas.so.4.3.60303
-rwxr-xr-x 1 root root   74646880 Feb 11 05:44 librocblas.so.4.3.60303
lrwxrwxrwx 1 root root         32 Aug  6 05:46 librocprofiler-register.so.0 -> librocprofiler-register.so.0.4.0
-rwxr-xr-x 1 root root     872192 Feb 10 19:08 librocprofiler-register.so.0.4.0
lrwxrwxrwx 1 root root         25 Aug  6 05:46 librocsolver.so.0 -> librocsolver.so.0.3.60303
-rwxr-xr-x 1 root root 1713040960 Feb 11 06:14 librocsolver.so.0.3.60303
drwxr-xr-x 1 root root         14 Aug  6 05:46 rocblas

Relevant log output


OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.11.3

GiteaMirror added the bug label 2026-05-04 19:42:30 -05:00

@rick-github commented on GitHub (Aug 7, 2025):

Your problem is that you are allocating too much context: 3 × 128k. This increases the size of the memory graph to the point where it no longer fits in the 96 GB you have allocated to the GPU:

Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.153Z level=INFO source=server.go:175 msg=offload
 library=rocm layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[95.8 GiB]"
 memory.gpu_overhead="0 B" memory.required.full="21.0 GiB" memory.required.partial="0 B" memory.required.kv="9.3 GiB"
 memory.required.allocations="[0 B]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB"
 memory.weights.nonrepeating="1.1 GiB" memory.graph.full="96.0 GiB" memory.graph.partial="96.0 GiB"
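
Spelling out the numbers from that line and the runner command in the log above:

3 parallel slots × 131072 tokens/slot = 393216 tokens   (matches --ctx-size 393216 in the runner command)
memory.graph.full = 96.0 GiB > memory.available = 95.8 GiB   (so 0/25 layers are offloaded)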

Since the graph can't fit, ollama doesn't allocate any layers to the GPU, so the whole model is scheduled into system RAM. You can avoid this by decreasing the context length (OLLAMA_CONTEXT_LENGTH) or decreasing parallelism (OLLAMA_NUM_PARALLEL).
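
For reference, one way to apply this on a systemd-based install is a drop-in override (a sketch; 32768 and 1 are example values, not specific recommendations from this thread):

sudo systemctl edit ollama.service

# in the editor that opens, add:
[Service]
Environment="OLLAMA_CONTEXT_LENGTH=32768"
Environment="OLLAMA_NUM_PARALLEL=1"

# then apply the change:
sudo systemctl restart ollama.service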

@galets commented on GitHub (Aug 7, 2025):

Indeed that was the issue:

NAME              ID              SIZE     PROCESSOR    CONTEXT    UNTIL               
gpt-oss:latest    f2b8351c629c    23 GB    100% GPU     32768      49 minutes from now    

Thank you
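
As an alternative to the server-wide environment variable, the context length can also be lowered per request via the API's num_ctx option (a sketch; the prompt is illustrative):

curl http://127.0.0.1:11434/api/chat -d '{
  "model": "gpt-oss",
  "messages": [{"role": "user", "content": "hello"}],
  "options": {"num_ctx": 32768}
}'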
