[GH-ISSUE #11790] cannot run gpt-oss on my server, no matter 20B or 120B #54332

Closed
opened 2026-04-29 05:45:41 -05:00 by GiteaMirror · 11 comments

Originally created by @nemo4aerobat on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11790

What is the issue?

$ ollama pull gpt-oss:20b
$ ollama run gpt-oss:20b
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

Also tested:
$ OLLAMA_LOG=debug ollama run gpt-oss:20b
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

$ OLLAMA_NO_GPU=1 ollama run gpt-oss:20b
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

Ollama version: latest (updated Aug 7, 2025)
ollama version is 0.11.3

OS: Ubuntu/Linux

GPUs: 3 × NVIDIA RTX A6000 (48 GB VRAM each)

NVIDIA driver: 545.29.06

CUDA version: 12.3

Other models like llama3.3:70b work fine

Model info from ollama show gpt-oss:20b:

Architecture: gptoss

Quantization: MXFP4

Expected
Model should load and run successfully, as GPT-OSS models are advertised to work with Ollama and my hardware meets all requirements.

Actual
Immediate crash with exit status 2, no additional logs, even with OLLAMA_LOG=debug.

Relevant log output

(none provided)

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.11.3

GiteaMirror added the bug label 2026-04-29 05:45:41 -05:00

@rick-github commented on GitHub (Aug 7, 2025):

Server logs will help in debugging.
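
For anyone collecting these for the first time, a minimal sketch of pulling the server-side logs on a default Linux (systemd) install; the unit name and the OLLAMA_DEBUG variable below match what the install script and the logs later in this thread use:

$ journalctl -u ollama --no-pager -n 300   # last 300 lines of the ollama service log
$ sudo systemctl edit ollama               # add: [Service] / Environment="OLLAMA_DEBUG=1" to enable debug logging
$ sudo systemctl restart ollama            # restart so the override takes effect
$ journalctl -fu ollama                    # follow the log while reproducing the crash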

@edwrdq commented on GitHub (Aug 7, 2025):

> Server logs will help in debugging.

Original post:

> Immediate crash with exit status 2, no additional logs, even with OLLAMA_LOG=debug.

@rick-github commented on GitHub (Aug 7, 2025):

Server logs, not client logs.

@galets commented on GitHub (Aug 7, 2025):

I have same issue, and here are my logs:

# journalctl -fu ollama
Aug 07 18:07:26 llama30 systemd[1]: Started ollama.service - Ollama Service.
Aug 07 18:07:26 llama30 ollama[1255]: time=2025-08-07T18:07:26.618Z level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:131072 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:3 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Aug 07 18:07:26 llama30 ollama[1255]: time=2025-08-07T18:07:26.628Z level=INFO source=images.go:477 msg="total blobs: 98"
Aug 07 18:07:26 llama30 ollama[1255]: time=2025-08-07T18:07:26.628Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Aug 07 18:07:26 llama30 ollama[1255]: time=2025-08-07T18:07:26.629Z level=INFO source=routes.go:1350 msg="Listening on 127.0.0.1:11434 (version 0.11.3)"
Aug 07 18:07:26 llama30 ollama[1255]: time=2025-08-07T18:07:26.629Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 07 18:07:26 llama30 ollama[1255]: time=2025-08-07T18:07:26.638Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=0 gpu_type=gfx1151
Aug 07 18:07:26 llama30 ollama[1255]: time=2025-08-07T18:07:26.640Z level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1151 driver=6.12 name=1002:1586 total="96.0 GiB" available="95.8 GiB"
Aug 07 18:09:52 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:09:52 | 200 |      55.839µs |       127.0.0.1 | HEAD     "/"
Aug 07 18:09:53 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:09:53 | 200 |   89.418833ms |       127.0.0.1 | POST     "/api/show"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.153Z level=INFO source=server.go:135 msg="system memory" total="31.0 GiB" free="23.3 GiB" free_swap="64.0 GiB"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.153Z level=INFO source=server.go:175 msg=offload library=rocm layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[95.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="21.0 GiB" memory.required.partial="0 B" memory.required.kv="9.3 GiB" memory.required.allocations="[0 B]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="96.0 GiB" memory.graph.partial="96.0 GiB"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.192Z level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 393216 --batch-size 512 --threads 16 --no-mmap --parallel 3 --port 34349"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.193Z level=INFO source=sched.go:481 msg="loaded runners" count=1
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.193Z level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.193Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.201Z level=INFO source=runner.go:925 msg="starting ollama engine"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.202Z level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:34349"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.241Z level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
Aug 07 18:09:53 llama30 ollama[1255]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.257Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.258Z level=INFO source=ggml.go:367 msg="offloading 0 repeating layers to GPU"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.258Z level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.258Z level=INFO source=ggml.go:378 msg="offloaded 0/25 layers to GPU"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.258Z level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="12.8 GiB"
Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.444Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
Aug 07 18:09:54 llama30 ollama[1255]: ggml_aligned_malloc: insufficient memory (attempted to allocate 98333.38 MB)
Aug 07 18:09:54 llama30 ollama[1255]: ggml_backend_cpu_buffer_type_alloc_buffer: failed to allocate buffer of size 103110017024
Aug 07 18:09:54 llama30 ollama[1255]: ggml_gallocr_reserve_n: failed to allocate CPU buffer of size 103110017024
Aug 07 18:09:54 llama30 ollama[1255]: time=2025-08-07T18:09:54.648Z level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="96.0 GiB"
Aug 07 18:09:54 llama30 ollama[1255]: panic: insufficient memory - required allocations: {InputWeights:1158266880A CPU:{Name:CPU ID: Weights:[477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 1158278400A] Cache:[26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 26214400A 805306368A 0U] Graph:103110017024F} GPUs:[]}
Aug 07 18:09:54 llama30 ollama[1255]: goroutine 64 [running]:
Aug 07 18:09:54 llama30 ollama[1255]: github.com/ollama/ollama/ml/backend/ggml.(*Context).Reserve(0xc00156b6c0)
Aug 07 18:09:54 llama30 ollama[1255]:         github.com/ollama/ollama/ml/backend/ggml/ggml.go:677 +0x756
Aug 07 18:09:54 llama30 ollama[1255]: github.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc0006b6c60)
Aug 07 18:09:54 llama30 ollama[1255]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:826 +0xbcd
Aug 07 18:09:54 llama30 ollama[1255]: github.com/ollama/ollama/runner/ollamarunner.(*Server).initModel(0xc0006b6c60, {0x7ffe85baaca3?, 0x0?}, {0x10, 0x0, 0x0, {0x0, 0x0, 0x0}, 0x0}, ...)
Aug 07 18:09:54 llama30 ollama[1255]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:865 +0x270
Aug 07 18:09:54 llama30 ollama[1255]: github.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc0006b6c60, {0x6405a18ad790, 0xc000126230}, {0x7ffe85baaca3?, 0x0?}, {0x10, 0x0, 0x0, {0x0, 0x0, ...}, ...}, ...)
Aug 07 18:09:54 llama30 ollama[1255]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:878 +0xb8
Aug 07 18:09:54 llama30 ollama[1255]: created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
Aug 07 18:09:54 llama30 ollama[1255]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:959 +0xa11
Aug 07 18:09:54 llama30 ollama[1255]: time=2025-08-07T18:09:54.791Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
Aug 07 18:09:55 llama30 ollama[1255]: time=2025-08-07T18:09:55.042Z level=ERROR source=sched.go:487 msg="error loading llama server" error="llama runner process has terminated: exit status 2"
Aug 07 18:09:55 llama30 ollama[1255]: [GIN] 2025/08/07 - 18:09:55 | 500 |  2.004535387s |       127.0.0.1 | POST     "/api/generate"
Aug 07 18:10:00 llama30 ollama[1255]: time=2025-08-07T18:10:00.043Z level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000754365 runner.size="21.0 GiB" runner.vram="0 B" runner.parallel=3 runner.pid=1826 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
Aug 07 18:10:00 llama30 ollama[1255]: time=2025-08-07T18:10:00.293Z level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.251111966 runner.size="21.0 GiB" runner.vram="0 B" runner.parallel=3 runner.pid=1826 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
Aug 07 18:10:00 llama30 ollama[1255]: time=2025-08-07T18:10:00.543Z level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.500831782 runner.size="21.0 GiB" runner.vram="0 B" runner.parallel=3 runner.pid=1826 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583

I am running on Ryzen AI Max+ with 128G; 96G allocated for GPU

@galets commented on GitHub (Aug 7, 2025):

Update: I extended swap to 128G, and the model loaded.

Interestingly, not that much swap is in use after the model has initially been loaded:

[Image: screenshot of memory/swap usage after the model loaded]
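
(Aside: for anyone trying the same workaround, a generic sketch of adding a swap file on Linux; the 128G size mirrors the comment above and is illustrative, not a recommendation.)

$ sudo fallocate -l 128G /swapfile                              # reserve a 128 GiB file for swap
$ sudo chmod 600 /swapfile                                      # swapon requires the file not be world-readable
$ sudo mkswap /swapfile                                         # format it as swap space
$ sudo swapon /swapfile                                         # enable it immediately
$ echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # persist across reboots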

@rick-github commented on GitHub (Aug 7, 2025):

Aug 07 18:09:53 llama30 ollama[1255]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so

The ROCm backend wasn't loaded, so the runner had to load the model into system RAM. At the time, there was only 87.3GB free:

Aug 07 18:09:53 llama30 ollama[1255]: time=2025-08-07T18:09:53.153Z level=INFO source=server.go:135 msg="system memory" total="31.0 GiB" free="23.3 GiB" free_swap="64.0 GiB"

So the allocation required to hold the model, 98G, failed:

Aug 07 18:09:54 llama30 ollama[1255]: ggml_aligned_malloc: insufficient memory (attempted to allocate 98333.38 MB)

Extending the swap allows the model to load, but inference is running on the CPU, not the GPU. How did you install ollama?
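
(A quick way to confirm which ggml backends the runner picked up, using the same log source as above and the directory referenced in the next reply; paths assume the default install from the install script.)

$ journalctl -u ollama --no-pager | grep load_backend   # on a working GPU setup you'd expect a ROCm/CUDA backend here, not only the CPU one
$ ls -l /usr/local/lib/ollama                            # the GPU backend libraries are bundled here by the installer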

@galets commented on GitHub (Aug 7, 2025):

Ouch, I didn't realize that. The update was done a couple of minutes ago, and I still have the output in my terminal:

root@llama30:~# curl -fsSL https://ollama.com/install.sh | sh

>>> Cleaning up old version at /usr/local/lib/ollama
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> Downloading Linux ROCm amd64 bundle
######################################################################## 100.0%
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
>>> AMD GPU ready.

followed by a reboot.

I realize I'm hijacking this issue, since the OP has an NVIDIA GPU. If this needs a separate thread, I can stop posting here.

@rick-github commented on GitHub (Aug 7, 2025):

Yes, open a new issue and paste the contents of ls -l /usr/local/lib/ollama.

@jia-zhen-yu commented on GitHub (Aug 8, 2025):

8月 08 08:02:04 sgai-H3C-UniServer-R4900-G6 ollama[2579635]: time=2025-08-08T08:02:04.007+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
8月 08 08:02:04 sgai-H3C-UniServer-R4900-G6 ollama[2579635]: time=2025-08-08T08:02:04.078+08:00 level=ERROR source=server.go:464 msg="llama runner terminated" error="exit status 2"
8月 08 08:02:04 sgai-H3C-UniServer-R4900-G6 ollama[2579635]: time=2025-08-08T08:02:04.258+08:00 level=ERROR source=sched.go:487 msg="error loading llama server" error="llama runner process has terminated: error:fault"
8月 08 08:02:04 sgai-H3C-UniServer-R4900-G6 ollama[2579635]: [GIN] 2025/08/08 - 08:02:04 | 500 | 2.196983034s | 127.0.0.1 | POST "/api/generate"
8月 08 08:02:09 sgai-H3C-UniServer-R4900-G6 ollama[2579635]: time=2025-08-08T08:02:09.287+08:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.028900293 runner.size="82.3 GiB" runner.vram="82.3 GiB" runner.parallel=16 runner.pid=3749600 runner.model=/home/sgai/mydata/data1/ollamadata/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
8月 08 08:02:09 sgai-H3C-UniServer-R4900-G6 ollama[2579635]: time=2025-08-08T08:02:09.608+08:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.349644587 runner.size="82.3 GiB" runner.vram="82.3 GiB" runner.parallel=16 runner.pid=3749600 runner.model=/home/sgai/mydata/data1/ollamadata/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
8月 08 08:02:09 sgai-H3C-UniServer-R4900-G6 ollama[2579635]: time=2025-08-08T08:02:09.962+08:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.704095808 runner.size="82.3 GiB" runner.vram="82.3 GiB" runner.parallel=16 runner.pid=3749600 runner.model=/home/sgai/mydata/data1/ollamadata/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583

@jia-zhen-yu commented on GitHub (Aug 8, 2025):

| NVIDIA-SMI 550.144.03 Driver Version: 550.144.03 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA L40 Off | 00000000:3D:00.0 Off | 0 |
| N/A 37C P0 79W / 300W | 32616MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA L40 Off | 00000000:BD:00.0 Off | 0 |
| N/A 39C P0 80W / 300W | 586MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1866 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 3754382 C /usr/local/bin/ollama 32594MiB |
| 1 N/A N/A 1866 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 25013 C python 564MiB |
+-----------------------------------------------------------------------------------------+

Other models such as Qwen3 run normally on the GPU.

@rick-github commented on GitHub (Aug 8, 2025):

@jia-zhen-yu Post the full log.

Reference: github-starred/ollama#54332