[GH-ISSUE #11713] gpt-oss Error: 500 Internal Server Error: llama runner process has terminated: error:fault (0.11.3-rc0) #54267

Closed
opened 2026-04-29 05:32:15 -05:00 by GiteaMirror · 34 comments

Originally created by @LiangYang666 on GitHub (Aug 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11713

What is the issue?

ollama run gpt-oss:20b
Error: 500 Internal Server Error: llama runner process has terminated: error:fault

ollama -v
ollama version is 0.11.3-rc0

Relevant log output

load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-08-06T10:24:56.928+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-08-06T10:24:57.024+08:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
unexpected fault address 0x13af3b0000
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0x13af3b0000 pc=0x5608ebc70780]

goroutine 111 gp=0xc00017e540 m=15 mp=0xc000581008 [running]:
runtime.throw({0x5608ecb9848b?, 0xc00017e540?})
        runtime/

......

        net/fd_posix.go:55 +0x25 fp=0xc0001b9aa8 sp=0xc0001b9a60 pc=0x5608ebddcde5
net.(*conn).Read(0xc0006ce000, {0xc00191e000?, 0x0?, 0x0?})
        net/net.go:194 +0x45 fp=0xc0001b9af0 sp=0xc0001b9aa8 pc=0x5608ebdeb1a5
net/http.(*connReader).Read(0xc000df4330, {0xc00191e000, 0x1000, 0x1000})
        net/http/server.go:798 +0x159 fp=0xc0001b9b40 sp=0xc0001b9af0 pc=0x5608ebfd74b9
bufio.(*Reader).fill(0xc0001c00c0)
        bufio/bufio.go:113 +0x103 fp=0xc0001b9b78 sp=0xc0001b9b40 pc=0x5608ebe02943
bufio.(*Reader).Peek(0xc0001c00c0, 0x4)
        bufio/bufio.go:152 +0x53 fp=0xc0001b9b98 sp=0xc0001b9b78 pc=0x5608ebe02a73
net/http.(*conn).serve(0xc00026bcb0, {0x5608ed056758, 0xc000376d20})
        net/http/server.go:2137 +0x785 fp=0xc0001b9fb8 sp=0xc0001b9b98 pc=0x5608ebfdd2a5
net/http.(*Server).Serve.gowrap3()
        net/http/server.go:3454 +0x28 fp=0xc0001b9fe0 sp=0xc0001b9fb8 pc=0x5608ebfe2a08
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0001b9fe8 sp=0xc0001b9fe0 pc=0x5608ebce8481
created by net/http.(*Server).Serve in goroutine 1
        net/http/server.go:3454 +0x485
time=2025-08-06T10:24:57.405+08:00 level=ERROR source=server.go:464 msg="llama runner terminated" error="exit status 2"
time=2025-08-06T10:24:57.525+08:00 level=ERROR source=sched.go:487 msg="error loading llama server" error="llama runner process has terminated: error:fault"
[GIN] 2025/08/06 - 10:24:57 | 500 |  1.867973924s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.3-rc0

GiteaMirror added the bug label 2026-04-29 05:32:16 -05:00

@seamo-sun commented on GitHub (Aug 6, 2025):

+1

@Wangyiquan95 commented on GitHub (Aug 6, 2025):

I updated Ollama to 0.11.3 and ran ollama run gpt-oss.

I encountered a similar issue:
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error:fault

@imAndyrrr commented on GitHub (Aug 6, 2025):

the same. 0.11.3.

@phiyodr commented on GitHub (Aug 6, 2025):

Same error for ollama/ollama:0.11.0 (https://hub.docker.com/layers/ollama/ollama/0.11.0/images/sha256-18891f25a4023ee405f118493b5e0d57405483d2d6ab2c6837584750fd4f8a5c).

@graz68a commented on GitHub (Aug 6, 2025):

same here

@pengyuwei commented on GitHub (Aug 6, 2025):

Same error

$ ollama --version
ollama version is 0.11.2
$ ollama run gpt-oss:latest
Error: 500 Internal Server Error: llama runner process has terminated: error:fault

@graz68a commented on GitHub (Aug 6, 2025):

Using Windows 11, I did a reset in Settings and now it works for me.

@minburg commented on GitHub (Aug 6, 2025):

Can you specify what reset you did exactly? Thanks

@graz68a commented on GitHub (Aug 6, 2025):

In Settings, I reset to defaults and then changed the model location to G:\ollama; now it works.
Fairly slow on my PC, however.

@lenin2001 commented on GitHub (Aug 6, 2025):

the same

@Moosdijk commented on GitHub (Aug 6, 2025):

Same error on 0.11.3:
ollama run hf.co/unsloth/gemma-3n-E4B-it-GGUF:Q4_K_M

I did manage to get the gemma-3n-E4B-it-GGUF:Q4_K_M model running by forcing it onto the CPU, setting num_gpu to 0 in open-webui.
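
For reference, a minimal sketch of the same workaround through Ollama's REST API, assuming the standard options.num_gpu request parameter and the default endpoint on localhost:11434 (the prompt here is just a placeholder):

    # Force CPU-only inference for a single request by setting num_gpu to 0
    curl http://localhost:11434/api/generate -d '{
      "model": "hf.co/unsloth/gemma-3n-E4B-it-GGUF:Q4_K_M",
      "prompt": "Hello",
      "options": { "num_gpu": 0 }
    }'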

@marceloboemeke commented on GitHub (Aug 6, 2025):

Edit: I deleted the "lib" folder inside the Ollama installation folder and it seemed to fix this issue.
Same error on 0.11.2, Windows 11. Resetting the settings did not help.

@rindholt commented on GitHub (Aug 6, 2025):

Same issue on linux with 0.11.2

@Moosdijk commented on GitHub (Aug 6, 2025):

The error seems to be related to the GPU usage.
The last time I used ollama (through open-webui), I could load and use models that currently give this error.
Forcing the model to run on CPU by setting num_gpu to 0 in open-webui seems to fix it (but not for gpt-oss).

@jzzhang001 commented on GitHub (Aug 6, 2025):

Tried gpt-oss:20b with the same issue on 0.11.2 and 0.11.3; other models like qwen3 are OK.

@f10org commented on GitHub (Aug 6, 2025):

Same issue with gpt-oss:20b on 0.11.2 on Linux with RTX 6000 Ada. No issues with other models.

@SunilBhatiaTelus commented on GitHub (Aug 6, 2025):

Same issue with 0.11.2 while running gpt-oss:20b on Mac with 16 GB RAM

@SqrtMinusOne commented on GitHub (Aug 6, 2025):

Had the same issue with ollama 0.11.3. Deleting the lib folder from the previous installation fixed it.

@Wangyiquan95 commented on GitHub (Aug 6, 2025):

Removing the old folder and installing again fixed the bug.

@jessegross commented on GitHub (Aug 6, 2025):

@LiangYang666 Please post the full log. Everyone else - this is a generic error message and the issues are not necessarily related to each other.
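
For anyone attaching the full log on a systemd-based Linux install, a minimal sketch (assuming the default ollama service unit created by the official installer):

    # Capture the most recent server output, including the runner crash trace
    journalctl -u ollama --no-pager -n 500 > ollama.log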

@matr1xp commented on GitHub (Aug 6, 2025):

+1 same running Ollama 0.11.2 in Windows (fresh install)

@2441630833 commented on GitHub (Aug 7, 2025):

+1 same running Ollama 0.11.3 in Linux

@LiangYang666 commented on GitHub (Aug 7, 2025):

Solution

The problem was caused by leftover files from a previous version of the ollama library. When reinstalling a newer version, the old files weren’t completely removed, leading to the conflict.

Fix

  1. Remove the old installation
    The directory can be one of the following (depending on your setup):

    • /usr/local/lib/ollama
    • /usr/lib/ollama

    Run the appropriate command, e.g.:

    sudo rm -rf /usr/local/lib/ollama
    # or
    sudo rm -rf /usr/lib/ollama
    
  2. Re‑install Ollama
    After the old files are cleared, install the new version again.

That’s it—once the old files are removed, the new installation works correctly.
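
A minimal sketch for verifying the cleanup, assuming the paths above and the official Linux install script (adjust paths if you installed elsewhere):

    # Check which (if any) old runner/library directories are still present
    ls -l /usr/local/lib/ollama /usr/lib/ollama 2>/dev/null

    # Remove the stale directory that exists on your system
    sudo rm -rf /usr/local/lib/ollama   # or: sudo rm -rf /usr/lib/ollama

    # Re-install, then check the server log for a fresh
    # "load_backend: loaded CUDA backend from ..." line
    curl -fsSL https://ollama.com/install.sh | sh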

@ZanderChang commented on GitHub (Aug 7, 2025):

+1 same running Ollama 0.11.3 in Mac

@Macl-Liu commented on GitHub (Aug 7, 2025):

Error: 500 Internal Server Error: unable to load model: D:\ollama\MODELS\blobs\sha256-db9d08d2105a0cd9a6b03556595de60656c95df47780a43d0d5e51a2d51f826c

@Macl-Liu commented on GitHub (Aug 7, 2025):

Windows 11
ollama version is 0.11.3

@Moosdijk commented on GitHub (Aug 7, 2025):

> That’s it—once the old files are removed, the new installation works correctly.

I don't use the installer because I extract the new version (from ollama-windows-amd64.zip) with every update. It still gives me issues.

@Guillaume-Pz commented on GitHub (Aug 8, 2025):

Same on Mac with the latest version of Ollama. Thanks @LiangYang666.

@hellonico commented on GitHub (Aug 8, 2025):

> Solution
>
> The problem was caused by leftover files from a previous version of the ollama library. When reinstalling a newer version, the old files weren’t completely removed, leading to the conflict.
>
> Fix
>
> 1. **Remove the old installation**
>    The directory can be one of the following (depending on your setup):
>
>    * `/usr/local/lib/ollama`
>    * `/usr/lib/ollama`
>
>    Run the appropriate command, e.g.:
>
>    sudo rm -rf /usr/local/lib/ollama
>    # or
>    sudo rm -rf /usr/lib/ollama
>
> 2. **Re‑install Ollama**
>    After the old files are cleared, install the new version again.
>
> That’s it—once the old files are removed, the new installation works correctly.

I removed the old files, but still some previously working models won't load:

cutebox@cute-2:~$ ollama run llama3.1
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)
llama_model_load_from_file_impl: failed to load model
cutebox@cute-2:~$ ollama run gemma3:12b
>>> Send a message (/? for help)

@prein commented on GitHub (Aug 8, 2025):

If the old installation files are the problem, perhaps this could evolve into a request for a feature similar to Docker's "prune" command.

@aghac19 commented on GitHub (Aug 12, 2025):

Same for me; the fix doesn't work.

It's infuriating when people close issues that clearly are not resolved.

Reopen it and fix it.

@wgottwalt commented on GitHub (Aug 13, 2025):

Yeah, this is definitely not fixed, at least for the ROCm backend. It is an issue somewhere down in llama.cpp or the backend code that calculates the required buffer sizes and then allocates the buffers. Heck, it even manages to hit the direct rendering manager hard (which basically always means an over-allocation of video memory).

Aug 13 16:58:51 monster kernel: amdgpu 0000:43:00.0: amdgpu: 0000000050bbcadb pin failed
Aug 13 16:58:51 monster kernel: [drm:amdgpu_dm_plane_helper_prepare_fb [amdgpu]] *ERROR* Failed to pin framebuffer with error -12
Aug 13 16:58:56 monster ollama[3010374]: ROCm error: out of memory
Aug 13 16:58:56 monster ollama[3010374]:   current device: 0, in function alloc build/arch/ollama-rocm-git.git/src/ollama/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:420
Aug 13 16:58:56 monster ollama[3010374]:   ggml_cuda_device_malloc(&ptr, look_ahead_size, device)

The calculated sizes:

Aug 13 16:54:35 monster ollama[3010374]: time=2025-08-13T16:54:35.163+02:00 level=INFO source=ggml.go:382 msg="offloading 24 repeating layers to GPU"
Aug 13 16:54:35 monster ollama[3010374]: time=2025-08-13T16:54:35.163+02:00 level=INFO source=ggml.go:388 msg="offloading output layer to GPU"
Aug 13 16:54:35 monster ollama[3010374]: time=2025-08-13T16:54:35.163+02:00 level=INFO source=ggml.go:393 msg="offloaded 25/25 layers to GPU"
Aug 13 16:54:35 monster ollama[3010374]: time=2025-08-13T16:54:35.163+02:00 level=INFO source=ggml.go:396 msg="model weights" buffer=CPU size="1.1 GiB"
Aug 13 16:54:35 monster ollama[3010374]: time=2025-08-13T16:54:35.163+02:00 level=INFO source=ggml.go:396 msg="model weights" buffer=ROCm0 size="11.7 GiB"
Aug 13 16:54:35 monster ollama[3010374]: time=2025-08-13T16:54:35.229+02:00 level=INFO source=ggml.go:699 msg="compute graph" backend=ROCm0 buffer_type=ROCm0 size="32.3 GiB"
Aug 13 16:54:35 monster ollama[3010374]: time=2025-08-13T16:54:35.229+02:00 level=INFO source=ggml.go:699 msg="compute graph" backend=CPU buffer_type=CPU size="5.6 MiB"

Let's say about ~45.1 GiB of video memory is required, which is fine in my case, because:

Aug 13 16:54:17 monster ollama[3010374]: time=2025-08-13T16:54:17.718+02:00 level=INFO source=routes.go:1358 msg="Listening on 127.0.0.1:11434 (version 0.11.4.r11.ga343ae53a4fa)"
Aug 13 16:54:17 monster ollama[3010374]: time=2025-08-13T16:54:17.758+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-c058fd8959f5f59b library=rocm variant="" compute=gfx1100 driver=0.0 name=1002:7448 total="48.0 GiB" available="47.5 GiB"
Aug 13 16:54:31 monster ollama[3010374]: time=2025-08-13T16:54:31.912+02:00 level=INFO source=server.go:135 msg="system memory" total="251.5 GiB" free="237.5 GiB" free_swap="0 B"

Yeah I have 48 GiB video memory and 47.5 GiB are free for use.

But then I see this:

Aug 13 16:54:31 monster ollama[3010374]: time=2025-08-13T16:54:31.913+02:00 level=INFO source=server.go:175 msg=offload library=rocm layers.requested=100 layers.model=25 layers.offload=24 layers.split="" memory.available="[47.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.7 GiB" memory.required.partial="46.7 GiB" memory.required.kv="3.1 GiB" memory.required.allocations="[46.7 GiB]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="32.0 GiB" memory.graph.partial="32.0 GiB"

The full required memory size there is actually 47.7 GiB. Like I said, there is something wrong with the calculation of the required memory, or the wrong number is used for the final check. The check should have caught this and reduced the offloaded layers from 24 to 23.
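
Until the estimate is fixed, one possible workaround is to cap the offloaded layers manually. A minimal sketch using the standard num_gpu Modelfile parameter; the base model name below is only an example, substitute whichever model fails for you:

    # Create a variant that offloads one layer fewer than the estimator chose
    printf 'FROM gpt-oss:20b\nPARAMETER num_gpu 23\n' > Modelfile
    ollama create gpt-oss-23gpu -f Modelfile
    ollama run gpt-oss-23gpu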

@jbotte commented on GitHub (Aug 15, 2025):

Having this issue as well. I removed all libs, reinstalled, and re-downloaded all models; same issue with gpt-oss.

This is on WSL2.

@simonboydfoley commented on GitHub (Mar 26, 2026):

I had a similar issue on a Strix Halo machine ... but I was admittedly running the noble release on Ubuntu 25.10.
To fix my issue I did not try to reinstall Ollama.

I reinstalled ROCm and the AMD GPU drivers from the quick start guide
https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html

and then added my user to the render and video groups.

It may well be that a kernel update broke ROCm, and recompiling the amdgpu module from scratch fixed the issue.
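
A minimal sketch of the group change mentioned above (assuming a typical Linux setup; log out and back in, or reboot, so the new group membership takes effect):

    # Add the current user to the GPU device groups used by ROCm
    sudo usermod -aG render,video "$USER"
    # Verify the membership
    groups "$USER"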
