[GH-ISSUE #11750] Ollama returns 500 Internal Server Error: llama runner process has terminated in macos with gpt-oss:20b #33546

Closed
opened 2026-04-22 16:23:20 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @imumesh18 on GitHub (Aug 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11750

What is the issue?

When I run gpt-oss:20b on my M1 Pro MacBook, it works with both a 32k and a 128k context window (using the GPU and CPU, respectively), but when I set the context window to 64k it throws the error below. Ideally it should load on either the GPU or the CPU, as it does for 32k and 128k, rather than return an error.

Relevant log output

500 Internal Server Error: llama runner process has terminated: error:failed to allocate buffer, size = 16571.89 MiB ggml_gallocr_reserve_n: failed to allocate Metal buffer of size 17376872352
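For reference, the failing 64k case can be requested explicitly through Ollama's HTTP API via the `num_ctx` option (a minimal sketch assuming a locally running Ollama server on the default port; with an affected build this reproduces the allocation failure above):

```shell
# Ask for a 64k context window explicitly (64 * 1024 = 65536 tokens)
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:20b",
  "prompt": "hello",
  "stream": false,
  "options": { "num_ctx": 65536 }
}'
```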

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.11.3

GiteaMirror added the bug label 2026-04-22 16:23:20 -05:00
Author
Owner

@habibbhutto commented on GitHub (Aug 6, 2025):

Linking comments here
https://github.com/ollama/ollama/issues/11673#issuecomment-3156583693
https://github.com/ollama/ollama/issues/11673#issuecomment-3161284197

Setup:
Macbook Pro M4, 48GB RAM
Ollama v0.11.3

❯ ollama run gpt-oss:20b
Error: 500 Internal Server Error: llama runner process has terminated: error:failed to allocate buffer, size = 33074.89 MiB
ggml_gallocr_reserve_n: failed to allocate Metal buffer of size 34681522080

@banianzr commented on GitHub (Aug 7, 2025):

I also encountered the same problem. I ran gpt-oss:120b on an AMD MI210 (64GB) ×4 machine via ollama/ollama:0.11.3-rocm and got the same 500 response code ("llama runner process has terminated"), while running gpt-oss:120b on an AMD MI100 (32GB) ×8 machine via ollama/ollama:0.11.3-rocm returned correct results.
I also found another issue: the run on the AMD MI100 (32GB) ×8 machine takes 75 GB of GPU memory, while the run on the AMD MI100 (32GB) ×8 machine takes 83 GB of GPU memory.


@aabbccgg commented on GitHub (Aug 8, 2025):

Getting the same error on macOS (M4 Pro, 48 GB) in v0.11.4.


@piethonic commented on GitHub (Aug 8, 2025):

I was getting the following error

> ollama --version
ollama version is 0.11.2

> ollama run gpt-oss:20b
Error: 500 Internal Server Error: llama runner process has terminated: error:failed to allocate buffer, size = 33074.89 MiB
ggml_gallocr_reserve_n: failed to allocate Metal buffer of size 34681522080

For me, on a MacBook M4 Pro with 48 GB RAM, I had to reduce the context length in Ollama Settings from 128k to 64k, and then it worked.

[Screenshot: Ollama Settings context length]
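The same workaround can also be applied from the CLI rather than the Settings UI (a sketch assuming a locally installed Ollama; `/set parameter num_ctx` in the interactive REPL and the `OLLAMA_CONTEXT_LENGTH` server variable are the usual knobs for context length):

```shell
# Option 1: cap the context for the current interactive session
ollama run gpt-oss:20b
>>> /set parameter num_ctx 65536

# Option 2: set a server-wide default context length before starting
OLLAMA_CONTEXT_LENGTH=65536 ollama serve
```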

@aabbccgg commented on GitHub (Aug 8, 2025):

I was getting the following error

> ollama --version
ollama version is 0.11.2

> ollama run gpt-oss:20b
Error: 500 Internal Server Error: llama runner process has terminated: error:failed to allocate buffer, size = 33074.89 MiB
ggml_gallocr_reserve_n: failed to allocate Metal buffer of size 34681522080

For me on Macbook M4 Pro 48 GB RAM, I had to reduce the Context length from Ollama Settings from 128k to 64k, and then it worked.

[Screenshot: Ollama Settings context length]

Thank you for your reply, it works! But I noticed it was running on my CPU with an output speed of about 10 tokens/s, far slower than running it in LM Studio on the GPU.


@habibbhutto commented on GitHub (Aug 8, 2025):

LM Studio
Context window: 128K
Memory consumption: 17 GB to 20 GB
Tokens/s: 59 when the context is less complex

Ollama seems to be consuming far more RAM than it should, and is slow despite running with a 64k context.


@jessegross commented on GitHub (Aug 18, 2025):

There is an improved memory management system in 0.11.5-rc2 that should fix this issue. It is still opt-in but if you download the new version and set OLLAMA_NEW_ESTIMATES=1, you can give it a try.
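Based on the comment above, the opt-in looks like the following (a sketch assuming the server is started manually; on macOS the menu-bar app's background server would need to be quit first so the environment variable takes effect):

```shell
# Opt in to the new memory estimates on 0.11.5-rc2 or later
OLLAMA_NEW_ESTIMATES=1 ollama serve
```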


@rick-github commented on GitHub (Sep 23, 2025):

Has upgrading resolved this issue?


@imumesh18 commented on GitHub (Sep 24, 2025):

Hey, this has now been resolved; I no longer see this issue.


Reference: github-starred/ollama#33546