[GH-ISSUE #1516] Better reports "Out of memory" #47337

Closed
opened 2026-04-28 03:36:08 -05:00 by GiteaMirror · 4 comments

Originally created by @igorschlum on GitHub (Dec 14, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1516

A lot of users don't understand that they are facing a memory error.
It would be nice for the error message to explain that it is a memory error.

Error: llama runner process has terminated

Could be replaced by:

Error: Llama process ran out of memory.

Or

Error: Ollama could not run the model because it ran out of memory.
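
For illustration, here is a minimal sketch in Go (Ollama's implementation language) of how the runner's exit could be classified before surfacing an error. The function name, the marker strings, and the sample log line are hypothetical, not the actual ollama source:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// classifyRunnerExit is a hypothetical helper: it inspects the runner's
// captured stderr and, when an allocation failure is recognizable, wraps
// the generic exit error with an explicit out-of-memory message.
func classifyRunnerExit(exitErr error, stderr string) error {
	// Hypothetical markers; llama.cpp reports allocation failures
	// in several different formats, so this list is illustrative.
	oomMarkers := []string{
		"out of memory",
		"failed to allocate",
		"cuda error: out of memory",
	}
	lower := strings.ToLower(stderr)
	for _, marker := range oomMarkers {
		if strings.Contains(lower, marker) {
			return fmt.Errorf("Ollama could not run the model because it ran out of memory: %w", exitErr)
		}
	}
	return fmt.Errorf("llama runner process has terminated: %w", exitErr)
}

func main() {
	err := classifyRunnerExit(errors.New("exit status 1"),
		"ggml_new_tensor: failed to allocate 4096 MB of memory")
	fmt.Println(err)
}
```

Matching on stderr is only a heuristic, but even a heuristic hint would tell users they hit a memory limit rather than an opaque crash.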


@phalexo commented on GitHub (Dec 14, 2023):

> A lot of users don't understand that they are facing a memory error. It would be nice for the error message to explain that it is a memory error.
>
> Error: llama runner process has terminated
>
> Could be replaced by:
>
> Error: Llama process ran out of memory.
>
> Or
>
> Error: Ollama could not run the model because it ran out of memory.

A lot of these "out of memory" errors seem to stem from a bug introduced after version 0.1.11.

I have now seen reports from 3 people (including myself) who experienced an OOM, either after upgrading from an older version of ollama or after installing versions 0.1.{12,13,14,15}, and then had the error DISAPPEAR after downgrading ollama to 0.1.11.

So, it looks like a bug was introduced in ollama after 0.1.11.

I have been looking at the code, trying to figure out what change caused the problem. No success so far.


@jukofyork commented on GitHub (Jan 5, 2024):

> > A lot of users don't understand that they are facing a memory error. It would be nice for the error message to explain that it is a memory error.
> >
> > Error: llama runner process has terminated
> >
> > Could be replaced by:
> >
> > Error: Llama process ran out of memory.
> >
> > Or
> >
> > Error: Ollama could not run the model because it ran out of memory.
>
> A lot of these "out of memory" errors seem to stem from a bug introduced after version 0.1.11.
>
> I have now seen reports from 3 people (including myself) who experienced an OOM, either after upgrading from an older version of ollama or after installing versions 0.1.{12,13,14,15}, and then had the error DISAPPEAR after downgrading ollama to 0.1.11.
>
> So, it looks like a bug was introduced in ollama after 0.1.11.
>
> I have been looking at the code, trying to figure out what change caused the problem. No success so far.

Can you try changing the batch size and see if that helps you too: https://github.com/jmorganca/ollama/issues/1800

It was driving me nuts too, but that finally solved it for me.
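
For anyone who wants to try that workaround, here is a hedged sketch in Go against Ollama's documented REST API. The /api/generate endpoint and the num_batch option are real; the model name, prompt, and the value 128 are just examples (num_batch typically defaults to 512):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Request a completion with a reduced batch size. Lowering num_batch
	// shrinks the scratch buffers the runner allocates, which is the
	// workaround discussed in issue #1800.
	payload, err := json.Marshal(map[string]any{
		"model":  "llama2", // example model name
		"prompt": "Why is the sky blue?",
		"stream": false,
		"options": map[string]any{
			"num_batch": 128, // try values below the default of 512
		},
	})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

The same option should also be settable persistently in a Modelfile via `PARAMETER num_batch 128`, though I have only verified the per-request form above.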


@igorschlum commented on GitHub (Jan 5, 2024):

@jukofyork I'm not experiencing the errors myself; I've seen them while reading many issues.


@phalexo commented on GitHub (Jan 5, 2024):

The most likely bug is that one of the specialized matrix/matrix multiply kernels is leaking memory.


Reference: github-starred/ollama#47337