[GH-ISSUE #10920] Error: model requires more system memory (164.8 GiB) than is available (13.4 GiB) #7184

Closed
opened 2026-04-12 19:11:00 -05:00 by GiteaMirror · 7 comments

Originally created by @Owl3 on GitHub (May 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10920

What is the issue?

Hello, I love ollama, by the way :)

This is happening on Ollama v0.9.0 on arm64 (a Radxa Orion O6 system with 16 GB of RAM) when trying to run:

ollama run deepseek-v2.5

Ollama spends nearly 4 hours downloading the deepseek-v2.5 model, only to report, once the download finishes, that it can't run it.

Error: model requires more system memory (164.8 GiB) than is available (13.4 GiB)

To save your bandwidth and mine, maybe it would be better in the future to do the memory check before attempting the download?
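
As a sketch of what such a pre-pull check could look like from the client side: the snippet below assumes the manifest endpoint on registry.ollama.ai (undocumented, so possibly unstable) plus curl and jq, and note that the on-disk blob size is only a lower bound on the real requirement, as the 164.8 GiB needed for this 132 GB model shows.

```shell
# Hypothetical pre-pull check (not part of the ollama CLI). Sums the layer
# sizes from the registry manifest and compares them against MemAvailable.
MODEL=deepseek-v2.5
TAG=latest

# Total blob size in bytes, per the registry manifest (undocumented endpoint).
size=$(curl -s "https://registry.ollama.ai/v2/library/${MODEL}/manifests/${TAG}" \
  | jq '[.layers[].size] | add')

# Available RAM in bytes, read from /proc/meminfo (Linux only).
avail=$(( $(awk '/MemAvailable/ {print $2}' /proc/meminfo) * 1024 ))

echo "model blobs: $((size / 2**30)) GiB, available RAM: $((avail / 2**30)) GiB"
if [ "$size" -gt "$avail" ]; then
    echo "warning: model will not fit in memory; skipping pull" >&2
fi
```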

Relevant log output

root@orion-o6:/home/radxa# ollama run deepseek-v2.5
pulling manifest 
pulling 799587243b19: 100% ▕██████████████████▏ 132 GB                         
pulling 8aa4c0321ccd: 100% ▕██████████████████▏  493 B                         
pulling ccfee4895df0: 100% ▕██████████████████▏  13 KB                         
pulling 059ecca256c0: 100% ▕██████████████████▏  241 B                         
pulling f50c0c6cdd1e: 100% ▕██████████████████▏  495 B                         
verifying sha256 digest 
writing manifest 
success 
Error: model requires more system memory (164.8 GiB) than is available (13.4 GiB)
root@orion-o6:/home/radxa# ollama list
NAME                        ID              SIZE      MODIFIED    
deepseek-v2.5:latest        409b2dd8a3c4    132 GB    3 hours ago    
root@orion-o6:/home/radxa# ollama --version
ollama version is 0.9.0

root@orion-o6:/home/radxa# uname -a               
Linux orion-o6 6.1.44-cix-build-generic #2 SMP PREEMPT Tue Apr  8 18:29:20 CST 2025 aarch64 GNU/Linux
root@orion-o6:/home/radxa# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@orion-o6:/home/radxa# cat /proc/cpuinfo | head -n 10
processor       : 0
model name      : CIX P1 CD8180 
BogoMIPS        : 2000.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti ecv afp wfxt
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd81
CPU revision    : 1

OS

Linux

GPU

Other

CPU

Other

Ollama version

0.9.0

GiteaMirror added the bug label 2026-04-12 19:11:00 -05:00

@rick-github commented on GitHub (May 30, 2025):

The size of the model is shown on the model page.


@Owl3 commented on GitHub (May 31, 2025):

Yes it does, but as someone who is new to this, I didn't know the model had to fit entirely into RAM to run. So what I'm suggesting is: since the size check is already in ollama anyway, rather than doing the check after thousands of people have wasted hours downloading models they can't run (probably costing you and everyone money), just move the check to before the download starts...

For me, in my software, if something takes a long time, I put the checks in before that long-running operation starts rather than afterwards; it's a usability thing.
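
Applied here, that pattern would amount to a thin wrapper that runs the cheap check first and only hands off to the CLI if it passes. A hypothetical sketch (guarded_run is not a real ollama command, and the registry endpoint is the same undocumented one assumed in the earlier sketch):

```shell
# Hypothetical guard: refuse to start a multi-hour pull for a model that
# obviously cannot fit in RAM. Helper name and endpoint are assumptions.
guarded_run() {
    local model="$1" size avail
    size=$(curl -s "https://registry.ollama.ai/v2/library/${model}/manifests/latest" \
        | jq '[.layers[].size] | add')
    avail=$(( $(awk '/MemAvailable/ {print $2}' /proc/meminfo) * 1024 ))
    if [ "$size" -gt "$avail" ]; then
        echo "refusing: ${model} is ~$((size / 2**30)) GiB, RAM is $((avail / 2**30)) GiB" >&2
        return 1
    fi
    ollama run "$model"    # check passed; proceed with the normal pull + run
}

guarded_run deepseek-v2.5
```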


@rick-github commented on GitHub (May 31, 2025):

ollama doesn't know what is going to be done with the model. It might be downloaded in order to be moved to an air-gapped high-performance server. Unfortunately, some knowledge on the part of the user is expected, so that informed decisions can be made.

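For reference, the workflow described above — pull on a connected machine, then move the model to an air-gapped one — can be done by copying the model store. A sketch, assuming the default per-user path (~/.ollama/models; the Linux systemd service keeps its store at /usr/share/ollama/.ollama/models instead):

```shell
# On the connected machine: pull, then archive the whole model store.
# (This copies every local model; cherry-picking the individual manifest
# and blob files is possible but fiddly.)
ollama pull deepseek-v2.5
tar -C ~/.ollama -cf models.tar models

# ...transfer models.tar by removable media...

# On the air-gapped server: unpack and verify.
tar -C ~/.ollama -xf models.tar
ollama list
```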

@Owl3 commented on GitHub (May 31, 2025):

That would make sense if I were doing an ollama pull or download or something. But I'm not; I'm doing an ollama "run". The implication is that I'm downloading it specifically with the intention of running it locally on the machine I'm downloading it on.


@Owl3 commented on GitHub (May 31, 2025):

Anyway, sorry to come across as a "Karen"; I appreciate the software and the great work you guys have done on it :)


@rick-github commented on GitHub (May 31, 2025):

All good - at some point everybody has no idea how something works. The bulk of the effort in ollama goes into the server; the ollama client is deliberately low-level, so the possibility of non-optimal interaction is non-negligible. [Other clients](https://github.com/ollama/ollama?tab=readme-ov-file#community-integrations) might offer more safeguards.


@virtualoranges commented on GitHub (Apr 5, 2026):

So, how do you fix that?

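For later readers hitting the same error: the model simply does not fit in 16 GB of RAM, so the practical fix is to remove it and run something smaller. The replacement model below is only an example of one that fits, not a recommendation from the thread:

```shell
ollama rm deepseek-v2.5    # reclaim the 132 GB download
ollama run llama3.2:3b     # a ~2 GB model; fits easily in 16 GB of RAM
```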

Reference: github-starred/ollama#7184