[GH-ISSUE #11253] Failed to Run Large Models #69473

Open
opened 2026-05-04 18:13:39 -05:00 by GiteaMirror · 0 comments

Originally created by @techana on GitHub (Jul 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11253

What is the issue?

After upgrading to version 0.9.3, Ollama fails to load large models such as llama4:maverick and deepseek-r1:671b. Running a command like `ollama run llama4:maverick` causes extremely high disk I/O on my Debian server, but the model never loads into memory, even after running for over an hour. I also tested version 0.9.4 RC1 and encountered the same issue. I hope this gets fixed soon.
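
For anyone trying to reproduce or narrow this down, here is a rough sketch of how the behaviour can be observed. It assumes the standard systemd-based install on Debian and the `sysstat` package for `iostat`; the model name and prompt are just examples taken from this report:

```shell
# Watch disk throughput while the model loads (iostat comes from the sysstat package)
iostat -x 5

# In another terminal, attempt to load the model; on 0.9.3 this never completes
ollama run llama4:maverick "hello"

# Check whether the model has actually been loaded into memory
ollama ps

# Follow the server-side logs for the load attempt (standard systemd service)
journalctl -u ollama -f
```

If the problem described above is occurring, `ollama ps` should stay empty while `iostat` shows sustained heavy reads for the whole time.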

Relevant log output


OS

Proxmox, Debian Linux

GPU

NVIDIA L4

CPU

Intel Xeon CPU E5-2697A v4 @ 2.60GHz (2 Sockets)

Ollama version

0.9.3

GiteaMirror added the bug label 2026-05-04 18:13:39 -05:00

Reference: github-starred/ollama#69473