[GH-ISSUE #939] Low memory systems with a lot of VRAM hit a memory issue #46972

Closed
opened 2026-04-28 02:19:26 -05:00 by GiteaMirror · 2 comments

Originally created by @jmorganca on GitHub (Oct 27, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/939

When creating a small instance with <4GB of RAM, `ollama` hits an error when loading the model into VRAM.

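For context on why low host RAM can break a load even when plenty of VRAM is free, here is a minimal, hypothetical Go sketch (not ollama's actual code): it assumes the load path stages model data through system RAM before the weights reach VRAM, and reads `MemAvailable` from `/proc/meminfo` to decide whether a load is likely to succeed. The 4 GiB threshold mirrors the report above and is illustrative only.

```go
// Hypothetical pre-load check, Linux-only. Not ollama's implementation;
// the 4 GiB staging requirement is an assumption for illustration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// availableRAM returns MemAvailable from /proc/meminfo, in bytes.
func availableRAM() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		// A matching line looks like: "MemAvailable:    3871234 kB"
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return kb * 1024, nil
		}
	}
	return 0, fmt.Errorf("MemAvailable not found in /proc/meminfo")
}

func main() {
	const requiredBytes = 4 << 30 // hypothetical 4 GiB staging requirement
	avail, err := availableRAM()
	if err != nil {
		fmt.Fprintln(os.Stderr, "meminfo:", err)
		os.Exit(1)
	}
	if avail < requiredBytes {
		fmt.Printf("only %d MiB of system RAM available; load may fail\n", avail>>20)
		os.Exit(1)
	}
	fmt.Println("enough system RAM to stage the model")
}
```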
GiteaMirror added the bug label 2026-04-28 02:19:26 -05:00

@morandalex commented on GitHub (Jan 9, 2024):

I had a similar issue as in https://github.com/jmorganca/ollama/issues/1853


@jmorganca commented on GitHub (Jan 10, 2024):

This should be fixed as of https://github.com/jmorganca/ollama/releases/tag/v0.1.19, with further improvements to CUDA reliability and memory allocation coming. Will close for now, but can re-open if it persists.

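For readers still on an affected version, one possible workaround is to tune how many layers are offloaded to the GPU via the `num_gpu` option that ollama's generate API accepts. Below is a minimal Go sketch, assuming a default local server at `localhost:11434` and the stock `llama2` model; the value 20 is arbitrary, not a recommendation.

```go
// Hedged workaround sketch: cap GPU layer offload via the num_gpu option
// on /api/generate. The endpoint and fields exist in ollama's API; the
// specific num_gpu value is illustrative only.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2",
		"prompt": "Why is the sky blue?",
		"stream": false,
		// num_gpu sets how many layers are offloaded to VRAM; adjusting
		// it shifts the memory split between GPU and host.
		"options": map[string]any{"num_gpu": 20},
	})

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(out.Response)
}
```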

Reference: github-starred/ollama#46972