[GH-ISSUE #3768] ~/ollama/ollama run llama3:70b Error: Post "http://127.0.0.1:11434/api/chat": EOF #2326

Closed
opened 2026-04-12 12:38:49 -05:00 by GiteaMirror · 1 comment

Originally created by @phalexo on GitHub (Apr 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3768

What is the issue?

Llama 3 70B under Ollama appears to be about 39 GB.

I have 4 GPUs with 12.2 GiB each (and I can also add 4 GiB on another GPU).

The model loads at about 10 GiB per GPU, and then I get the error above.

Is it memory related? It is not clear to me.
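Since the request fails with an EOF on `/api/chat`, the server-side runner is presumably crashing while the model loads. One way to check whether it is memory related is to watch GPU memory and the server log during the load. A rough sketch, assuming the default systemd service install on Linux and NVIDIA's `nvidia-smi`:

```sh
# Watch per-GPU memory usage while the model loads (refreshes every second)
nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv -l 1

# In another terminal, follow the Ollama server log for the crash reason
# (assumes the default systemd service install)
journalctl -u ollama -f

# Optionally restart the server with verbose logging for more detail
OLLAMA_DEBUG=1 ollama serve
```

If the runner process dies right as the GPUs fill up, the log should show an out-of-memory or allocation failure just before the client-side EOF.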

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

No response
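(The version was not provided; for anyone reproducing this, it can be read from the CLI. A minimal sketch, assuming the `ollama` binary is on PATH:)

```sh
# Print the installed Ollama version
ollama --version
```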

GiteaMirror added the bug label 2026-04-12 12:38:49 -05:00

@yc446833448 commented on GitHub (Apr 23, 2024):

How did you solve this? I'm running into the same issue.


Reference: github-starred/ollama#2326