[GH-ISSUE #1033] Are these system specs good enough for any models? #62540

Closed
opened 2026-05-03 09:30:00 -05:00 by GiteaMirror · 6 comments

Originally created by @simoovara on GitHub (Nov 7, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1033

Just a question: I have an old laptop that I turned into a server with Ubuntu LTS. It has an AMD E1-6015 APU and 8 GB of RAM. I would like to know if that's enough to run any of these models, thank you!


@SebastianMpl commented on GitHub (Nov 7, 2023):

Not for all of them. Try models with <= 7B parameters at q4 quantization or lower (but lower quantization also lowers quality). I have 6 GB of RAM and get about 3 tokens per second with a 7B q4_K_M model. The speed is like an answer from a real person, not those flashing words...
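As a rough sanity check on those numbers, here is a back-of-the-envelope RAM estimate (a sketch with assumed figures, not Ollama's actual memory accounting): the weights take roughly params × bits-per-weight / 8 bytes, plus some overhead for the KV cache and runtime buffers.

```shell
# Sketch: approximate RAM needed for a 7B model at ~4.5 bits/weight
# (roughly Q4_K_M), with an assumed ~1 GB of runtime overhead.
awk 'BEGIN { params = 7e9; bits = 4.5; overhead_gb = 1.0
             gb = params * bits / 8 / 1e9 + overhead_gb
             printf "approx RAM needed: %.1f GB\n", gb }'
# prints: approx RAM needed: 4.9 GB
```

That lands under 8 GB with room to spare, but is tight on a 6 GB machine, which matches the "q4 or lower" advice above.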


@BruceMacD commented on GitHub (Nov 7, 2023):

I don't know much about that AMD processor series so I can't really say for sure, but if you'd like to try the most lightweight model to see what your system is capable of, I'd suggest giving orca-mini a try to see if it works: `ollama run orca-mini`


@easp commented on GitHub (Nov 7, 2023):

That's a laptop chip from 2015. It may not be able to run any models at all if it lacks the needed vector instructions. If it can run models on the CPU, it'll be slow, but enough to get a taste.
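The vector-instruction question can be checked directly. A minimal sketch (not an official Ollama check; Linux only, using the standard kernel `flags` field of `/proc/cpuinfo`):

```shell
# List which SIMD extensions the CPU reports. llama.cpp, which Ollama
# builds on, runs far faster with AVX/AVX2 and some builds assume them.
# The E1-6015 (Puma cores) reportedly has AVX but not AVX2.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null)
for ext in sse4_2 avx avx2 f16c fma; do
  case " $flags " in
    *" $ext "*) echo "$ext: present" ;;
    *)          echo "$ext: absent"  ;;
  esac
done
```

If `avx` shows as absent, CPU inference will be very slow or may not work at all, depending on the build.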


@simoovara commented on GitHub (Nov 7, 2023):

Alright, really helpful. I'll try the lightest model, orca-mini, and comment here to let you know how it goes.


@simoovara commented on GitHub (Nov 7, 2023):

Yeah, it's really really slow. It loads for about a minute and then types it out really slowly, approximately 1 word every second.
Worth a try, though.


@simoovara commented on GitHub (Nov 7, 2023):

Thanks everyone for the help!


Reference: github-starred/ollama#62540