[GH-ISSUE #4017] Hardware configuration: 64-core CPU, 1 TB memory; llama3:8b is slow. Why? #2492

Closed
opened 2026-04-12 12:49:16 -05:00 by GiteaMirror · 2 comments

Originally created by @zhaohuaxi-Shi on GitHub (Apr 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4017

I used api/generate to pre-load the model, but it made no difference.


@igorschlum commented on GitHub (Apr 29, 2024):

Hello @zhaohuaxi-Shi, can you provide a prompt and logs showing what is slow?


@zhaohuaxi-Shi commented on GitHub (Apr 29, 2024):

Hi @igorschlum, thank you very much for your attention to my issue. The issue has been resolved: the GPU had not been enabled, so inference was running on the CPU. Thanks!
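For readers hitting the same symptom, the two steps discussed in this thread (pre-loading via api/generate and checking whether the model actually landed on the GPU) can be sketched roughly as follows. This assumes a local Ollama server on its default port 11434; the endpoint, the empty-prompt preload, and `ollama ps` follow Ollama's documented behavior, but this is a sketch, not a verified reproduction of the reporter's setup:

```shell
# Pre-load llama3:8b by posting a request with no prompt; the server loads
# the model into memory and keeps it resident for the keep_alive duration.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3:8b", "keep_alive": "5m"}'

# Inspect loaded models. The PROCESSOR column reports where the weights
# live, e.g. "100% GPU" or "100% CPU"; an all-CPU placement would explain
# slow generation even on a 64-core machine.
ollama ps
```

If `ollama ps` reports CPU-only placement, the fix is on the driver/runtime side (as in this issue, where the GPU had not been enabled), not in how the model is pre-loaded.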


Reference: github-starred/ollama#2492