[GH-ISSUE #2041] Show or check a model's minimum equipment requirements #1178

Open
opened 2026-04-12 10:57:55 -05:00 by GiteaMirror · 0 comments

Originally created by @ChingWeiChan on GitHub (Jan 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2041

When I want to try a model, I need to check its minimum GPU VRAM.

  • There are many parameter sizes, like 8b, 13b, and 70b.
  • If it's in GGUF format, there are also many quantization variants, like q3_K_S and q4_K_L.
  • For example, I have a 4070 Ti in my computer; running the llama2 70b model gives poor performance because there isn't enough VRAM.

Is it possible to check which models can run acceptably on a user's hardware when ollama downloads a model? We could read the device info and show the recommended model tags in the list.
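
As a minimal sketch of the idea (not ollama's actual code; the tag names, parameter counts, and bits-per-weight figures below are illustrative assumptions), the client could estimate weight memory as parameter count × bits per weight ÷ 8, add some overhead for the KV cache and buffers, and compare the result against the detected GPU VRAM. By that rule of thumb, llama2 70b at ~4.5 bits/weight needs over 40 GiB, far beyond a 4070 Ti's 12 GiB:

```go
// Hypothetical sketch, not ollama's implementation: decide whether a
// model tag is likely to fit in GPU VRAM using the common rule of
// thumb "params × bits/8, plus overhead", which gives an estimate,
// not an exact requirement.
package main

import "fmt"

// estimateVRAMBytes approximates the memory a model needs: parameter
// count times bits per weight, divided by 8 bits per byte, plus a
// flat 20% overhead for the KV cache, activations, and buffers.
func estimateVRAMBytes(params uint64, quantBits float64) uint64 {
	weights := float64(params) * quantBits / 8
	return uint64(weights * 1.2)
}

// fits reports whether the estimated requirement is within the
// available VRAM.
func fits(need, have uint64) bool { return need <= have }

func main() {
	const gib = 1024 * 1024 * 1024
	gpuVRAM := uint64(12) * gib // assumed: an RTX 4070 Ti with 12 GiB

	// Assumed parameter counts and effective bits per weight for a
	// few GGUF quantization tags.
	tags := []struct {
		name   string
		params uint64
		bits   float64
	}{
		{"llama2:7b-q4_K_S", 7e9, 4.5},
		{"llama2:13b-q4_K_S", 13e9, 4.5},
		{"llama2:70b-q4_K_S", 70e9, 4.5},
	}

	for _, t := range tags {
		need := estimateVRAMBytes(t.params, t.bits)
		fmt.Printf("%-20s needs ~%4.1f GiB, fits in %d GiB: %v\n",
			t.name, float64(need)/gib, gpuVRAM/gib, fits(need, gpuVRAM))
	}
}
```

With these assumed numbers, the 7b and 13b tags would be flagged as runnable on a 12 GiB card while the 70b tag would not, which matches the experience described above.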

GiteaMirror added the feature request label 2026-04-12 10:57:55 -05:00

Reference: github-starred/ollama#1178