[GH-ISSUE #404] How to know which model(s) to use? #46693

Closed
opened 2026-04-27 23:32:27 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @burggraf on GitHub (Aug 24, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/404

How can I learn about what each main model does, and when it's appropriate to use each one? Also, there are many versions of each model, and how do I figure out which one to download and use?

I've looked at a lot of these models on the HuggingFace site, but there's often little or no significant background information that tells me anything useful about the model. I'm pretty lost here.

GiteaMirror added the question label 2026-04-27 23:32:27 -05:00

@mchiang0610 commented on GitHub (Aug 24, 2023):

Hi @burggraf, we're starting to add overviews to the models. It depends on your specific use case, but for most general tasks, I'd recommend Meta's Llama 2 models.

https://ollama.ai/library/llama2


@burggraf commented on GitHub (Aug 24, 2023):

Understood -- but it's certainly confusing to have so many choices when there's no way to distinguish how to choose intelligently. Plus there's even a lot of confusion over which Llama 2 model to use and for what purpose(s). I also understand this is more of a HuggingFace problem.


@mchiang0610 commented on GitHub (Aug 24, 2023):

Yes! This is why our overview has information on what to use.


@burggraf commented on GitHub (Aug 24, 2023):

That'll be awesome once there's anything written in all of the overviews :) Looking forward to that. (Sorry I mean for all the other models besides Llama 2. That's got some great info in it!)


@burggraf commented on GitHub (Aug 24, 2023):

I guess the only one so far that really answers the question "Why/when should I use THIS model?" is in the overview for MedLlama2. I'd like to see more helpful info like this:

MedLlama2 by llSourcell (Siraj Raval) is a Llama 2-based model trained with medalpaca/medical_meadow_medqa to be able to provide medical answers to questions. It is not intended to replace a medical professional, but to provide a starting point for further research.


@technovangelist commented on GitHub (Aug 25, 2023):

Many of them are general use. Medllama and WizardMath are the only ones that are tuned for a specific use. You really just have to try them to see what works in your use case.


@jkleckner commented on GitHub (Aug 26, 2023):

In the case of codellama, TheBloke has annotated the size/quality tradeoffs for the quantizations here. Note that GGUF appears to have completely deprecated GGML in llama.cpp as of August 21:

https://huggingface.co/TheBloke/CodeLlama-34B-GGUF
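The size side of that tradeoff is mostly arithmetic: on-disk size scales with parameter count times effective bits per weight. A rough sketch (the bit widths below are approximate assumptions for common llama.cpp quant types, not figures taken from TheBloke's tables):

```python
# Rough GGUF file-size estimate: size ≈ params × effective bits / 8.
# Effective bits per weight are approximations for common llama.cpp
# quant types; real files add metadata and some non-quantized tensors.
QUANT_BITS = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5}

def approx_size_gb(params_billion: float, quant: str) -> float:
    """Approximate on-disk size in GB for a given quantization."""
    bits = QUANT_BITS[quant]
    return params_billion * 1e9 * bits / 8 / 1e9

for q in QUANT_BITS:
    print(f"34B @ {q}: ~{approx_size_gb(34, q):.1f} GB")
```

Lower-bit quants shrink the download and RAM footprint but lose some quality; the model cards usually recommend a middle option like Q4_K_M as the balanced default.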


@PaulWoitaschek commented on GitHub (Aug 30, 2023):

I just tried llama2:13b-text and it's giving extremely weird responses. Is this expected? Maybe the -text models need some explanation.

Prompt:

is is bad to drink coke?

Response:

I am 15 and I love my Coca Cola. It's not good for you, but in moderation I don't think it will hurt you too bad. Plus, some people say that caffeine helps you burn fat faster, so if you are going to drink soda, Coke is the best kind to drink.


@mchiang0610 commented on GitHub (Aug 31, 2023):

@burggraf @PaulWoitaschek @jkleckner

Overviews are available now in the model library:
- Llama 2: https://ollama.ai/library/llama2
- Code Llama: https://ollama.ai/library/codellama
- MedLlama2: https://ollama.ai/library/medllama2

and more here: https://ollama.ai/library

There are still lots of tasks to do for organizing these. Please feel free to create new issues regarding the specifics! Thank you.

@PaulWoitaschek Yeah, the -text models are 'completion' models: they simply 'continue' your prompt, building stories and the like. I definitely agree we should document this better.

Chat models on the other hand will have a dialog with you.
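The distinction can be sketched in terms of prompt handling: chat variants have their input wrapped in a dialog template before generation, while base/-text variants receive the raw text and just continue it. A minimal illustration, using a simplified form of the Llama 2 `[INST]` chat template (the real template also carries system-prompt markers):

```python
def build_prompt(user_input: str, variant: str) -> str:
    """Illustrative only: how chat vs. -text (base) variants see your input."""
    if variant == "chat":
        # Chat models expect a dialog template (simplified Llama 2 form);
        # the runtime normally applies this for you.
        return f"[INST] {user_input} [/INST]"
    # Base/-text models get the raw text and continue it, which is why a
    # bare question can come back as a forum-post-style continuation.
    return user_input

print(build_prompt("is it bad to drink coke?", "chat"))
# [INST] is it bad to drink coke? [/INST]
```

That is why `llama2:13b-text` above continued the question as if it were the start of a forum thread instead of answering it.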


---

Closing this issue, but please feel free to create new ones for specific concerns, bugs, feedback, etc.

Our discord is also available here: https://discord.com/invite/ollama

Reference: github-starred/ollama#46693