[GH-ISSUE #6956] Why doesn't the model know which model it is? #66450

Closed
opened 2026-05-04 05:28:09 -05:00 by GiteaMirror · 3 comments

Originally created by @robotom on GitHub (Sep 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6956

What is the issue?

If I load Llama 3.1 8B and ask it which model it is, it does not know what Llama 3.1 is at all. Sometimes it thinks it's Llama 3, or a 7B-parameter model. Is there a reason for this? How can I be sure what I'm running, other than whatever `ollama ps` reports?

(running on a 4070 with 8 GB of VRAM and an i7-13700HX)
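
The reliable check is the metadata Ollama stores alongside the weights, not the model's self-report. A minimal sketch, assuming the tag is `llama3.1:8b` (substitute whatever `ollama list` reports):

```sh
# Ask Ollama itself, not the model, what is loaded and what it is.
ollama ps                  # currently loaded model(s), size, and CPU/GPU split
ollama show llama3.1:8b    # stored details: architecture, parameter count, quantization
```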

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

ollama 3.1:8B

GiteaMirror added the question label 2026-05-04 05:28:09 -05:00

@rick-github commented on GitHub (Sep 25, 2024):

Models don't "know" anything; they're fancy autocomplete over a large corpus of random information. You can give a model guidance via context or system prompts and it will generate seemingly authoritative statements, but if you want actual facts, you can't rely on a model.
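
As an illustration of that kind of guidance, the identity can be pinned in a system prompt via a Modelfile. A minimal sketch; the derived model name and the prompt wording are made up for the example:

```sh
cat > Modelfile <<'EOF'
FROM llama3.1:8b
SYSTEM """You are Llama 3.1 8B running under Ollama. When asked which model you are, say so."""
EOF

ollama create llama3.1-labeled -f Modelfile   # hypothetical name for the labeled variant
ollama run llama3.1-labeled "Which model are you?"
```

The model will now repeat the label back, but it is only echoing the prompt; it still has no way to verify the claim.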


@robotom commented on GitHub (Sep 25, 2024):

> Models don't "know" anything; they're fancy autocomplete over a large corpus of random information. You can give a model guidance via context or system prompts and it will generate seemingly authoritative statements, but if you want actual facts, you can't rely on a model.

Valid, though it seems entirely unaware that Llama 3.1 exists at all. It knows about couscous recipes and recent news up to its cutoff. Shouldn't it at least have that in its training data? Seems odd.


@dhiltgen commented on GitHub (Sep 26, 2024):

How a model responds to these sorts of questions depends on how it was trained (or fine-tuned), and you'll see varying results across models. Try different models to find the one that best suits your needs.
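
That squares with the timeline, too: a model whose training data was collected before its own release can't have seen coverage of itself, so the self-report is unreliable by construction. For a programmatic check of what is actually installed, the REST API exposes the stored metadata; a minimal sketch, assuming the default server address and the `llama3.1:8b` tag:

```sh
# POST /api/show returns the metadata recorded with the model
# (family, parameter_size, quantization_level, and so on),
# independent of anything the model generates.
curl -s http://localhost:11434/api/show -d '{"model": "llama3.1:8b"}'
```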
