[GH-ISSUE #745] why different answers from same model? #26113

Closed
opened 2026-04-22 02:07:56 -05:00 by GiteaMirror · 1 comment

Originally created by @Enhitech on GitHub (Oct 10, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/745

Hi, guys,

I run a llama2 model and then access it in three ways:

1. via the REST API;
2. via the command `ollama run modelname 'prompt'`;
3. via the interactive conversational terminal.

I get different answers: methods 1 and 2 produce similar results, but method 3 is much better than either.

Why? How can I get the same answer from 1 or 2 as I get from 3?

Thanks a lot!


@jmorganca commented on GitHub (Oct 11, 2023):

Hi @Enhitech. Glad you tried Ollama! It's normal to get different responses from multiple calls: by default, LLMs sample with some randomness, so answers vary from run to run. To always get a consistent answer back, you can set `temperature: 0` and `seed: <seed value>` in the API.
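
As a concrete illustration, here is a minimal sketch of such a request against a local Ollama server using only Python's standard library; the model name, prompt, and seed value are placeholders:

```python
import json
import urllib.request

# Request a deterministic completion from a locally running Ollama server.
# temperature: 0 disables sampling randomness, and a fixed seed makes
# repeated runs reproducible. "llama2", the prompt, and seed 42 are
# placeholders -- substitute your own values.
payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "stream": False,  # return a single JSON object instead of streamed chunks
    "options": {
        "temperature": 0,
        "seed": 42,
    },
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])
```

If you want all three access methods to share these settings, the same parameters can also be baked into a Modelfile with `PARAMETER temperature 0` and `PARAMETER seed 42`. Note that even then, output can still differ across hardware or Ollama versions, so treat this as reproducible in practice rather than strictly guaranteed.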

Going to close this; however, feel free to re-open if something isn't working as expected.

