[GH-ISSUE #4281] Get Entropy #2674

Closed
opened 2026-04-12 13:00:26 -05:00 by GiteaMirror · 2 comments

Originally created by @antonbugaets on GitHub (May 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4281

Hello!

I'm using the latest version of Ollama with self-hosted models and getting responses from the API. I didn't find anything regarding entropy in the documentation. How can I get it in the model response? Is there a special parameter for this?

Thanks.

GiteaMirror added the question label 2026-04-12 13:00:26 -05:00
@pdevine commented on GitHub (May 20, 2024):

@antonbugaets you can use the `temperature` parameter for getting more "randomness" in your responses. For deterministic results you can set the temperature to `0`, and for more "creative" results you can set the value higher.

In the REPL you can do this with `/set parameter temperature N`, with N being the value you want to set it to.

LMK if that answers your question. I wasn't 100% sure if this was what you were asking, or if you meant something more like [this](https://medium.com/@priyankads/perplexity-of-language-models-41160427ed72).
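
For reference, a minimal sketch of setting `temperature` over Ollama's HTTP API instead of the REPL. It assumes a local Ollama server on the default port and that a model named `llama3` has already been pulled; substitute your own model name.

```python
# Minimal sketch: pass the temperature option to Ollama's /api/generate.
# Assumes a local Ollama server on the default port (11434) and that a
# model named "llama3" has been pulled; substitute your own model.
import json
import urllib.request

payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,
    "options": {"temperature": 0},  # 0 = deterministic, higher = more "creative"
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```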

@antonbugaets commented on GitHub (May 22, 2024):

@pdevine Hello. Yes, I'm aware of the temperature and top-k/top-p parameters, which I can configure when using Ollama as the serving layer for model inference.

But what I originally meant was: how can I tell that the model is not sure about a particular answer to my prompts while it is running inference with Ollama?

I need this in order to post-process 'low quality' answers.

Is there any way to determine this? Maybe via some optional parameter, or by getting the probabilities of tokens and calculating entropy?

Thanks!
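
A self-contained sketch of the post-processing idea described above: computing Shannon entropy over per-token probability distributions and flagging high-entropy answers. The distributions below are made up, Ollama's API did not return token probabilities at the time of this issue (so they would have to come from another inference backend), and the threshold is an arbitrary placeholder to tune on your own data.

```python
# Sketch of the "flag low-confidence answers" idea: compute Shannon entropy
# over per-token probability distributions. The distributions below are
# hypothetical; Ollama's API did not expose token probabilities at the time
# of this issue, so they would have to come from another inference backend.
import math

def token_entropy(probs: list[float]) -> float:
    """Shannon entropy H = -sum(p * log2(p)) of one token's distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mean_entropy(distributions: list[list[float]]) -> float:
    """Average entropy (in bits) across all generated tokens."""
    return sum(token_entropy(d) for d in distributions) / len(distributions)

# Hypothetical distributions over a 4-token vocabulary for a 3-token answer.
dists = [
    [0.97, 0.01, 0.01, 0.01],  # model is confident here
    [0.40, 0.30, 0.20, 0.10],  # model is uncertain here
    [0.85, 0.05, 0.05, 0.05],  # fairly confident
]

THRESHOLD = 1.0  # bits; arbitrary placeholder, tune on your own data
if mean_entropy(dists) > THRESHOLD:
    print("low-confidence answer: send to post-processing")
else:
    print("answer looks confident")
```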
