[GH-ISSUE #5362] allow temperature to be set on command line (w/out using a modelfile) #49870

Open
opened 2026-04-28 13:15:26 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @pracplayopen on GitHub (Jun 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5362

would be super helpful to set temperature for models via command line, rather than having to create a separate model file for every model and temperature combination.

GiteaMirror added the feature request label 2026-04-28 13:15:26 -05:00

@rick-github commented on GitHub (Jun 28, 2024):

Trying to get a handle on the use case here. Are you looking to do something like:

```
$ ollama run --temperature 0.7 gemma2
>>>
```

instead of:

```
$ ollama run gemma2
>>> /set parameter temperature 0.7
>>>
```

@pracplayopen commented on GitHub (Jun 28, 2024):

yes exactly.

the second example is useful for one-off tests, but since `/set parameter temperature` can't be scripted, it rules out any situation where the prompt is generated outside of ollama (eg when the prompt isn't created interactively).


@rick-github commented on GitHub (Jun 28, 2024):

Understood. We currently use `expect` for this sort of scripting, and command line args would be generally useful.

```
#!/bin/bash

temperature=0
num_ctx=2048

eval set -- $(getopt --options=t:,n: --longoptions=temperature:,num_ctx: --name "$0" -- "$@")

while : ; do
  case "$1" in
    -t|--temperature)   temperature=$2
                        shift 2 ;;
    -n|--num_ctx)       num_ctx=$2
                        shift 2 ;;
    --)                 shift
                        break ;;
    *)                  exit 1 ;;
  esac
done

args=("${@:1:2}")
[ "${#@}" == 2 ] && { command=interact ; } || { command='send "'"${@:3}"'\r" ; expect ">>>" close' ; }

expect -f <(cat <<EOF
spawn ollama ${args[*]}
expect ">>>"
send "/set parameter temperature $temperature\r" ; expect ">>>"
send "/set parameter num_ctx $num_ctx\r" ; expect ">>>"
$command
EOF)
```

@pracplayopen commented on GitHub (Jun 28, 2024):

yes, [it can also be done w/curl using REST api](https://github.com/ollama/ollama/blob/main/docs/api.md#chat-request-reproducible-outputs):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "who was president of US in 2023",
  "options": {
    "temperature": 0
  }
}'
```

but since temperature and the prompt sort of "go-together" in terms of how much they can impact the response, it would be faster to be able to specify both (vs just prompt as is now possible) on the built-in command line, in one place.
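For completeness, the same request body can be built programmatically instead of via curl. A minimal Go sketch, assuming a hypothetical trimmed-down `GenerateRequest` struct (mirroring only the fields of the `/api/generate` payload used above, not the real `api.GenerateRequest` type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GenerateRequest is a hypothetical minimal mirror of the /api/generate
// payload: model, prompt, and a free-form options map.
type GenerateRequest struct {
	Model   string                 `json:"model"`
	Prompt  string                 `json:"prompt"`
	Options map[string]interface{} `json:"options,omitempty"`
}

// buildGenerateRequest marshals a request with the temperature placed in
// the options map, which is where the server-side API already expects it.
func buildGenerateRequest(model, prompt string, temperature float64) []byte {
	body, _ := json.Marshal(GenerateRequest{
		Model:  model,
		Prompt: prompt,
		Options: map[string]interface{}{
			"temperature": temperature,
		},
	})
	return body
}

func main() {
	// POSTing this body to http://localhost:11434/api/generate is
	// equivalent to the curl command above.
	fmt.Println(string(buildGenerateRequest("llama3", "who was president of US in 2023", 0)))
}
```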


@pracplayopen commented on GitHub (Jun 29, 2024):

Looking at this briefly... based on the above two workarounds, this feature conceptually seems easy to add. Perhaps people who know more about the source might comment further.

I was able to see that the ['runOptions' structure corresponds to the same options listed in 'ollama run --help' in the cli](https://github.com/ollama/ollama/blob/717f7229eb4f9220d4070aae617923950643d327/cmd/cmd.go#L838)

So ideally the goal would be to parse a new option --temperature into runOptions, eg:


```go
type runOptions struct {
	Model       string
	ParentModel string
	Prompt      string
	Messages    []api.Message
	WordWrap    bool
	Format      string
	System      string
	Template    string
	Images      []api.ImageData
	Options     map[string]interface{}
	MultiModal  bool
	KeepAlive   *api.Duration
	Temperature float64 // not sure if this is valid decl of golang field
}
```

then questions might be:

  1. where does parsing for cli-based run command options occur?
  2. what is done when '/set temperature' is called during interactive chat?
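To sketch the idea: since `runOptions` already carries an `Options` map that reaches the API request, the flag could land there directly, with no new `Temperature` field needed. A hypothetical stand-alone sketch using the stdlib `flag` package (ollama's CLI actually uses cobra, and `parseRunFlags` is an invented name, but the wiring would be analogous):

```go
package main

import (
	"flag"
	"fmt"
)

// runOptions trimmed to the fields relevant to this sketch; in ollama the
// real struct lives in cmd/cmd.go.
type runOptions struct {
	Model   string
	Options map[string]interface{}
}

// parseRunFlags parses a --temperature flag and, only when it was actually
// supplied, copies it into the Options map that the API request carries.
func parseRunFlags(args []string) (runOptions, error) {
	fs := flag.NewFlagSet("run", flag.ContinueOnError)
	temp := fs.Float64("temperature", -1, "sampling temperature (unset if < 0)")
	if err := fs.Parse(args); err != nil {
		return runOptions{}, err
	}
	opts := runOptions{Options: map[string]interface{}{}}
	if rest := fs.Args(); len(rest) > 0 {
		opts.Model = rest[0] // positional argument: the model name
	}
	if *temp >= 0 {
		opts.Options["temperature"] = *temp
	}
	return opts, nil
}

func main() {
	// Simulates: ollama run --temperature 0.7 gemma2
	opts, _ := parseRunFlags([]string{"--temperature", "0.7", "gemma2"})
	fmt.Printf("model=%s options=%v\n", opts.Model, opts.Options)
}
```

The answer to question 2 points the same way: the interactive `/set parameter` handler also just writes into this options map before the next request.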

@joliss commented on GitHub (Dec 10, 2024):

My [question about this](https://genai.stackexchange.com/q/699/3236) on StackExchange has over 10k views now, so it seems that many people would appreciate this feature!


@pacien commented on GitHub (Jan 9, 2025):

Patch allowing that: https://github.com/ollama/ollama/pull/8340

Reference: github-starred/ollama#49870