[GH-ISSUE #2505] How do I specify parameters when launching ollama from command line? #1464

Closed
opened 2026-04-12 11:22:14 -05:00 by GiteaMirror · 10 comments

Originally created by @dtp555-1212 on GitHub (Feb 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2505

I saw something online that said to try `ollama run llama2:13b -temperature 0.0`, but that does not work. I am also interested in setting the seed, so rerunning will do the same process rather than something different each time. (E.g. on a classification task, sometimes it says valid/invalid, sometimes it says correct/incorrect, and sometimes it is very verbose, explaining why it made its decision. I want to find a terse method and stick with it.)

Thanks in advance

@virt-10 commented on GitHub (Feb 15, 2024):

I am not sure if there is another way of doing this, but you can make a custom Modelfile.

```
ollama show llama2:13b --modelfile >> modelfile-name
```

Append your settings to `modelfile-name`. It'll look something like this:

```
# I don't have this model, so I don't know if this is the correct template
# The only important thing here is importing llama2:13b and your changes at the bottom

FROM llama2:13b
# base settings
TEMPLATE """
[INST] <<SYS>>{{ .System }}<</SYS>>

{{ .Prompt }} [/INST]
"""

PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
PARAMETER stop "<<SYS>>"
PARAMETER stop "<</SYS>>"

# your changes
PARAMETER temperature 0.0
PARAMETER seed 0
```

For more options, check the docs: https://github.com/ollama/ollama/blob/main/docs/modelfile.md

After saving, run (you can use any name for your model):

```
ollama create model-name -f ./modelfile-name
```
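
Once created, the parameters travel with the new model, so a plain `ollama run` picks them up. A quick way to double-check is the same `--modelfile` flag used above (assuming you named the model `model-name`):

```
# run the derived model; the temperature/seed from the Modelfile apply
ollama run model-name

# confirm the settings were baked into the new model
ollama show model-name --modelfile
```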

@dtp555-1212 commented on GitHub (Feb 15, 2024):

thanks I will give that a shot.

@asterbini commented on GitHub (Jul 12, 2024):

Notice that you can set parameters from the prompt, after the chat has started:
`/set parameter temperature 0`
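
For example, a session might look like this (sketch; the confirmation line follows the format ollama prints, but exact wording may vary by version):

```
$ ollama run llama2:13b
>>> /set parameter temperature 0
Set parameter 'temperature' to '0'
>>> /set parameter seed 42
Set parameter 'seed' to '42'
>>> Is this record valid or invalid? Answer in one word.
```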

@mhagnumdw commented on GitHub (Oct 24, 2024):

It would be interesting to pass the parameters directly on the ollama run model command line. Do you plan on implementing this?

@axemaster commented on GitHub (Dec 9, 2024):

> Notice that you can set parameters from the prompt, after the chat has started `/set parameter temperature 0`

This (`/set ...`) didn't work when I used it in a prompt file, that is, a file that gets fed into ollama's STDIN instead of using the process interactively, e.g.

`ollama run < promptfile.txt`

Instead it was interpreted as a literal prompt and the LLM responded with a summary of Rosa commands for me.
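
A minimal reproduction of what I mean (the model name is just an example; any pulled model behaves the same):

```
printf '/set parameter temperature 0\nWhat is 2+2?\n' | ollama run llama3.2
```

The `/set` line is treated as part of the prompt text rather than executed.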

Is it supposed to? It would be a nice feature.

@ichalov commented on GitHub (Mar 15, 2025):

@axemaster @cirosantilli
The problem can be worked around by using a wrapper `expect` script. You can probably generate one by using a prompt like this: *Develop an expect script to sequentially pass content from two files into an external command-line tool supplied after a `--` command-line delimiter. The first file should be passed line by line, waiting for some answer after each line; the second file should be passed as a whole.*

Or here is my working version of this script: https://github.com/ichalov/gpt-tools/blob/main/call_ollama_with_params.exp
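
For illustration only, a hypothetical invocation matching that description (the actual argument order of `call_ollama_with_params.exp` may differ, so check the script itself):

```
# params.txt: one /set command per line, sent one at a time
printf '/set parameter temperature 0\n/set parameter seed 0\n' > params.txt

# prompt.txt: the prompt, sent as a whole
echo 'Is this record valid or invalid? Answer in one word.' > prompt.txt

# hypothetical argument order: params file, prompt file, then the wrapped command
./call_ollama_with_params.exp params.txt prompt.txt -- ollama run llama2:13b
```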

@cirosantilli commented on GitHub (Mar 18, 2025):

OK, the expect method works:

```
sudo apt install expect
```

then either directly in Bash:

```
expect \
  -c 'spawn ollama run llama3.2' \
  -c 'expect ">>> "' \
  -c 'send "/set parameter seed 0\r"' \
  -c 'expect ">>> "' \
  -c 'send "/set parameter num_predict 100\r"' \
  -c 'expect ">>> "' \
  -c 'send "/set parameter seed 0\r"' \
  -c 'expect ">>> "' \
  -c 'send "What is quantum field theory?\r"' \
  -c 'expect ">>> "' \
  -c 'send "/bye"' \
;
```

or from a script:

```
#!/usr/bin/expect -f
set prompt ">>> "
log_user 0
spawn ollama run [lindex $argv 0]
expect $prompt
send "/set parameter seed 0\r"
expect $prompt
send "/set parameter num_predict 100\r"
expect $prompt
send "[lindex $argv 1]\r"
expect -re "\n(.*?)$prompt"
puts -nonewline $expect_out(1,string)
send -- "/bye"
```

both seem to work.
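
For the script variant, the model name is the first argument and the prompt the second (the filename here is arbitrary):

```
chmod +x ollama-params.exp
./ollama-params.exp llama3.2 "What is quantum field theory?"
```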

Related pull request: https://github.com/ollama/ollama/issues/1415

Related question: https://genai.stackexchange.com/questions/699/how-to-set-ollama-temperature-from-command-line

Tested on ollama 0.5.13.

@cirosantilli commented on GitHub (May 14, 2025):

@ichalov did you find a solution for the control characters that get added to stdout with expect due to the progress indicator? https://github.com/ollama/ollama/issues/6120 Sad!!!

@ichalov commented on GitHub (May 18, 2025):

@cirosantilli
I somehow don't have this problem, maybe because I run it on Linux and not macOS. If I look at `out.txt` using `tail -f` during execution, it shows the rolling cursor itself. And that character doesn't make it into the final version of `out.txt` at all:

```
$ cat out.txt
spawn bash -c ... ollama run qwen2.5-coder:32b
/set parameter num_ctx 4096
The sky is
>>> /set parameter num_ctx 4096
Set parameter 'num_ctx' to '4096'
>>> The sky is
beautiful today! It's clear and blue with fluffy white clouds scattered
across it. How does the weather make you feel?

>>>
```

I don't have access to macOS to test possible solutions. I can only recommend trying to query some LLM with: "how to make expect skip passing certain characters". There seems to be an easy snippet to insert into an existing script.
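
Something along these lines might be enough, untested: filter the captured output through GNU sed to drop ANSI CSI sequences and stray carriage returns (this catches the common cursor/colour codes, not every possible control sequence). The same regex could also be applied with `regsub` inside the expect script itself.

```
# strip ESC [ ... <letter> sequences and trailing CRs from the captured transcript
sed -e 's/\x1b\[[0-9;?]*[a-zA-Z]//g' -e 's/\r$//' out.txt
```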

@cirosantilli commented on GitHub (May 18, 2025):

Hi, thanks. I'm on Linux. There are various escape sequences present, some of which contain regular ASCII chars, so it would require some thought to understand exactly what they are.
