[GH-ISSUE #7823] Multiple prompt support over stdin. #5006

Closed
opened 2026-04-12 16:04:38 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @WyvernDotRed on GitHub (Nov 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7823

edit: Important clarifications on a misunderstanding on my part about the current implementation are at https://github.com/ollama/ollama/issues/7823#issuecomment-2542541753
The points below remain relevant, but may be easier to follow after reading that comment first.

In pull 416, the ability to enter multiple prompts over STDIN directly seems to have been removed.

To work around this, I recently made an expect script (assembled in Fish) which filters the interface-related output elements out of ollama run.
In version 0.4.3, this achieves the desired effect of only outputting model output through the pipe, while allowing for multiple prompts.
I will share my script in the first comment.

As I mentioned in issue 7820, this method of input has been broken as of version 0.4.4.
And as mentioned by @rick-github in their comment, this is likely by design, due to pull 7360.
At face value that pull seems to completely cover my use-case and avoids the jank of a script parsing the interface information.

But the current implementation breaks entering prompts manually or through expect (my issue 7820 again) while piping the output to tee or similar.
I will explore the script based workaround proposed by @rick-github, either adding it to my expect based workaround or replacing that with it.
Like my script, this is still an excessive amount of workaround to make something that would be expected functionality work.

Potential solutions:

  • Not having ollama run immediately close when output is redirected, as the implementation from pull 7360 does.
  • Having some form of delimiter within the """ syntax, for starting a new prompt, like:
"""
First prompt.
---
Second prompt.
"""

This could start processing the earlier prompt as soon as the --- delimiter is received, to allow for a full dialogue (a rough sketch of this idea follows after this list).

  • Some feature flag to interpret STDIN as user input, similar to pull 6130 (which pull 7360 replaced).
    Perhaps --input user, --nogui, --keeppipe, etc.
  • Having multiple prompts passed as arguments processed sequentially, like the shared script does.
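
To make the delimiter idea concrete: a rough, hypothetical sketch of that behaviour as a wrapper around the existing Python client (the --- handling is only this proposal, not anything ollama currently does; the model name is just an example):

```python
#!/usr/bin/env python3
# Hypothetical sketch of the proposed "---" delimiter: read STDIN, split it on
# lines containing only "---", and send each chunk as a separate prompt within
# the same chat context. Only the client.chat() calls are existing ollama API.
import sys
import ollama

client = ollama.Client()
messages = []

def ask(prompt):
    messages.append({"role": "user", "content": prompt})
    reply = client.chat(model="llama3.2", messages=messages)  # example model
    content = reply["message"]["content"]
    print(content, flush=True)
    messages.append({"role": "assistant", "content": content})

chunk = []
for line in sys.stdin:
    if line.strip() == "---":   # proposed delimiter: finish the current prompt
        if chunk:
            ask("".join(chunk))
        chunk = []
    else:
        chunk.append(line)
if chunk:                       # whatever remains after the last delimiter
    ask("".join(chunk))
```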

Of course, this is only a feature request; it's fine if piping multiple prompts over STDIN or as arguments is not supported.
In that case I would be fine with looking further, and would be interested in suggestions for other frontends or libraries to use.

P.S. Since making such a feature request is a first for me, constructive criticism is welcome if you can spare the time.
Either way, have a nice rest of your day!

GiteaMirror added the feature request label 2026-04-12 16:04:38 -05:00
Author
Owner

@WyvernDotRed commented on GitHub (Nov 24, 2024):

Addendum: A function/script which provides the requested functionality in ollama 0.4.3.

```fish
#!/bin/fish

# The first argument is the model to use. This argument is susceptible to shell injection, WONTFIX.
# Subsequent arguments become individual prompts, use \n or \r characters for multiple lines.
# Returns the full model output as one text over STDOUT, by cat-ing a mktemp tempfile.

function prompt-ollama

    # We are piping to a temporary file within expect to separate the ollama throbber.
    # The use of a temporary file is beneficial for longer or interrupted prompt output.
    set TEMP_FILE ( mktemp )

    set MODEL $argv[1]
    set PROMPT_LIST $argv[2..]

    # Assembles a script for the expect command to run the desired model and queue our prompts.
    # The full ollama output gets stored to the tempfile, including interface related elements.
    # All prompts get queued within """ syntax, \n and \r characters become newlines to the model.
    # We escape the symbols I found could cause shell injection by breaking out of the send command or the """ syntax.
    set EXPECT_SCRIPT ( echo \
    "
        set timeout -1
        spawn fish -c \"ollama run $MODEL > $TEMP_FILE\"
        $( for PROMPT in $PROMPT_LIST
            echo \
            "
                send -- \"\\\"\\\"\\\"\r\"
                send -- \"$( echo $PROMPT | sed \
                '
                    s/\\\\/\\\\\\\\/g
                    s/\\\\\\\\n/\\\\n/g
                    s/\\\\\\\\r/\\\\r/g
                    s/\"/\\\\\"/g
                    s/\$/\\\\\$/g
                    s/\[/\\\\\[/g
                ' \
                )\r\"
                send -- \"\\\"\\\"\\\"\r\"
            "
        end )
        send -- \"/bye\r\"
        send -- \"\x04\"
        interact
    " \
    | string collect -N )

    # Piping to /dev/null to remove the ollama throbber from output.
    # Comment out > /dev/null when debugging in a shell.
    expect -c $EXPECT_SCRIPT > /dev/null

    # Ollama output gets cleaned of interface-related lines at this final step.
    # This is so false positives can be recovered from the leftover tempfile if required.
    cat $TEMP_FILE | grep -v -e '^>>> ' -e '^\.\.\. ' -e '^\033'

end
```

The syntax is prompt-ollama [model] '[prompt]', with further prompts being queued and answered like in a chat.
For those unaware of how to import a Fish function and wanting to test this:
For a quick test, pasting the entire script into a Fish shell makes it available for that session.
To add it permanently, write the script to ~/.config/fish/functions/prompt-ollama.fish
After which it can be used from other shells with fish -c "prompt-ollama [model] '[prompt]'"

Author
Owner

@WyvernDotRed commented on GitHub (Nov 24, 2024):

These are further answers to @dhiltgen's question in my issue 7820.

In some cases, I am relying on the prior context, like I would in a chat.
This varies from separating questions, to work around a smaller model otherwise shortening its answers, to the ollama run [model] | tee [file] use-case I mentioned in the issue.
Or simply sending a few 'continue' or similar prompts and leaving this running in the background, without mixing these prompts with the already given output (like the interactive chat does).

If it was not for that, the existing single prompt restriction on both STDIN and arguments would not be an issue.
Pull 7360 effectively fixes this scripting use-case, within this limit.
And before that, separating out STDERR and using grep -v -e '>>> ' -e '^\.\.\. ' -e '^.\[?2004l' achieves the same result, barring false positives.

Similarly, so would ollama run [model] $( echo -e "\"\"\"\n$( cat [file] )\n\"\"\"" ) before pull 416.
But this does open up the can of worms of shell injection and accidental escapes from the """ block, so it was sensible to have fixed.

That still leaves the removal of multiple prompts through any means other than an interactive chat.
With the program now exiting immediately, without having received an EOF, this requires fully emulating a user chat session.
That brings us back to issue 6120, the very thing I wrote a monstrosity of a script to work around.
I will note that this script was written as an educational exercise in getting comfortable with Fish shell, fuelled by a "this should work, let's actually make it" mindset, not as a practical or correct solution.

In this specific case the catalyst was wanting to automate parts of an upcoming "prompt engineering" assignment at my university, but also an interest in further small, local projects.
This example specifically entails having different models generate texts and having other models analyse these, something most easily set up in a shell script.
Here the desire for multiple prompts stems from comparing continued questions in one session versus a clean session with fresh context.
I am additionally attempting to conserve resources on limited hardware, though I have yet to test which approach is more efficient.

It is reasonable to be expected to use the API instead.
Before going the scripting route I have explored the API and got single responses through curl so far, matching the ollama run behaviour.
But making multiple prompts work seems like more work and complexity than using the existing chat-like functionality.
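
For reference, that extra work mostly comes down to resending the accumulated message history with each request; a minimal sketch with the ollama Python client (the model name and prompts are only examples):

```python
import ollama

client = ollama.Client()
messages = []
for prompt in ["an apple costs $1.50", "how much do two apples cost?"]:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat(model="llama3.2", messages=messages)  # example model
    print(reply["message"]["content"])
    # Keep the answer in the history so the next prompt sees the full context.
    messages.append({"role": "assistant", "content": reply["message"]["content"]})
```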

So in the end I would find or make a minimal program which does close to the same, so I can easily whip up further shell scripts using it.
That seems odd, since ollama run has exactly the needed functionality and now works mostly as expected in a shell script, yet arbitrarily limits this to single prompts.
I understand this being the default, but do not grasp why there is no flag or syntax to do otherwise.

Author
Owner

@WyvernDotRed commented on GitHub (Dec 13, 2024):

In case this request gets anywhere in the future, it might help to clarify some things:
Having had a peek at the code, Ollama seems to switch to a batch-processing mode of single prompts.
My expectation was that it would continue as a normal chat, but exclude interface elements and, where relevant, interpret input in a more script-friendly manner.
That expectation is in line with most coreutils I have used, whereas Ollama completely changing the way it processes input, without any command flags, threw me off guard.
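
For illustration, the kind of check behind such a mode switch is a TTY test on the standard streams; a generic sketch, not Ollama's actual code:

```python
import sys

# Generic sketch of TTY-based mode switching, not taken from Ollama itself.
if sys.stdin.isatty() and sys.stdout.isatty():
    print("interactive: show the >>> prompt and keep the chat loop open")
else:
    print("non-interactive: treat STDIN as a single prompt, answer once, exit")
```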

The use-case was locally scripting prompts, possibly with multiple queued queries in the same context.
For example, storing clean output from the chat without manual copy-pasting, while interacting with STDIN by hand.
Or generating texts with different models, then letting models analyse and answer questions about the texts, storing the results.
The downgraded version functioned long enough for me to use the shared script to achieve this, and that assignment is now done.

The merits of the current batch prompting implementation versus the requested script-friendly chatting are not the most important point here; if the software can reasonably do either, I feel that it should.
Most problematic was the lack of documentation and the unexpected nature of this switch to batch processing mode when redirected standard output gets detected.
This made well-considered changes in behaviour appear as bugs or obvious mistakes, the actual mistake being my unmet expectation that the behaviour of ollama run without feature flags would stay consistent.

Author
Owner

@rick-github commented on GitHub (Dec 14, 2024):

```python
#!/usr/bin/env python3

import ollama
import argparse
import readline
import sys

parser = argparse.ArgumentParser()
parser.add_argument("-s", "--stream", help="Enable streaming", default=False, action="store_true")
parser.add_argument("model")
parser.add_argument("prompts", nargs='*')
args = parser.parse_args()

client = ollama.Client()
userprompt = ">>> " if sys.stdin.isatty() else ""

# Send one user prompt, print the reply, and return the updated message history.
def chat(messages, prompt):
  messages.append({"role":"user", "content": prompt})
  response = client.chat( model=args.model, messages=messages, stream=args.stream)
  m = ''
  for r in response if args.stream else [response]:
    c = r['message']['content']
    print(c, end='', flush=True)
    m = m + c
  print()
  messages.append({"role": "assistant", "content": m})
  return messages

messages = []
# First handle any prompts passed as command line arguments...
for prompt in args.prompts:
  messages = chat(messages, prompt)
# ...then keep reading prompts (from a pipe or a terminal) until EOF or /bye.
while True:
  try:
    prompt = input(userprompt)
  except:
    break
  if prompt == "/bye":
    break
  messages = chat(messages, prompt)
print()
```

```console
$ ./ollama-pipe.py llama3.2 'an apple costs $1.50'
That's an affordable price for a delicious apple! Would you like to know the total cost of a few apples or perhaps calculate how many apples you can buy with a certain amount of money?
>>> how much does two apples cost?
Easy math!

If one apple costs $1.50, then:

2 apples = 2 x $1.50
= $3.00

So, two apples will cost you $3.00!
>>> /bye
```

```console
$ echo 'how much does two apples cost?' | ./ollama-pipe.py llama3.2 'an apple costs $1.50'
That's a relatively affordable price for an apple! Would you like to know the cost of some other common items, or is there something else I can help you with?
Easy calculation!

If one apple costs $1.50, then two apples would cost:

$1.50 x 2 = $3.00

So, two apples would cost $3.00.
```

```console
$ ./ollama-pipe.py llama3.2 'an apple costs $1.50' | tee /tmp/o.log
That's a relatively affordable price for an apple! Would you like to calculate the total cost of a certain number of apples, or perhaps estimate how many apples you can buy with a specific amount of money?
>>> how much does 3 apples cost?
Easy math!

If one apple costs $1.50, then 3 apples would cost:

$1.50 x 3 = $4.50

So, 3 apples would cost you $4.50!
>>> /bye

$ cat /tmp/o.log
That's a relatively affordable price for an apple! Would you like to calculate the total cost of a certain number of apples, or perhaps estimate how many apples you can buy with a specific amount of money?
>>> Easy math!

If one apple costs $1.50, then 3 apples would cost:

$1.50 x 3 = $4.50

So, 3 apples would cost you $4.50!
>>> 
```

Author
Owner

@jmorganca commented on GitHub (Dec 29, 2024):

@WyvernDotRed thanks so much for the issue and sorry for the slow response. ollama run is designed to take a single input and output a single output - it can be scripted as shown above to prompt the model multiple times (or this can be done via the Chat API). Hope this helps
