[PR #1126] [MERGED] Add ability to pass prompt in via standard input such as ollama run model < file #57169

Closed
opened 2026-04-29 11:44:34 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/1126
Author: @jmorganca
Created: 11/14/2023
Status: Merged
Merged: 11/14/2023
Merged by: @jmorganca

Base: main ← Head: jmorgan-run-file


📝 Commits (1)

  • b06b8d5 treat ollama run model < file as entire prompt, not prompt-per-line

📊 Changes

1 file changed (+29 additions, -53 deletions)


📝 cmd/cmd.go (+29 -53)

📄 Description

Originally opened by @sqs in #416

Previously, ollama run treated a non-terminal stdin (such as ollama run model < file) as containing one prompt per line. To run inference on a multi-line prompt, the only non-API workaround was to run ollama run interactively and wrap the prompt in """...""".

Now, ollama run treats a non-terminal stdin as containing a single prompt. For example, if myprompt.txt is a multi-line file, then ollama run model < myprompt.txt would treat myprompt.txt's entire contents as the prompt.
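Under the hood the change is confined to cmd/cmd.go. Below is a minimal Go sketch of the idea, assuming the standard-library way of detecting a non-terminal stdin; the actual diff may differ in detail:

package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    info, err := os.Stdin.Stat()
    if err != nil {
        panic(err)
    }
    // A pipe or a redirected file is not a character device, so this
    // branch is taken for ollama run model < file and cat file | ollama run.
    if info.Mode()&os.ModeCharDevice == 0 {
        data, err := io.ReadAll(os.Stdin)
        if err != nil {
            panic(err)
        }
        // The entire stdin contents become a single prompt.
        fmt.Printf("prompt is %d bytes\n", len(data))
    } else {
        fmt.Println("stdin is a terminal: run interactively")
    }
}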

Examples:

cat mycode.py | ollama run codellama "what does this code do?"
cat essay.txt | ollama run llama2 "Summarize this story in 5 points. Respond in json." --format json | jq
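In both examples the piped file and the quoted argument end up in a single prompt. How the two are joined is not shown on this page; the helper below is a hypothetical illustration whose ordering and newline separator are assumptions, not taken from the diff:

package main

import "fmt"

// buildPrompt is a hypothetical helper that joins the command-line prompt
// with piped stdin. The order and separator are assumed for illustration.
func buildPrompt(argPrompt, stdinContent string) string {
    switch {
    case stdinContent == "":
        return argPrompt
    case argPrompt == "":
        return stdinContent
    default:
        return argPrompt + "\n" + stdinContent
    }
}

func main() {
    fmt.Println(buildPrompt("what does this code do?", "print('hi')"))
}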

To reproduce the previous prompt-per-line behavior, create a bash script that reads each line from stdin and calls ollama run:

#!/bin/bash
# Usage: pipe or redirect input; $1 names the model to run.
while IFS= read -r line; do
    # Send each input line to the model as its own prompt.
    echo "$line" | ollama run "$1"
done
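
Saved as, say, runlines.sh (a name chosen here for illustration), it can be invoked as ./runlines.sh llama2 < lines.txt to send each line of lines.txt to the model as a separate prompt.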

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

Reference: github-starred/ollama#57169