[PR #416] [CLOSED] treat ollama run model < file as entire prompt, not prompt-per-line #36016

Closed
opened 2026-04-22 20:44:29 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/416
Author: @sqs
Created: 8/26/2023
Status: Closed

Base: main ← Head: run-file


📝 Commits (1)

  • ff9a93a treat ollama run model < file as entire prompt, not prompt-per-line

📊 Changes

1 file changed (+13 additions, -15 deletions)


📝 cmd/cmd.go (+13 -15)

📄 Description

Previously, ollama run treated a non-terminal stdin (such as ollama run model < file) as containing one prompt per line. To run inference on a multi-line prompt, the only non-API workaround was to run ollama run interactively and wrap the prompt in """...""".

Now, ollama run treats a non-terminal stdin as containing a single prompt. For example, if myprompt.txt is a multi-line file, then ollama run model < myprompt.txt would treat myprompt.txt's entire contents as the prompt.
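For readers curious about the shape of such a change, here is a minimal standalone Go sketch of the new behavior (not the actual cmd/cmd.go diff): when stdin is not a terminal, its entire contents become one prompt. The sendPrompt helper and the character-device check are stand-ins for the real generate call and terminal detection in cmd/cmd.go; the old per-line behavior is shown as a comment.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// sendPrompt stands in for the real generate call in cmd/cmd.go.
func sendPrompt(prompt string) {
	fmt.Printf("prompt (%d bytes): %q\n", len(prompt), prompt)
}

func main() {
	// Stand-in terminal check: a character device means stdin is a terminal.
	if fi, err := os.Stdin.Stat(); err == nil && fi.Mode()&os.ModeCharDevice != 0 {
		fmt.Println("interactive mode: unchanged by this PR")
		return
	}

	// Old behavior (roughly): a bufio.Scanner looped over stdin and called
	// the generate path once per line.

	// New behavior: read all of stdin and treat it as a single prompt.
	input, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "error reading stdin:", err)
		os.Exit(1)
	}
	sendPrompt(string(input))
}
```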

This breaks backwards compatibility, but I believe the new behavior is better than the old one. It is also strictly more powerful, since callers who want the old behavior can split a file by lines outside of Ollama and invoke ollama run once per line on their own.

This is related to https://github.com/jmorganca/ollama/issues/357, but that refers to interactive usage.

Fixes #568


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-22 20:44:29 -05:00

Reference: github-starred/ollama#36016