[GH-ISSUE #2305] Allow reading from file while in ollama run prompt #47842

Open
opened 2026-04-28 05:28:10 -05:00 by GiteaMirror · 15 comments

Originally created by @iplayfast on GitHub (Feb 1, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2305

If we can tell a model to look at a picture, we should be able to tell it to read from a text file. There are so many cases where I want to frame a question with data or text, and that just doesn't work. But if I could say "read the file at ./mytext.txt" and it just sucked it all in as though it were keyboard input, that would be fantastic.

It could even be done before the LLM actually sees the "read the file at" command, as it could be prefiltered.

Also, "save output to file myfile.txt" would be useful.

GiteaMirror added the feature request label 2026-04-28 05:28:10 -05:00

@remy415 commented on GitHub (Feb 1, 2024):

Giving any AI unfettered access to your directories can be dangerous. What you would probably want to do is build your own interface using the Ollama API and have the interface pre-load your file and pass it to the API with your prompt. LangChain has some tools that can help with this, and Ollama has a Python package you can integrate with it.

https://github.com/ollama/ollama-python
https://github.com/langchain-ai/langchain
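
A minimal sketch of the kind of wrapper described above, assuming the `ollama` Python package is installed (`pip install ollama`), a local Ollama server is running, and a model has been pulled; the model name and helper name here are illustrative, not part of any Ollama interface:

```python
# Read the file yourself, then hand its contents to the model via the API,
# so the model never touches the filesystem directly.
import ollama

def ask_about_file(path: str, question: str, model: str = "mistral") -> str:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": f"{question}\n\n{text}"}],
    )
    return response["message"]["content"]

print(ask_about_file("./mytext.txt", "Summarize this text."))
```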


@mountaineerbr commented on GitHub (Feb 5, 2024):

The first input prompt can be a file path, so it will be read. No?


@remy415 commented on GitHub (Feb 5, 2024):

> The first input prompt can be a file path, so it will be read. No?

Yes, the way it's typically done is through the front end or through things like LangChain tools.

Also, a question for the general audience: would loadable files have to fit in the same context window as the prompt? If I remember correctly, other applications implement this through embeddings? Or am I remembering this incorrectly?


@iplayfast commented on GitHub (Feb 5, 2024):

Actually, what I was proposing was a filter between the LLM and the outside world. The prompt "Read from file test.txt" would not be passed to the LLM; the filter would catch it, read the file, and pass the contents to the LLM. "Write to file" would work much the same way.

Yes, this can be, and is, done outside Ollama, but it is such a common use case that it would be nice to be able to do it from the text interface (e.g. ollama run mistral).
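
A rough sketch of the proposed prefilter, assuming the `ollama` Python package; the "read from file" command syntax, the model name, and the loop itself are all illustrative guesses at the behaviour described above, not an existing Ollama feature:

```python
# A loop that sits between the keyboard and the model: "read from file <path>"
# is intercepted before the LLM ever sees it, and the file contents are sent
# in its place. All other input passes through unchanged.
import re
import ollama

READ_CMD = re.compile(r"^read from file\s+(?P<path>\S+)$", re.IGNORECASE)

def repl(model: str = "mistral") -> None:
    history = []
    while True:
        line = input(">>> ").strip()
        if line == "/bye":
            break
        m = READ_CMD.match(line)
        if m:
            # Intercepted: replace the command with the file contents.
            with open(m.group("path"), encoding="utf-8") as f:
                line = f.read()
        history.append({"role": "user", "content": line})
        reply = ollama.chat(model=model, messages=history)["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        print(reply)

repl()
```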


@easp commented on GitHub (Feb 5, 2024):

Ollama accepts text through stdin, so you can pipe/redirect text into it:

`ollama run MODEL "summarize this text" < file.txt`

`links -dump https://github.com/jmorganca/ollama/ | ollama run mistral --verbose "please summarize the provided text"`

(links is a text-mode browser. Its utility is diminished on the modern web because it doesn't support JavaScript.)

Unfortunately, there doesn't seem to be a way to do this and then chat with Ollama with the text loaded in context.
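
One possible workaround for that limitation, sketched with the `ollama` Python package: consume the piped text from stdin, then reopen the controlling terminal to keep chatting with the text already in context. The `/dev/tty` trick is Unix-only, and the model name is a placeholder:

```python
# Usage: python chat.py < file.txt
# Seeds the conversation with the piped file, then switches input back to the
# terminal so the session stays interactive.
import sys
import ollama

history = [{"role": "user", "content": sys.stdin.read()}]
with open("/dev/tty") as tty:  # regain interactive input after the pipe
    for line in tty:
        history.append({"role": "user", "content": line.strip()})
        reply = ollama.chat(model="mistral", messages=history)
        history.append({"role": "assistant", "content": reply["message"]["content"]})
        print(reply["message"]["content"])
```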


@stnava commented on GitHub (Aug 2, 2024):

any update on this?


@keevee09 commented on GitHub (Aug 4, 2024):

I successfully translated a text from Turkish to German using the following command in Linux:

`cat input.txt | ollama run mistral-nemo --verbose "Please translate the provided text to German"`


@iplayfast commented on GitHub (Aug 4, 2024):

The problem is that you can't interact with Ollama after this. It accepts the pipe and a line from stdin, and then it's done. It would be nice if, after all that context was loaded, you could interact with it.



@keevee09 commented on GitHub (Aug 5, 2024):

True. I have also had limited success with this format, anyway. Reading the, err, README.md from Ollama's GitHub page, I have tried this format:

`ollama run llama3.1 "Translate this file to German: $(cat turkish.txt)"`

and had more success. The piped method was not always translating the entire file.



@mrtngrsbch commented on GitHub (Dec 19, 2024):

> Ollama accepts text through stdin, so you can pipe/redirect text into it. `ollama run MODEL "summarize this text" < file.txt`
>
> `links -dump https://github.com/jmorganca/ollama/ | ollama run mistral --verbose "please summarize the provided text"` (links is a text-mode browser. Its utility is diminished on the modern web because it doesn't support JavaScript.)
>
> Unfortunately there doesn't seem to be a way to do this and then chat with Ollama with the text loaded in context.

Hey! It's not 'links' but 'lynx'!

`lynx -dump https://lemonde.fr/ | ollama run llama3.2 --verbose "please summarize news from this web in Spanish"`


@iplayfast commented on GitHub (Jan 9, 2025):

I still want this! :)


@kha84 commented on GitHub (Aug 4, 2025):

Yeah, the Ollama interactive CLI could use some more syntactic-sugar commands to be more user-friendly, like:

```
# ollama run devstral:latest
>>> /?
Available Commands:
  /set            Set session variables
  /show           Show model information
  /load <model>   Load a session or model
  /save <model>   Save your current session
  /clear          Clear session context
  /add <file>     Adds file to context <-----------
  /bye            Exit
  /?, /help       Help for a command
  /? shortcuts    Help for keyboard shortcuts

Use """ to begin a multi-line message.
```

@philippeflorent commented on GitHub (Feb 27, 2026):

> > The first input prompt can be a file path, so it will be read. No?
>
> Yes, the way it's typically done is through the front end or through things like LangChain tools.
>
> Also, question for the general audience: would the context size of loadable files have to fit in the same context as the prompt? If I remember correctly the way other applications implement this is through embeddings? Or am I remembering this incorrectly?

From qwen in Ollama:

"No, I cannot directly access or read files from your local hard drive (e.g., `...\schematic.md`) because I am a web-based AI and do not have access to your local file system."

@kha84 commented on GitHub (Feb 28, 2026):

I think the Ollama CLI, in dialog mode, could support the same `@/path/to/file` syntax as other tools to put the content of the referenced file into the context.
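
A sketch of what client-side `@file` expansion might look like; the regex, helper name, and behaviour are guesses at the suggestion above, not an existing feature of the Ollama CLI:

```python
# Replace each @/path/to/file token in a prompt with that file's contents
# before the prompt is sent to the model; tokens that don't name an existing
# file are left untouched.
import re
from pathlib import Path

AT_FILE = re.compile(r"@(\S+)")

def expand_at_files(prompt: str) -> str:
    def repl(m: re.Match) -> str:
        p = Path(m.group(1))
        return p.read_text(encoding="utf-8") if p.is_file() else m.group(0)
    return AT_FILE.sub(repl, prompt)

print(expand_at_files("Summarize @./mytext.txt in two sentences."))
```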


@philippeflorent commented on GitHub (Feb 28, 2026):

I found a solution (on Windows): right-click to copy/paste, then hit Ctrl+C until all lines are pasted. Voilà!
