[GH-ISSUE #1721] How to enable Ollama to read the contents of a directory? #63013

Closed
opened 2026-05-03 11:15:50 -05:00 by GiteaMirror · 3 comments

Originally created by @oliverbob on GitHub (Dec 26, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1721

Since it can read '/home/user/whateverfile.it.is', would it be possible for Ollama to read an entire directory, or a 'repo' for that matter, so we can talk to it?

If it is not yet a feature, it would be neat to add this for developers, to help us quickly solve our coding challenges.

Thanks.


@technovangelist commented on GitHub (Dec 27, 2023):

Hi @oliverbob, thanks for submitting this issue. To read files into a prompt, you have a few options.

First, you can use the features of your shell to pipe the contents of a file into the prompt.

To read in more than a single file, you need a few extra steps, because the contents of your files are probably bigger than the context size of the model. For that you can use a technique known as RAG (retrieval-augmented generation). We have a few examples here in our repo that show you how to do RAG with Ollama. Essentially, it comes down to importing your content into some sort of data store, usually in a special format that is semantically searchable; filtering that content based on a query; and then feeding the filtered content, along with the prompt, to the model, which generates an answer.

The third option is to let someone else build RAG for you. On the front README of this repo there is a list of community projects, and some of them do various forms of RAG on your files.
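For illustration, here is a minimal sketch of that import/filter/generate flow against Ollama's REST API. It assumes a local server on the default port 11434, an embedding model such as `nomic-embed-text` already pulled, and plain in-memory cosine similarity standing in for a real vector store; the directory path, file pattern, and model names are placeholders, not anything prescribed by Ollama.

```python
import json
import math
import pathlib
import urllib.request

OLLAMA = "http://localhost:11434"

def api(path, payload):
    # Minimal JSON POST helper for the Ollama REST API.
    req = urllib.request.Request(
        f"{OLLAMA}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(text):
    # /api/embeddings returns {"embedding": [floats]}.
    return api("/api/embeddings",
               {"model": "nomic-embed-text", "prompt": text})["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# 1. Import: embed every matching file under a directory.
store = []
for path in pathlib.Path("./my-repo").rglob("*.md"):  # placeholder path/pattern
    text = path.read_text(errors="ignore")
    store.append((text, embed(text)))

# 2. Filter: rank the stored content against the query embedding.
question = "How does the build script work?"
qvec = embed(question)
top = sorted(store, key=lambda item: cosine(item[1], qvec), reverse=True)[:3]

# 3. Generate: feed the best-matching content plus the question to the model.
context = "\n\n".join(text for text, _ in top)
answer = api("/api/generate", {
    "model": "llama2",
    "prompt": f"Answer using this context:\n{context}\n\nQuestion: {question}",
    "stream": False,
})["response"]
print(answer)
```

A real setup would chunk large files before embedding and persist the vectors somewhere instead of recomputing them on every run.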

Let me know if that answers your questions.


@xiaoyuvax commented on GitHub (Feb 27, 2024):

Can you be more specific about those third-party tools or services? I want Ollama, together with any of the models, to respond relevantly according to my local documents (maybe retrieved via RAG). What exactly should I do to use RAG?
That Ollama cannot access the internet, or a knowledge base stored in a database, limits its usability. Is there any way for Ollama to access Elasticsearch or any other database for RAG?
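One way to wire that up is to let Elasticsearch act as the semantically searchable store: generate embeddings with Ollama, index them into a `dense_vector` field, and use Elasticsearch 8's kNN search at query time. Below is a rough sketch, assuming a local security-disabled Elasticsearch 8.x, the `nomic-embed-text` model (which produces 768-dimensional vectors), and placeholder index and model names:

```python
import requests                          # pip install requests elasticsearch
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
OLLAMA = "http://localhost:11434"

def embed(text):
    # Ollama's embeddings endpoint returns {"embedding": [floats]}.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return r.json()["embedding"]

# One-time setup: a dense_vector field makes the index kNN-searchable (ES 8.x).
es.indices.create(index="docs", mappings={"properties": {
    "text": {"type": "text"},
    "embedding": {"type": "dense_vector", "dims": 768,
                  "index": True, "similarity": "cosine"},
}})

# Ingest local documents together with their embeddings.
for doc in ["first local document...", "second local document..."]:
    es.index(index="docs", document={"text": doc, "embedding": embed(doc)})
es.indices.refresh(index="docs")

# Retrieve the nearest chunks for a question, then hand them to the model.
question = "What do my documents say about X?"
hits = es.search(index="docs",
                 knn={"field": "embedding", "query_vector": embed(question),
                      "k": 3, "num_candidates": 10})["hits"]["hits"]
context = "\n".join(h["_source"]["text"] for h in hits)
answer = requests.post(f"{OLLAMA}/api/generate",
                       json={"model": "llama2", "stream": False,
                             "prompt": f"Context:\n{context}\n\n"
                                       f"Question: {question}"})
print(answer.json()["response"])
```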


@jmorganca commented on GitHub (May 10, 2024):

Hi @oliverbob, this isn't something Ollama supports today. There are some open issues about built-in vector-store functionality, so I'll merge this with #834.
