[GH-ISSUE #6540] actively retrieves the content returned from the web page #50625

Closed
opened 2026-04-28 16:35:49 -05:00 by GiteaMirror · 1 comment

Originally created by @Nurburgring-Zhang on GitHub (Aug 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6540

Expected behavior: Ollama can automatically identify the limits of the model, and when a question exceeds the model's capacity, Ollama actively retrieves content from the web, passes the returned content to the model, and the model analyzes that content and gives the final answer.
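For concreteness, here is a minimal sketch of the requested flow as it could be prototyped today as a wrapper around Ollama's HTTP API, rather than inside Ollama itself. It assumes a local server at http://localhost:11434; the `search_web()` placeholder, the probe prompt, and the `NEED_WEB` sentinel are all illustrative assumptions, not existing Ollama features.

```python
# Sketch of the requested hybrid flow, implemented as a wrapper around
# Ollama's HTTP API rather than inside Ollama itself. Assumptions:
#   - an Ollama server is running at http://localhost:11434
#   - search_web() is a hypothetical stand-in for a real search backend
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3.1"  # any locally pulled model


def ask(messages: list[dict]) -> str:
    """Send a non-streaming chat request to the local Ollama server."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]


def search_web(query: str) -> str:
    """Hypothetical placeholder: a real implementation would call a
    search API and extract text from the returned pages."""
    raise NotImplementedError("plug in your own search backend here")


def answer(question: str) -> str:
    # Step 1: ask the model to flag questions beyond its own knowledge.
    probe = ask([
        {"role": "system",
         "content": "If you can answer from your own knowledge, answer. "
                    "Otherwise reply with exactly: NEED_WEB"},
        {"role": "user", "content": question},
    ])
    if probe.strip() != "NEED_WEB":
        return probe

    # Step 2: the question exceeds the model's capacity, so retrieve
    # web content and let the model analyze it for the final answer.
    context = search_web(question)
    return ask([
        {"role": "system",
         "content": "Answer the question using the retrieved web content."},
        {"role": "user",
         "content": f"Web content:\n{context}\n\nQuestion: {question}"},
    ])
```

The substance of the request is to move this probe/retrieve/answer loop behind Ollama's own API so users do not need such a wrapper.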

GiteaMirror added the feature request label 2026-04-28 16:35:49 -05:00

@igorschlum commented on GitHub (Aug 28, 2024):

Hi @Nurburgring-Zhang, thank you for your suggestion! What you're proposing is a very interesting vision for the future of Ollama. The idea of a hybrid solution that combines local models with automatic web retrieval to provide more comprehensive answers is indeed promising and could significantly enhance the capabilities of LLMs.

However, implementing such a feature would require many steps and substantial technical development, including considerations for model capacity, security, data privacy, and performance. At this time, GitHub is primarily used for addressing short-term issues and features that are achievable in the near future.

This idea might be better suited for discussion on the Discord server, where the community often talks about future possibilities and explores more experimental concepts. Feel free to join the conversation there!

Since we aim to keep GitHub focused on actionable items to avoid having too many open tickets that can detract from the project's progress, it would be great if you could close this ticket.

Thanks again for your feedback and for sharing your thoughts!
