[GH-ISSUE #10107] How to connect to ollama server running in a k8s cluster? #6628

Closed
opened 2026-04-12 18:18:21 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @khteh on GitHub (Apr 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10107

I have deployed Ollama as a StatefulSet (STS) in a local k8s cluster. I don't see how I could configure an LLM/RAG application to connect to anything other than `localhost`? https://python.langchain.com/docs/integrations/chat/ollama/

```
from langchain.chat_models import init_chat_model

init_chat_model("llama3.2", model_provider="ollama", temperature=0)
```
Author
Owner

@khteh commented on GitHub (Apr 3, 2025):

`base_url`

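For anyone landing here: `base_url` is the connection parameter on LangChain's `ChatOllama` client, and `init_chat_model` forwards extra keyword arguments to the underlying model class. A minimal sketch, assuming the Ollama StatefulSet is exposed through a Service named `ollama` in an `ollama` namespace on the default port 11434 (the Service DNS name below is an assumption; substitute your own):

```
from langchain.chat_models import init_chat_model

# Point the client at the in-cluster Ollama Service instead of localhost.
# "ollama.ollama.svc.cluster.local" assumes a Service named "ollama" in the
# "ollama" namespace; replace with your own <service>.<namespace> name.
llm = init_chat_model(
    "llama3.2",
    model_provider="ollama",
    temperature=0,
    base_url="http://ollama.ollama.svc.cluster.local:11434",
)

print(llm.invoke("Hello from inside the cluster!").content)
```

If the application runs outside the cluster, forwarding the Service locally (e.g. `kubectl port-forward svc/ollama 11434:11434`) and setting `base_url="http://localhost:11434"` works the same way.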
Reference: github-starred/ollama#6628