[GH-ISSUE #15615] Guided Ollama install mission in KubeStellar Console #56477

Open
opened 2026-04-29 10:52:53 -05:00 by GiteaMirror · 2 comments

Originally created by @clubanderson on GitHub (Apr 16, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15615

We built a guided install mission for Ollama inside [KubeStellar Console](https://console.kubestellar.io), a standalone Kubernetes dashboard (unrelated to legacy kubestellar/kubestellar, kubeflex, or OCM — zero shared code).

→ **[Open the Ollama install mission](https://console.kubestellar.io/missions/install-ollama)**

### What the mission does

The mission deploys Ollama as a Kubernetes workload with a persistent volume for the model cache and exposes its OpenAI-compatible shim at `/v1/chat/completions`. Copy-paste the steps or run them directly from the Console against your cluster. Validation steps confirm the Deployment rolls out, the model is pulled, and inference works end-to-end.
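
For a sense of what those validation steps boil down to, here is a rough sketch — the `ollama` namespace, Deployment/Service names, and `llama3` model are assumptions for illustration, not the mission's actual manifests:

```bash
# Assumed names: namespace/deployment/service "ollama", model "llama3".

# 1. Confirm the Deployment rolls out.
kubectl -n ollama rollout status deployment/ollama --timeout=300s

# 2. Pull a model into the persistent cache (survives pod restarts via the PVC).
kubectl -n ollama exec deploy/ollama -- ollama pull llama3

# 3. Exercise the OpenAI-compatible endpoint end-to-end.
kubectl -n ollama port-forward svc/ollama 11434:11434 &
curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Say hello"}]}'
```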

### Why we're reaching out

Ollama is the first local LLM runner the Console integrates at the native provider level. kc-agent ships with Ollama registered as a chat-capable provider whose default URL is `http://127.0.0.1:11434`, so on any workstation where Ollama is running, the Console's agent selector lists "Ollama (Local)" as available automatically, with no configuration. To point at a remote Ollama instead (for example an in-cluster Service URL or a LAN server), set `OLLAMA_URL`.
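
A minimal sketch of the two modes, based on the default URL and `OLLAMA_URL` override described above (the in-cluster Service name and namespace below are hypothetical):

```bash
# Confirm a local Ollama is reachable at the documented default.
# (/api/tags is Ollama's model-listing endpoint, used here only as a liveness check.)
curl -s http://127.0.0.1:11434/api/tags

# Override: point kc-agent at a remote Ollama instead, e.g. an in-cluster
# Service (hypothetical name/namespace) or a LAN server.
export OLLAMA_URL="http://ollama.ollama.svc.cluster.local:11434"
# export OLLAMA_URL="http://192.168.1.50:11434"
```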

The broader local-LLM integration also covers llama.cpp, LocalAI, vLLM, LM Studio, Red Hat AI Inference Server, and Open WebUI — with the goal of giving operators in regulated or air-gapped environments a well-lit path to keeping chat content inside their trust boundary. See the [Local LLM Strategy](https://docs.kubestellar.io/console/local-llm-strategy) docs page for the full decision matrix and topology diagrams.

### Install

Local (connects to your current kubeconfig context):

```bash
curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash
```

Deploy into a cluster:

```bash
curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/deploy.sh | bash
```
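
Either script prints where the Console ends up; as a quick sanity check afterwards (exact namespace and labels vary by release, so this just greps broadly):

```bash
# Broad check that the Console pods came up (namespace/labels may differ).
kubectl get pods --all-namespaces | grep -i console
```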

---

Mission definitions are open source — PRs welcome at [platform-ollama.json](https://github.com/kubestellar/console-kb/blob/master/fixes/platform-install/platform-ollama.json). Feel free to close if not relevant.


@PureBlissAK commented on GitHub (Apr 18, 2026):

## 🤖 Automated Triage & Analysis Report

**Issue**: #15615
**Analyzed**: 2026-04-18T18:19:45.011714

### Analysis

- **Type**: unknown
- **Severity**: medium
- **Components**: unknown

### Implementation Plan

- **Effort**: medium
- **Steps**:

*This issue has been triaged and marked for implementation.*


@clubanderson commented on GitHub (Apr 23, 2026):

Apologies — the mission link in the original post was temporarily broken due to a file naming issue on our end. It's now fixed and live:

**[Click here to launch the guided install mission →](https://console.kubestellar.io/missions/install-ollama)**

Thanks for your patience, and sorry for the inconvenience!


Reference: github-starred/ollama#56477