[GH-ISSUE #14844] Feature Request: Add Octomind Integration to ollama launch #56089

Open
opened 2026-04-29 10:15:05 -05:00 by GiteaMirror · 0 comments

Originally created by @donhardman on GitHub (Mar 14, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14844

Summary

Add Octomind as a new integration for ollama launch. Octomind is a session-based AI development assistant written in Rust with MCP support and native Ollama provider support.

What is Octomind?

Repository: https://github.com/muvon/octomind
Website: https://muvon.io
License: Apache 2.0

Features:

  • 7 LLM providers including native Ollama support (ollama:model-name)
  • MCP Protocol Support - Full Model Context Protocol implementation
  • Session-based workflow - Persistent context across conversations
  • Plan-first architecture - Multi-step planning with validation
  • Built-in tools - Shell, file editing, code search (ast_grep), web browsing
  • Native binary - Written in Rust, no Node.js required

Native Ollama Support

Octomind already has native Ollama provider support:

# ~/.config/octomind/config.toml
model = "ollama:qwen3-coder"

[providers.ollama]
# Uses OLLAMA_API_KEY or defaults to localhost

Environment variable: OLLAMA_API_KEY (already documented in Octomind)

Why Add Octomind?

| Feature           | Octomind      | Other Tools     |
|-------------------|---------------|-----------------|
| Provider-agnostic | 7 providers   | Single provider |
| MCP native        | Built-in      | Varies          |
| Runtime           | Native binary | Node.js         |
| Sessions          | Persistent    | Varies          |
| Cost tracking     | Built-in      | Varies          |

Proposed Implementation

cmd/launch/octomind.go

package launch

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"github.com/ollama/ollama/cmd/internal/fileutil"
	"github.com/ollama/ollama/envconfig"
)

type Octomind struct{}

func (o *Octomind) String() string { return "Octomind" }

func (o *Octomind) Run(model string, args []string) error {
	if _, err := exec.LookPath("octomind"); err != nil {
		return fmt.Errorf("octomind is not installed; install it from https://muvon.io")
	}
	cmd := exec.Command("octomind", "session")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Env = append(os.Environ(),
		"OLLAMA_API_KEY=ollama",
	)
	return cmd.Run()
}

func (o *Octomind) Paths() []string {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil
	}
	p := filepath.Join(home, ".config", "octomind", "config.toml")
	if _, err := os.Stat(p); err == nil {
		return []string{p}
	}
	return nil
}

func (o *Octomind) Edit(models []string) error {
	if len(models) == 0 {
		return nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	configDir := filepath.Join(home, ".config", "octomind")
	configPath := filepath.Join(configDir, "config.toml")
	if err := os.MkdirAll(configDir, 0o755); err != nil {
		return err
	}

	// Octomind uses TOML with model = "provider:model" format
	config := fmt.Sprintf(`# Generated by ollama launch
model = "ollama:%s"

[providers.ollama]
base_url = "%s/v1"
`, models[0], envconfig.Host().String())

	return fileutil.WriteWithBackup(configPath, []byte(config))
}

func (o *Octomind) Models() []string {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil
	}
	configPath := filepath.Join(home, ".config", "octomind", "config.toml")
	data, err := os.ReadFile(configPath)
	if err != nil {
		return nil
	}
	// Parse model = "ollama:model_name" with a simple line scan
	// rather than a full TOML parse.
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, `model = "ollama:`) {
			model := strings.TrimPrefix(line, `model = "ollama:`)
			model = strings.TrimSuffix(model, `"`)
			return []string{model}
		}
	}
	return nil
}

Registry Entry (cmd/launch/registry.go)

Add to integrationSpecs:

{
	Name: "octomind",
	Runner: &Octomind{},
	Description: "Session-based AI development assistant with MCP support",
	Install: IntegrationInstallSpec{
		CheckInstalled: func() bool {
			_, err := exec.LookPath("octomind")
			return err == nil
		},
		URL: "https://muvon.io",
	},
},

User Experience

# Launch Octomind with Ollama
ollama launch octomind

# With specific model
ollama launch octomind --model qwen3-coder

# Configure without launching
ollama launch octomind --config

Installation

Users install Octomind via:

curl -fsSL https://raw.githubusercontent.com/muvon/octomind/master/install.sh | bash
# or
cargo install octomind

Ready for PR

I have the implementation ready and will submit a PR after discussion.


Octomind already listed in:

  • awesome-mcp-clients
  • awesome-ai-devtools
  • awesome-ai-coding-tools
  • awesome-cli-rust
GiteaMirror added the feature request label 2026-04-29 10:15:05 -05:00