[PR #13576] feat: Add support for remote providers and OpenAI integration #76571

Open
opened 2026-05-05 09:12:21 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13576
Author: @x22x22
Created: 2025-12-27
Status: 🔄 Open

Base: main ← Head: feat/remote-bridge-cli


📝 Commits (6)

  • 60887c1 feat: Add support for remote providers and OpenAI integration
  • 5d322a4 fix: Make ResponseFormat field optional in ChatCompletionRequest
  • 3cddaca fix: Use command context for remote provider operations
  • 329ab55 fix: Implement defensive copy for headers in NewRemoteClient
  • c80e3b9 fix: Enhance NewRemoteClient with custom HTTP client and timeout settings
  • ac76c1a fix: Handle response body read errors in StreamChatCompletion

📊 Changes

16 files changed (+2223 additions, -71 deletions)

📝 api/client.go (+23 -0)
📝 api/types.go (+59 -25)
📝 cmd/cmd.go (+391 -0)
📝 envconfig/config.go (+60 -0)
📝 middleware/openai.go (+7 -0)
📝 openai/openai.go (+47 -21)
➕ openai/remote_client.go (+196 -0)
➕ openai/remote_client_test.go (+126 -0)
➕ openai/remote_convert.go (+346 -0)
📝 openai/responses.go (+42 -4)
➕ openai/toolcall_args.go (+38 -0)
➕ remoteproviders/registry.go (+289 -0)
📝 server/create.go (+36 -1)
📝 server/routes.go (+467 -14)
➕ server/routes_remote_providers.go (+88 -0)
📝 types/model/config.go (+8 -6)

📄 Description

Summary

This PR adds a minimal OpenAI-compatible remote bridge so that /v1/responses can proxy to multiple chat/completions providers. It also introduces CLI helpers for managing remote providers and remote models, and preserves the expected OpenAI Codex CLI behavior when using remote providers (Working indicator and usage reporting). In the remainder of this document, "Codex" refers to the OpenAI Codex CLI.

Features

  • Multi-channel OpenAI-compatible remotes stored in a local registry.
  • /v1/responses can proxy to remote chat/completions models via remote_provider=openai and remote_channel.
  • Streaming usage reporting for remote /v1/responses by enabling stream_options.include_usage (see the upstream-request sketch after this list).
  • Codex-friendly streaming behavior (no text deltas before tool calls).
  • CLI for managing remote providers and remote models (CRUD).
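
For context, the upstream call behind the proxying and usage-reporting bullets is a standard OpenAI-style chat/completions request with usage reporting enabled. The sketch below is illustrative only (the exact payload shaping is internal to this PR); the endpoint path and Authorization header follow the usual OpenAI-compatible conventions, and the base URL and model values come from the usage example further down:

curl https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -d '{
    "model": "glm-4.6",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": true,
    "stream_options": {"include_usage": true}
  }'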

Usage

1) Add a remote provider (channel)

ollama remote add aliyun-dashscope \
  --type openai \
  --base-url https://dashscope.aliyuncs.com/compatible-mode/v1 \
  --api-key sk-... \
  --default-model glm-4.6
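
To confirm the channel was registered, the list/show subcommands introduced by this PR can be used (their output format is not documented in the PR text):

ollama remote list
ollama remote show aliyun-dashscope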

2) Create a remote model (CLI)

ollama remote model add glm-4.6-remote \
  --channel aliyun-dashscope \
  --from glm-4.6
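
As with channels, the model subcommands can verify the result (again, output format unspecified):

ollama remote model list
ollama remote model show glm-4.6-remote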

3) Create a remote model (API)

curl http://localhost:11434/api/create -d '{
  "model": "glm-4.6-remote",
  "from": "glm-4.6",
  "remote_provider": "openai",
  "remote_channel": "aliyun-dashscope"
}'
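
The created model can then be inspected through Ollama's existing show endpoint. Whether the remote_provider and remote_channel fields are echoed back depends on this PR's implementation, so treat the response shape as unspecified:

curl http://localhost:11434/api/show -d '{
  "model": "glm-4.6-remote"
}'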

4) Call via /v1/responses

curl -N http://localhost:11434/v1/responses -d '{
  "model": "glm-4.6-remote",
  "input": "hello",
  "stream": true
}'
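
With stream: true, the endpoint emits Responses-style server-sent events. The sample below is a sketch assuming the standard OpenAI Responses event names (response.output_text.delta, response.completed) and shows where the usage block enabled by stream_options.include_usage would arrive; the exact fields emitted by this bridge may differ:

event: response.output_text.delta
data: {"type":"response.output_text.delta","delta":"Hello"}

event: response.completed
data: {"type":"response.completed","response":{"usage":{"input_tokens":1,"output_tokens":9,"total_tokens":10}}}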

5) Optional streaming policy (Codex compatibility)

export OLLAMA_RESPONSES_STREAM_TEXT=strict
# or
export OLLAMA_RESPONSES_STREAM_TEXT=prefix
export OLLAMA_RESPONSES_STREAM_TEXT_PREFIX_CHARS=120
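
The PR does not define the two policies beyond their names. A plausible reading (an assumption, not confirmed by the PR text) is that strict suppresses all text deltas until tool-call intent is resolved, while prefix lets through at most OLLAMA_RESPONSES_STREAM_TEXT_PREFIX_CHARS characters of text first. Given the envconfig changes, these variables are presumably read at server startup:

# assumed semantics, inferred from the variable names only
OLLAMA_RESPONSES_STREAM_TEXT=strict ollama serve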

CLI Help (New Subcommands)

ollama remote list
ollama remote show ID
ollama remote add [ID] ...
ollama remote update [ID] ...
ollama remote rm ID

ollama remote model list
ollama remote model show MODEL
ollama remote model add MODEL ...
ollama remote model update MODEL ...
ollama remote model rm MODEL

Tests

Executed (compile-only; the -run TestDoesNotExist pattern matches no tests, so each package is built but nothing is executed):

  • go test ./cmd -run TestDoesNotExist
  • go test ./openai -run TestDoesNotExist
  • go test ./middleware -run TestDoesNotExist
  • go test ./server -run TestDoesNotExist

Note: The environment reports a linker warning about -lobjc on macOS; no test failures observed.

Behavior / Result

  • The Working indicator and usage tokens behave as expected for remote /v1/responses streams.
  • Remote providers and models can be created/updated/removed via CLI or API.

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
