[PR #15630] feat(server): add inference webhook hooks for input/output interception #77535

Open
opened 2026-05-05 10:12:42 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15630
Author: @jrideout
Created: 4/16/2026
Status: 🔄 Open

Base: main ← Head: feat/inference-webhooks


📝 Commits (2)

  • f0b92ad feat(server): add inference webhook hooks for input/output interception
  • 62d2931 docs(cmd): list OLLAMA_HOOK_* env vars in serve --help

📊 Changes

8 files changed (+2550 additions, -9 deletions)


📝 cmd/cmd.go (+5 -0)
📝 docs/docs.json (+1 -0)
📝 docs/faq.mdx (+9 -0)
➕ docs/inference-webhooks.mdx (+532 -0)
📝 envconfig/config.go (+46 -0)
➕ server/inference_hook.go (+896 -0)
➕ server/inference_hook_test.go (+1022 -0)
📝 server/routes.go (+39 -9)

📄 Description

TL;DR — Optional HTTP webhooks fired before and after inference so external guardrail, policy, audit, and human-in-the-loop services can inspect, rewrite, refuse, or hold each request and response. Off by default; no middleware is registered when unconfigured (zero overhead).

Motivation

Ollama has become production inference infrastructure for teams that cannot deploy ollama serve without some combination of:

  • Prompt-injection / jailbreak defense (direct and indirect — tool results replayed into a new turn)
  • PII / secret scanning on both user input and model output
  • Policy enforcement — allow-lists, deny-lists, per-tenant tool restrictions
  • Audit logging — structured capture of inputs, tool calls, and outputs for compliance
  • Human-in-the-loop approval for high-risk tool calls

Today every team that needs these builds a reverse proxy in front of Ollama. That works, but each proxy re-implements the OpenAI / Ollama / Anthropic body-shape conversion that Ollama already performs one layer down, then reverses it to hand Ollama the original body. The result is duplicated code, duplicated bugs (tool-call-argument serialization in particular — see #12413), and an integration surface that varies by vendor.

This PR moves the extension point into Ollama: the hook runs after Ollama's own protocol-conversion middleware, so the webhook always receives a normalized api.ChatRequest / api.GenerateRequest regardless of whether the caller hit /api/chat, /v1/chat/completions, /v1/messages, /v1/responses, /v1/completions, or /api/generate. One wire contract, six routes.
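
To make that concrete, a hypothetical illustration (not output captured from this branch): two client calls with the same content reach a configured webhook in one normalized shape, differing only in the route field.

```
# OpenAI-style call
POST /v1/chat/completions
{"model": "llama3", "messages": [{"role": "user", "content": "hi"}]}

# native call
POST /api/chat
{"model": "llama3", "messages": [{"role": "user", "content": "hi"}]}

# both reach the webhook as the same normalized payload,
# differing only in the "route" field:
{"schema_version": 1, "route": "...", "model": "llama3",
 "messages": [{"role": "user", "content": "hi"}], ...}
```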

Landscape

Other inference servers have converged on in-process hooks for this exact reason:

| Server | Hook mechanism |
| --- | --- |
| vLLM | --middleware (ASGI) + --tool-parser-plugin |
| llama.cpp server | request/response filters via custom server wrappers |
| TGI (HuggingFace) | --trust-remote-code-style preprocessor plugins |
| Triton Inference Server | pre/post model ensembles |
| Ollama | — (this PR) |

Guardrail vendors (HiddenLayer, Lakera, Protect AI, Prompt Security, LLM Guard, Nemo Guardrails, IBM Granite Guardian, AWS Bedrock Guardrails) all expose an HTTP-callable API. With this PR each of those becomes a one-env-var integration; without it, each becomes a reverse-proxy project.

The permission verb set (allow / deny / ask) and the user_message / agent_message payload fields are deliberately aligned with Cursor's hooks contract so existing Cursor-style hook scripts port to Ollama without rewriting. The modify permission and the per-shape mutation fields (messages / output_text / output_thinking / tool_calls) are Ollama-specific extensions.

Design

Vendor-neutral and intentionally minimal:

  • Two URLs, pre and post. Either, both, or neither. URLs are validated at startup — only http:// / https:// schemes with a valid host are accepted; misconfiguration fails ollama serve fast rather than per-request.
  • Four permission verbs: allow, deny, ask, modify. Standardized across pre and post so client code branches the same way regardless of where in the lifecycle the verdict fired.
    • ask returns HTTP 403 with the structured hook envelope so a client can drive a human-in-the-loop confirmation flow. Content is never released on ask. Two documented patterns (client-owned retry via options.hook_approval, webhook-owned fingerprint store) cover interactive-user and operator-console use cases without making Ollama stateful.
    • Unknown permissions are treated as deny. A misbehaving hook refuses; it doesn't silently allow.
  • One wire contract, six routes: /api/chat, /api/generate, /v1/chat/completions, /v1/completions, /v1/responses, /v1/messages. Messages normalize to OpenAI chat shape before the hook sees them.
  • Schema-versioned payloads: every HookRequest carries schema_version: 1 and a User-Agent: ollama-hooks/1 header so hook servers can branch on version and ignore unknown fields — forward additions don't break existing hooks.
  • Zero-dep: stdlib + gin + uuid (both already vendored).
  • Zero-overhead when disabled: withInferenceHook returns the original handler chain untouched when no pre URL is set. No middleware registration, no per-request branch in the hot path (see the sketch after this list).
  • Fail-closed by default: OLLAMA_HOOK_ON_ERROR=deny. Configurable to allow for deployments that prefer availability. Fail-open events are logged Warn once per channel (pre / post) then demoted to Debug so silent bypasses surface without flooding logs.
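
A rough sketch of the disabled path described above (hypothetical code; the actual withInferenceHook in server/routes.go may be shaped differently):

```go
package server

import "github.com/gin-gonic/gin"

// withInferenceHook is a hypothetical sketch of the guarantee described
// above, not the PR's actual code: with no pre-hook URL configured, the
// original handler chain is returned untouched, so nothing extra runs
// per request.
func withInferenceHook(preURL string, chain ...gin.HandlerFunc) []gin.HandlerFunc {
	if preURL == "" {
		return chain // disabled: zero registration, zero hot-path branch
	}
	pre := func(c *gin.Context) {
		// call the pre-inference webhook here; on deny/ask, write the
		// structured envelope and c.Abort() so downstream handlers never run
		c.Next()
	}
	return append([]gin.HandlerFunc{pre}, chain...)
}
```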

Wire contract

Request (Ollama → webhook):

```
POST <url>
User-Agent: ollama-hooks/1
X-Ollama-Hook-Event: pre_inference | post_inference
X-Ollama-Request-Id: <uuid>

{
  "schema_version": 1,
  "event":      "pre_inference" | "post_inference",
  "request_id": "uuid",
  "route":      "/api/chat",
  "model":      "llama3",
  "messages":   [{"role":"user","content":"..."}],
  "tools":      [{"type":"function","function":{...}}],
  "options":    {"temperature": 0.7, ...},
  "output_text":     "...",    // post only
  "output_thinking": "...",    // post only
  "tool_calls":      [...]     // post only
}
```

Response (webhook → Ollama):

```
{
  "permission":    "allow" | "deny" | "ask" | "modify",
  "user_message":  "shown to the end user when present",
  "agent_message": "fed back to an upstream agent loop when present",
  "messages":        [...],   // pre, modify: new request messages
  "output_text":     "...",   // post, modify: new assistant content
  "output_thinking": "...",   // post, modify: new chain-of-thought
  "tool_calls":      [...]    // post, modify: new tool calls
}
```

The request id Ollama generates is reflected back on the response it returns to the client (X-Ollama-Request-Id response header) so audit logs on both ends correlate.
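
A minimal hook server that speaks this contract might look like the following. This is a hedged sketch in the spirit of the regex-deny example the docs ship; the field names come from the wire contract above, while the port, path, and pattern are arbitrary:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"regexp"
)

// Shapes follow the wire contract above; only the fields this hook needs.
type hookRequest struct {
	Event    string `json:"event"`
	Model    string `json:"model"`
	Messages []struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"messages"`
	OutputText string `json:"output_text"`
}

type hookResponse struct {
	Permission  string `json:"permission"`
	UserMessage string `json:"user_message,omitempty"`
}

// Arbitrary example pattern: block private key material in either direction.
var denyPattern = regexp.MustCompile(`(?i)BEGIN (RSA|OPENSSH) PRIVATE KEY`)

func main() {
	http.HandleFunc("/hook", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		var req hookRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			// malformed body: refuse, mirroring the fail-closed posture
			json.NewEncoder(w).Encode(hookResponse{Permission: "deny"})
			return
		}
		for _, m := range req.Messages { // pre_inference: scan the input
			if denyPattern.MatchString(m.Content) {
				json.NewEncoder(w).Encode(hookResponse{
					Permission:  "deny",
					UserMessage: "request blocked: private key material detected",
				})
				return
			}
		}
		if denyPattern.MatchString(req.OutputText) { // post_inference: scan the output
			json.NewEncoder(w).Encode(hookResponse{
				Permission:  "deny",
				UserMessage: "response blocked: private key material detected",
			})
			return
		}
		json.NewEncoder(w).Encode(hookResponse{Permission: "allow"})
	})
	log.Fatal(http.ListenAndServe(":8088", nil))
}
```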

Configuration

```
OLLAMA_HOOK_PRE_INFERENCE_URL    webhook called before inference
OLLAMA_HOOK_POST_INFERENCE_URL   webhook called after response assembled
OLLAMA_HOOK_TIMEOUT              per-call timeout (default 5s)
OLLAMA_HOOK_ON_ERROR             deny (default, fail-closed) | allow
OLLAMA_HOOK_HEADERS              "Name:Value, Name:Value" (e.g. API keys)
```

Listed in envconfig.AsMap() so ollama serve --help and the admin UI discover them automatically.
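
For example, pointing both hooks at a server like the sketch above (all values hypothetical):

```
OLLAMA_HOOK_PRE_INFERENCE_URL=http://localhost:8088/hook \
OLLAMA_HOOK_POST_INFERENCE_URL=http://localhost:8088/hook \
OLLAMA_HOOK_ON_ERROR=deny \
OLLAMA_HOOK_HEADERS="Authorization:Bearer example-token" \
ollama serve
```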

Scope

In scope:

  • Pre-inference on all six chat/generate routes (after format-conversion middleware so the hook sees normalized bodies)
  • Post-inference on the non-streaming path of ChatHandler and GenerateHandler
  • Four standardized permission verbs + schema-versioned payloads + Cursor-aligned response fields
  • Options / tools / tool_calls / assistant thinking all included in the payload and round-tripped through modify

Not in scope (intentional):

  • Post-inference on streaming responses. Tokens have already been flushed to the client by the time the response is assembled, so there's no meaningful intervention point. Ollama emits a one-shot Warn log the first time a streamed request arrives with a post-hook configured so operators can detect the misconfiguration. Docs recommend stream: false for callers that need post-inference guardrails. I have a design sketch for firing at stream-end and marking the terminal chunk with done_reason: "content_filter", but that's a separate PR.
  • Embedding / image-generation / transcription routes. These have different risk profiles; adding them is mechanical but not useful without the risk model to justify it.
  • Multimodal images propagation. Not carried in the v1 wire format to keep payloads bounded. A modify that round-trips messages drops images; documented.
  • Ollama-native pause/resume for ask. Would require persistent request state and duplicates what a proper approval service already does. Both documented HITL patterns keep that state in the webhook or the client, where it belongs.

API surface changes

None to the client-facing API. Three new response shapes appear only when the hook decides to use them. All share a common envelope — top-level error string for legacy clients, nested hook object for hook-aware clients:

```
{
  "error": "<human-readable summary, always present>",
  "hook": {
    "permission":    "deny" | "ask" | "unavailable" | "modify",
    "user_message":  "<optional>",
    "agent_message": "<optional>"
  }
}
```
  • HTTP 400 — pre or post, hook.permission = "deny" (or unknown permission from a misbehaving hook)
  • HTTP 403 — pre or post, hook.permission = "ask"
  • HTTP 413 — inbound request body exceeds 32 MiB (Ollama-side limit, no hook envelope)
  • HTTP 502 — hook returned a modify shape Ollama cannot apply (e.g. multi-turn messages on /api/generate)
  • HTTP 503 — OLLAMA_HOOK_ON_ERROR=deny and the hook is unreachable, hook.permission = "unavailable" (distinct from a hook-returned deny so infra failures don't collide with policy decisions)

Handlers that already treat any non-2xx as a failure need no changes to be correct; handlers that want to surface the ask flow branch on hook.permission in the response body.
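
A client that wants to surface these verdicts might branch like this. The envelope fields come from the contract above; the options.hook_approval retry is the client-owned HITL pattern, and the token semantics noted in the comments are an assumption, not part of this PR:

```go
package hookclient

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// hookEnvelope mirrors the common error envelope above.
type hookEnvelope struct {
	Error string `json:"error"`
	Hook  *struct {
		Permission   string `json:"permission"`
		UserMessage  string `json:"user_message"`
		AgentMessage string `json:"agent_message"`
	} `json:"hook"`
}

func chat(body []byte) error {
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 300 {
		return nil // allowed (possibly modified): consume the normal response
	}
	var env hookEnvelope
	if json.NewDecoder(resp.Body).Decode(&env) != nil || env.Hook == nil {
		return fmt.Errorf("non-hook failure: %s", resp.Status) // legacy error path
	}
	switch env.Hook.Permission {
	case "ask":
		// 403: drive a confirmation flow, then retry with an approval token
		// in options.hook_approval. Token semantics are up to the webhook;
		// this is an assumption, not the PR's spec.
		fmt.Println("needs approval:", env.Hook.UserMessage)
	case "deny":
		fmt.Println("refused by policy:", env.Hook.UserMessage) // 400
	case "unavailable":
		fmt.Println("hook infrastructure down (fail-closed):", env.Error) // 503
	}
	return fmt.Errorf("blocked: %s", env.Error)
}
```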

Security / hardening

  • URLs validated at startup (http(s) only, valid host)
  • 32 MiB cap on inbound request bodies (413 before the hook fires); 4 MiB cap on hook response bodies
  • Hook-provided user_message / agent_message sanitized (control chars stripped, 256-char cap) before reflection into the HTTP body — prevents response splitting and log injection (see the sketch after this list)
  • Userinfo redacted from URLs in startup logs
  • X-Ollama-Request-Id reflected as a response header for cross-end audit correlation
  • Fail-open events logged Warn once per channel, then demoted to Debug — silent bypasses surface without flooding logs
  • Structured error envelope lets audit pipelines branch on machine-readable hook.permission without regex-parsing the error string
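
The message sanitization described in the list above plausibly reduces to something of this shape (a sketch, not the PR's sanitizeReason):

```go
package server

import "strings"

const maxReasonLen = 256 // cap from the hardening list above

// Drop control characters (blocking header/log injection) and cap length
// before reflecting hook-provided text into an HTTP response body. Sketch
// only; note the byte-level truncation here can split a multi-byte rune,
// which a real implementation would want to handle.
func sanitizeReason(s string) string {
	s = strings.Map(func(r rune) rune {
		if r < 0x20 || r == 0x7f { // C0 controls and DEL
			return -1
		}
		return r
	}, s)
	if len(s) > maxReasonLen {
		s = s[:maxReasonLen]
	}
	return s
}
```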

Tests

server/inference_hook_test.go covers:

  • allow / deny / ask / modify on pre and post
  • Unknown permission denies (fail-safe default)
  • Request body > 32 MiB returns 413 before the hook is called
  • Fail-closed post-hook returns 503 with hook.permission = "unavailable" (distinct from hook-returned deny)
  • /api/generate modify shape enforcement — multi-turn / assistant / tool / multi-user returns 502 rather than silently truncating
  • sanitizeReason strips control chars and caps length; redactURL hides userinfo
  • Tool-call and thinking round-trip through modify
  • Outbound contract: schema_version, User-Agent, X-Ollama-Request-Id on hook calls; request-id reflected on client response
  • Post-only deployments still correlate (post-hook generates its own request id when no pre fired)
  • Full pre-modify → post-modify integration path
  • Headers read from OLLAMA_HOOK_HEADERS; zero-init when no URLs are set

Downstream handler is asserted not to run on deny / ask / unknown-permission / body-too-large / shape-unsupported / fail-closed.
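
For hook authors, a self-contained sketch of exercising an endpoint against a wire-contract payload (the inline hookHandler is a stand-in for the regex-deny sketch earlier, not code from this PR):

```go
package main

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// hookHandler: inline stand-in for the regex-deny hook sketched earlier.
func hookHandler(w http.ResponseWriter, r *http.Request) {
	var req struct {
		Messages []struct {
			Content string `json:"content"`
		} `json:"messages"`
	}
	_ = json.NewDecoder(r.Body).Decode(&req)
	for _, m := range req.Messages {
		if strings.Contains(m.Content, "PRIVATE KEY") {
			json.NewEncoder(w).Encode(map[string]string{"permission": "deny"})
			return
		}
	}
	json.NewEncoder(w).Encode(map[string]string{"permission": "allow"})
}

func TestDenyOnPrivateKey(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(hookHandler))
	defer srv.Close()

	// Payload follows the wire contract above.
	payload := `{
	  "schema_version": 1,
	  "event": "pre_inference",
	  "route": "/api/chat",
	  "model": "llama3",
	  "messages": [{"role": "user", "content": "-----BEGIN RSA PRIVATE KEY-----"}]
	}`
	resp, err := http.Post(srv.URL, "application/json", strings.NewReader(payload))
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	var verdict struct {
		Permission string `json:"permission"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&verdict); err != nil {
		t.Fatal(err)
	}
	if verdict.Permission != "deny" {
		t.Fatalf("want deny, got %q", verdict.Permission)
	}
}
```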

Docs

docs/inference-webhooks.mdx — full wire protocol, semantics, streaming behavior, HITL patterns (client-owned + webhook-owned), cost-of-denial note, two runnable examples (always-allow, regex-deny). Linked from the FAQ and docs.json navigation.

Implementation size

```
docs/docs.json                |    1 +
docs/faq.mdx                  |    9 +
docs/inference-webhooks.mdx   |  532 +++++++++++++++++
envconfig/config.go           |   46 ++
server/inference_hook.go      |  896 ++++++++++++++++++++++++++++++++++++
server/inference_hook_test.go | 1022 +++++++++++++++++++++++++++++++++++++++++
server/routes.go              |   48 +-
```

All additions are in new files except envconfig/config.go (env var registration) and server/routes.go (middleware wiring + a small hookedChain helper that replaces a nested append([]gin.HandlerFunc{...}, ...) pattern on the six inference routes).

Use cases this unlocks without further Ollama changes

  • Prompt-injection and PII guardrails (HiddenLayer, Lakera, LLM Guard, Prompt Security, etc.)
  • Policy enforcement (per-tenant allow-list of tools, per-role model access)
  • Human-in-the-loop approval for destructive tool calls
  • Structured audit logging (SIEM ingestion, compliance)
  • Request rewriting (auto-translation, system-prompt injection, canary-string insertion for data-exfil detection)
  • Content moderation before release

Every one of these exists today as a reverse proxy in front of ollama serve. This PR collapses them to an env-var plus a webhook.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

Reference: github-starred/ollama#77535