[PR #10734] fix: OpenAI model endpoint (/v1/models/{model}) fails for models with slashes (/) in their name #60045

Open
opened 2026-04-29 14:57:47 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/10734
Author: @SplittyDev
Created: 5/16/2025
Status: 🔄 Open

Base: main ← Head: fix/openai-api-model-retrieval


📝 Commits (2)

  • 8f3521d Use raw path in OpenAI api
  • d41c360 Add test case for model retrieval with slashes

📊 Changes

2 files changed (+30 additions, -6 deletions)


📝 openai/openai_test.go (+29 -6)
📝 server/routes.go (+1 -0)

📄 Description

Supersedes #10147
Fixes #10139


High-Level Overview

As described in #10139, there is a bug - or at the very least unintended behavior - in the current implementation of the model retrieval endpoint of the OpenAI-compatible API.

It's not possible to retrieve models with slashes in their name, because Gin treats slashes as route separators even when they are escaped as %2F.

The issue lies in Gin's default URL path resolution: %2F is decoded to / before the route is matched, effectively making it impossible to use this endpoint with models that have a / in their name, which is quite common (for example, any model pulled from hf.co).
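
The decoding can be seen directly in Go's net/url, which Gin builds on (a small illustrative snippet, not project code):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, _ := url.Parse("http://localhost:11434/v1/models/hf.co%2Ffoo")
	fmt.Println(u.Path)    // "/v1/models/hf.co/foo"   (escaping already decoded)
	fmt.Println(u.RawPath) // "/v1/models/hf.co%2Ffoo" (escaping preserved)
	// Gin matches routes against u.Path by default, so the escaped form
	// becomes indistinguishable from a genuine two-segment path.
}
```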

Without UseRawPath, Gin can't tell the difference between /v1/models/hf.co/foo and /v1/models/hf.co%2Ffoo, which breaks the API. The fix is simple: enabling UseRawPath on the OpenAI router handles this case correctly. As far as I can tell, there are no unintended consequences of using UseRawPath on the router, but I'm neither a Go developer nor have I ever used Gin, so it would be great if someone could double-check that. I've also added a test case to make sure this doesn't break again.
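
For illustration, here's a minimal standalone sketch of the idea (not the actual server/routes.go change; the handler and address are made up):

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.New()
	// The one-line fix: match routes against the raw (still-escaped) path
	// so %2F survives routing. Gin then unescapes path parameters, since
	// UnescapePathValues defaults to true.
	r.UseRawPath = true
	r.GET("/v1/models/:model", func(c *gin.Context) {
		// GET /v1/models/hf.co%2Ffoo now yields "hf.co/foo" here
		// instead of falling through to a 404.
		c.JSON(http.StatusOK, gin.H{"id": c.Param("model")})
	})
	r.Run(":11434")
}
```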

For what it's worth, I don't think this is a particularly dangerous change. As far as I can tell from both the Gin and Go docs, it just affects the way %2F is handled as part of URL paths, and given that ollama needs to handle it correctly in order not to break the API, I don't see much of a way around it, unless there are other options I don't know about.
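
The added test is along these lines (an illustrative reconstruction, not the exact test from the commit):

```go
package main

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/gin-gonic/gin"
)

// A model whose name contains a slash must be retrievable via its
// %2F-escaped form instead of returning 404.
func TestRetrieveModelWithSlash(t *testing.T) {
	r := gin.New()
	r.UseRawPath = true
	r.GET("/v1/models/:model", func(c *gin.Context) {
		c.String(http.StatusOK, c.Param("model"))
	})

	w := httptest.NewRecorder()
	req := httptest.NewRequest(http.MethodGet, "/v1/models/hf.co%2Ffoo", nil)
	r.ServeHTTP(w, req)

	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", w.Code)
	}
	if got := w.Body.String(); got != "hf.co/foo" {
		t.Fatalf("expected param \"hf.co/foo\", got %q", got)
	}
}
```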

Why I think this should be fixed

In #10147 I was asked about the use case for retrieving a model this way:

Also wondering what you are using this API for? Doesn't seem to provide much information about the models.

Even though I don't think it's really relevant to this issue, here's my use case, in case it helps anyone understand why the API needs to work correctly and not 404 when called with valid parameters:

The AI chat app my company is developing supports arbitrary inference providers (any OpenAI-compatible API), with some special handling for providers that have flawed implementations (such as ollama) or that return additional information on top (such as OpenRouter).

Generally, APIs that are supersets of the base OpenAI API are perfectly fine, because I can do feature detection based on the available keys. But APIs that are subsets, or outright incompatible, are an obvious problem.

When a user connects an API endpoint (such as ollama, OpenRouter, OpenAI, etc.), I do the following:

  • Call GET /v1/models to retrieve the model list.
    Besides telling the app which models are available so it can make them selectable by the user, this also serves as a sanity check to make sure that this is indeed an OpenAI-compatible API.
  • For every model, call /v1/models/{model} in order to get model metadata.
    You are correct that ollama doesn't return particularly useful metadata there, but some providers do return valuable information, such as input and output modalities (e.g. in the case of OpenRouter), displayName, description, etc.

For that reason, retrieving the model list and each model individually gives me the most complete picture of the models and capabilities a provider offers.
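
As a concrete illustration of that flow, here's a minimal Go sketch (the base URL and response shape follow the OpenAI list-models format; this is not code from our app):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	base := "http://localhost:11434" // assumed local ollama instance

	// Step 1: list models, which doubles as the compatibility sanity check.
	resp, err := http.Get(base + "/v1/models")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var list struct {
		Data []struct {
			ID string `json:"id"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		panic(err)
	}

	// Step 2: fetch per-model metadata. PathEscape turns "hf.co/foo" into
	// "hf.co%2Ffoo", which is exactly the request that 404s without the fix.
	for _, m := range list.Data {
		detail, err := http.Get(base + "/v1/models/" + url.PathEscape(m.ID))
		if err != nil {
			panic(err)
		}
		fmt.Println(m.ID, "->", detail.Status)
		detail.Body.Close()
	}
}
```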

Handholding Ollama

In order to support ollama, we already have to do quite a bit of additional work, such as:

  • Detecting whether the provider is ollama and parsing its version via GET /api/version
  • Calling POST /api/show for each model to parse capabilities
  • For ollama < 0.6.4, doing feature detection via model and projector information (e.g. detecting the presence of CLIP or checking the gemma3 vision block count) in order to find out whether the model supports image input
  • For ollama >= 0.6.4, using the new capabilities field directly, in addition to the aforementioned brute-force input modality detection, because sadly, in my testing so far, the capabilities field isn't always populated correctly, or even at all (see the sketch after this list)
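
A sketch of that capability probe (the capabilities field name matches what newer ollama versions return from /api/show, but treat the details here as assumptions):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ask ollama for model details; /api/show takes the model name as JSON.
	body, _ := json.Marshal(map[string]string{"model": "llama3.2-vision"})
	resp, err := http.Post("http://localhost:11434/api/show",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var info struct {
		// Present in ollama >= 0.6.4; may be missing or incomplete, so the
		// older model/projector heuristics remain as a fallback.
		Capabilities []string `json:"capabilities"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}

	for _, c := range info.Capabilities {
		if c == "vision" {
			fmt.Println("model reports image input support")
		}
	}
}
```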

Ollama requires quite a bit of special handling in order to get the same information we usually get much more easily from other providers. For now our custom handling is here to stay, since we can't expect users to readily upgrade even if these things are fixed. Nevertheless, I think there's value in bringing ollama's OpenAI-compatible API closer to spec, in order to at least improve the situation going forward.

TL;DR

Model retrieval is broken in many common cases. Regardless of my personal use case and of whether the information returned by this API is generally useful: it's currently broken, and I think it should be fixed.


I accidentally deleted the branch associated with the old PR, which is why I'm recreating it. I've updated the description to reflect the previous discussion, but feel free to check out #10147 for more context.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
