[GH-ISSUE #10139] OpenAI API: Models with slashes not retrievable #32413

Closed
opened 2026-04-22 13:38:25 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @SplittyDev on GitHub (Apr 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10139

What is the issue?

In the `/v1/models/{model}` endpoint, models with a `/` in their name (which is fairly common when downloading from HuggingFace or other sources) are not retrievable individually.

As expected, the model shows up in the model list:

`GET /v1/models`:

```json
{
    "object": "list",
    "data": [
        {
            "id": "aaa/llava:13b",
            "object": "model",
            "created": 1743869353,
            "owned_by": "aaa"
        },
        {
            "id": "llava:13b",
            "object": "model",
            "created": 1743863049,
            "owned_by": "library"
        }
    ]
}
```

But when trying to retrieve the model individually, the API always returns `404`.

`GET /v1/models/aaa/llava:13b` (the naive approach):

```plain
404 page not found
```

`GET /v1/models/aaa%2Fllava:13b` (the proper URL-encoded approach):

```plain
404 page not found
```

`GET /v1/models/aaa%2Fllava%3A13b` (the just-to-be-safe, unnecessarily URL-encoded approach):

```plain
404 page not found
```
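For reference, the three request paths above correspond to three escaping strategies. A minimal sketch using Python's `urllib.parse.quote` (the model id `aaa/llava:13b` comes from the issue; the choice of `safe` arguments is just an illustration of each strategy):

```python
from urllib.parse import quote

model_id = "aaa/llava:13b"

# Naive: no escaping at all -- the "/" is treated as a path separator.
naive = model_id

# Proper: percent-encode the reserved "/" but leave ":" alone.
proper = quote(model_id, safe=":")

# Over-cautious: percent-encode every reserved character, including ":".
overcautious = quote(model_id, safe="")

print(naive)         # aaa/llava:13b
print(proper)        # aaa%2Fllava:13b
print(overcautious)  # aaa%2Fllava%3A13b
```

All three forms produce the same `404` from the server, which suggests the problem is on the routing side rather than in how the client escapes the path.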

Relevant links to the OpenAI API spec:

- `GET /v1/models` (https://platform.openai.com/docs/api-reference/models/list)
- `GET /v1/models/{model}` (https://platform.openai.com/docs/api-reference/models/retrieve)

Relevant log output

There's nothing in the logs.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.6.4

GiteaMirror added the bug label 2026-04-22 13:38:25 -05:00
Author
Owner

@chigkim commented on GitHub (Apr 5, 2025):

What are you trying to do? If you want to chat with a model, you have to use the chat completion API.
`/v1/models/aaa/llava:13b` is not part of the OpenAI API.

Author
Owner

@SplittyDev commented on GitHub (Apr 6, 2025):

@chigkim It most certainly is part of the OpenAI API: https://platform.openai.com/docs/api-reference/models/retrieve

I'm not trying to use chat completion. OpenAI defines two endpoints for model listing and retrieval:

- `GET /v1/models` (https://platform.openai.com/docs/api-reference/models/list)
- `GET /v1/models/{model}` (https://platform.openai.com/docs/api-reference/models/retrieve)

The issue is that OpenAI models don't have slashes in them. `GET /v1/models/gpt-4o` works perfectly in the OpenAI API, and so does `GET /v1/models/llava:13b` in the Ollama OpenAI-compatible API.

But let's take this scenario:

1. You pull a model from HuggingFace: `ollama pull hf.co/TheDrummer/Moistral-11B-v3-GGUF`
2. You call the `GET /v1/models` endpoint:

   ```json
   {
       "object": "list",
       "data": [
           {
               "id": "hf.co/TheDrummer/Moistral-11B-v3-GGUF:latest",
               "object": "model",
               "created": 1743863049,
               "owned_by": "library"
           }
       ]
   }
   ```

3. You call the `GET /v1/models/hf.co/TheDrummer/Moistral-11B-v3-GGUF:latest` endpoint:

   ```plain
   404 page not found
   ```

At first I thought it was because slashes have to be escaped, but it turns out that doesn't help. Properly URL-escaped, the model name would be `hf.co%2FTheDrummer%2FMoistral-11B-v3-GGUF%3Alatest`. But calling the `GET /v1/models/hf.co%2FTheDrummer%2FMoistral-11B-v3-GGUF%3Alatest` endpoint also results in `404 page not found`.

So it's not actually possible to retrieve a model with slashes in its name in an OpenAI-compatible way: the URL isn't being decoded correctly, Ollama doesn't find the model id, and the request ends in an HTTP 404 error.
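The symptoms described above are consistent with a router whose `{model}` path parameter matches only a single path segment, so any `/` in the model id falls through to "no route found". A minimal Python sketch of that difference, using regexes as stand-ins for route patterns (this is illustrative only, not Ollama's actual routing code):

```python
import re

def match_single_segment(path):
    """Simulates a '/v1/models/:model' style route: ':model' stops at '/'."""
    m = re.fullmatch(r"/v1/models/([^/]+)", path)
    return m.group(1) if m else None

def match_wildcard(path):
    """Simulates a '/v1/models/*model' style route: the wildcard spans slashes."""
    m = re.fullmatch(r"/v1/models/(.+)", path)
    return m.group(1) if m else None

# A plain model name matches either way:
print(match_single_segment("/v1/models/llava:13b"))      # llava:13b
# A slashed name falls through the single-segment route (hence the 404):
print(match_single_segment("/v1/models/aaa/llava:13b"))  # None
# A wildcard-style route would capture the whole slashed id:
print(match_wildcard("/v1/models/aaa/llava:13b"))        # aaa/llava:13b
```

Note that a wildcard route also has to handle percent-encoded forms like `aaa%2Fllava:13b`, since some clients escape the id before building the path.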

Reference: github-starred/ollama#32413