[GH-ISSUE #10863] Model not found - Inconsistent API Behavior: api/generate fails over IPv6 localhost, while api/embeddings works. #53650

Closed
opened 2026-04-29 04:22:59 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @ppblaauw on GitHub (May 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10863

What is the issue?

I'm encountering an issue where the /api/generate endpoint for a text generation model (deepseek-r1:1.5b) consistently fails with a "model not found" error when accessed via http://localhost:11434 (which resolves to IPv6 ::1 on my system). However, the /api/embeddings endpoint for nomic-embed-text:latest works perfectly fine using the exact same http://localhost:11434 URL.

Crucially, both models and endpoints work correctly when accessed via http://127.0.0.1:11434 (IPv4) or when curl is explicitly forced to use IPv4 (-4 flag). This suggests a specific problem with how the /api/generate endpoint handles requests coming in over IPv6.
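The split behavior hinges on which address `localhost` resolves to first: on dual-stack systems it maps to both `::1` and `127.0.0.1`, and clients such as curl typically try IPv6 first. A minimal Python sketch (hostname and port taken from this report) showing the dual resolution:

```python
import socket

# Resolve "localhost" the way an HTTP client would; on dual-stack
# systems this typically yields both ::1 (IPv6) and 127.0.0.1 (IPv4),
# and the order decides which address is tried first.
infos = socket.getaddrinfo("localhost", 11434, proto=socket.IPPROTO_TCP)
addrs = [info[4][0] for info in infos]
print(addrs)  # e.g. ['::1', '127.0.0.1'] on a dual-stack host
```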

Steps to Reproduce:

1. Ensure the Ollama server is running (e.g., `ollama serve`).

2. Pull the necessary models:

```bash
ollama pull deepseek-r1:1.5b
ollama pull nomic-embed-text:latest
```

3. Confirm deepseek-r1:1.5b works via the CLI (expected to work):

```bash
ollama run deepseek-r1:1.5b "what model are you?"
```

4. Attempt to use api/generate with localhost (expected to fail):

```bash
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:1.5b" , "prompt": " what model are you?" }' --verbose
```

5. Attempt to use api/embeddings with localhost (expected to work):

```bash
curl http://localhost:11434/api/embeddings -d '{ "model": "nomic-embed-text", "prompt": "The sky is blue because of Rayleigh scattering" }' --verbose
```

6. Attempt to use api/generate with 127.0.0.1 (expected to work):

```bash
curl http://127.0.0.1:11434/api/generate -d '{"model": "deepseek-r1:1.5b" , "prompt": " what model are you?" }'
```

7. Attempt to use api/generate with localhost and forced IPv4 (expected to work):

```bash
curl -4 http://localhost:11434/api/generate -d '{"model": "deepseek-r1:1.5b" , "prompt": " what model are you?" }'
```

Expected Behavior:

All curl commands (Steps 4, 5, 6, 7) should successfully return a valid response from the Ollama server, including the api/generate request made via localhost.

Actual Behavior:

Step 4 (Fails): Returns {"error":"model 'deepseek-r1:1.5b' not found"} with an HTTP/1.1 404 Not Found status. The curl --verbose output confirms a successful connection to ::1 (IPv6).
Step 5 (Works): Returns a valid embedding. The curl --verbose output confirms a successful connection to ::1 (IPv6).
Step 6 (Works): Returns a valid text generation.
Step 7 (Works): Returns a valid text generation.

Diagnostic Information:

`ollama list` output:

```
NAME                         ID           SIZE      MODIFIED
nomic-embed-text:latest      0a109f422b47 274 MB    6 hours ago
deepseek-r1:1.5b             a42b25d8c10a 1.1 GB    4 months ago
llama3.2:1b                  baf6a787fdff 1.3 GB    8 months ago
llama3.2:latest              a80c4f17acd5 2.0 GB    8 months ago
```

`ping -a localhost` output:

```
Pinging DESKTOP-USER [::1] with 32 bytes of data:
Reply from ::1: time<1ms
Reply from ::1: time<1ms
Reply from ::1: time<1ms
Reply from ::1: time<1ms
```
Verbose output for failing api/generate call (Step 4):

```
* Host localhost:11434 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:11434...
* Connected to localhost (::1) port 11434
* using HTTP/1.x
> POST /api/generate HTTP/1.1
> Host: localhost:11434
> User-Agent: curl/8.12.1
> Accept: */*
> Content-Length: 59
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 59 bytes
< HTTP/1.1 404 Not Found
< Content-Type: application/json; charset=utf-8
< Date: Mon, 26 May 2025 09:31:27 GMT
< Content-Length: 46
<
{"error":"model 'deepseek-r1:1.5b' not found"}* Connection #0 to host localhost left intact
```

Verbose output for working api/embeddings call (Step 5):

```
* Host localhost:11434 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:11434...
* Connected to localhost (::1) port 11434
* using HTTP/1.x
> POST /api/embeddings HTTP/1.1
> Host: localhost:11434
> User-Agent: curl/8.12.1
> Accept: */*
> Content-Length: 91
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 91 bytes
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Date: Mon, 26 May 2025 09:46:08 GMT
< Transfer-Encoding: chunked
< [rest of embedding response]
```

Environment:

Operating System: Windows 11
Ollama Version: 0.5.12

Relevant log output


OS

Windows

GPU

No response

CPU

Intel

Ollama version

0.5.12

GiteaMirror added the bug label 2026-04-29 04:22:59 -05:00
Author
Owner

@rick-github commented on GitHub (May 26, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

What's the output of:

```bash
curl 'http://[::1]:11434/api/version'
curl 'http://[::1]:11434/api/tags'
curl 'http://127.0.0.1:11434/api/version'
curl 'http://127.0.0.1:11434/api/tags'
```
Author
Owner

@ppblaauw commented on GitHub (May 26, 2025):

```
curl 'http://[::1]:11434/api/version'
{"version":"0.7.0"}

curl 'http://[::1]:11434/api/tags' | jq
{
  "models": [
    {
      "name": "nomic-embed-text:latest",
      "model": "nomic-embed-text:latest",
      "modified_at": "2025-05-21T04:24:51.73054831Z",
      "size": 274302450,
      "digest": "0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "nomic-bert",
        "families": [
          "nomic-bert"
        ],
        "parameter_size": "137M",
        "quantization_level": "F16"
      }
    },
    {
      "name": "qwen2.5:7b-instruct-q4_K_M",
      "model": "qwen2.5:7b-instruct-q4_K_M",
      "modified_at": "2025-05-21T04:24:50.680466722Z",
      "size": 4683087332,
      "digest": "845dbda0ea48ed749caafd9e6037047aa19acfcfd82e704d7ca97d631a0b697e",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "qwen2",
        "families": [
          "qwen2"
        ],
        "parameter_size": "7.6B",
        "quantization_level": "Q4_K_M"
      }
    }
  ]
}

curl 'http://127.0.0.1:11434/api/version'
{"version":"0.5.12"}

curl -H "Accept: application/json" 'http://127.0.0.1:11434/api/tags' | jq
{
  "models": [
    {
      "name": "deepseek-r1:1.5b",
      "model": "deepseek-r1:1.5b",
      "modified_at": "2025-05-26T14:50:59.5454808+08:00",
      "size": 1117322599,
      "digest": "a42b25d8c10a841bd24724309898ae851466696a7d7f3a0a408b895538ccbc96",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "qwen2",
        "families": [
          "qwen2"
        ],
        "parameter_size": "1.8B",
        "quantization_level": "Q4_K_M"
      }
    },
    {
      "name": "nomic-embed-text:latest",
      "model": "nomic-embed-text:latest",
      "modified_at": "2025-05-26T08:11:10.0245722+08:00",
      "size": 274302450,
      "digest": "0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "nomic-bert",
        "families": [
          "nomic-bert"
        ],
        "parameter_size": "137M",
        "quantization_level": "F16"
      }
    },
    {
      "name": "llama3.2:1b",
      "model": "llama3.2:1b",
      "modified_at": "2024-09-26T15:46:18.6438371+08:00",
      "size": 1321098329,
      "digest": "baf6a787fdffd633537aa2eb51cfd54cb93ff08e28040095462bb63daf552878",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": [
          "llama"
        ],
        "parameter_size": "1.2B",
        "quantization_level": "Q8_0"
      }
    },
    {
      "name": "llama3.2:latest",
      "model": "llama3.2:latest",
      "modified_at": "2024-09-26T15:25:03.773311+08:00",
      "size": 2019393189,
      "digest": "a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": [
          "llama"
        ],
        "parameter_size": "3.2B",
        "quantization_level": "Q4_K_M"
      }
    }
  ]
}
```

Thanks, that explains more
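For the record, the outputs above report different versions per address family (0.7.0 over IPv6, 0.5.12 over IPv4) and disjoint model inventories, which is consistent with two separate Ollama instances listening on port 11434, one per address family. A small Python sketch of the comparison, using model names abbreviated from the `/api/tags` responses above:

```python
# Compare model inventories returned by /api/tags over IPv6 vs IPv4.
# Sample data abbreviated from the outputs in this comment; disjoint
# model sets plus differing /api/version strings point at two
# distinct server instances rather than one endpoint misbehaving.
tags_v6 = {"models": [{"name": "nomic-embed-text:latest"},
                      {"name": "qwen2.5:7b-instruct-q4_K_M"}]}
tags_v4 = {"models": [{"name": "deepseek-r1:1.5b"},
                      {"name": "nomic-embed-text:latest"},
                      {"name": "llama3.2:1b"},
                      {"name": "llama3.2:latest"}]}

names_v6 = {m["name"] for m in tags_v6["models"]}
names_v4 = {m["name"] for m in tags_v4["models"]}

# Models visible only via IPv4 -- including the one that 404s over ::1.
print(sorted(names_v4 - names_v6))
# -> ['deepseek-r1:1.5b', 'llama3.2:1b', 'llama3.2:latest']
```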


Reference: github-starred/ollama#53650