[GH-ISSUE #6308] Getting Error: unexpected status code 200 when pulling a model from an internal registry v0.3.1 and above #29716

Open
opened 2026-04-22 08:52:49 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @killerwhile on GitHub (Aug 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6308

### What is the issue?

Starting with version 0.3.1, when pulling a model from an internal registry (https://distribution.github.io/distribution/), I'm getting the error `unexpected status code 200`.
Versions up to 0.3.0 worked properly with this setup.

The line returning the error seems to be https://github.com/ollama/ollama/compare/v0.3.0...v0.3.1#diff-9e32d213fc229fc9c327863932f4fc8a875d854333b5ad2dffa9b43fd0848232R226

### How to reproduce

Via Docker Compose, I create a test environment with ollama and a registry.

```
version: "3"

services:
  ollama:
    image: ollama/ollama:0.3.0
    ports:
      - "11434"

  registry:
    image: registry:2
    environment:
      REGISTRY_LOG_LEVEL: debug
      REGISTRY_LOG_ACCESSLOG_DISABLED: "false"
    ports:
      - "5000"
```

Start the test stack via `docker compose up -d`.

In the snippet above, I'm using ollama v0.3.0.

The following commands pull a model (qwen2:0.5b, for the sake of size) and push it to the local registry.

```
docker compose exec ollama ollama pull qwen2:0.5b
docker compose exec ollama ollama cp qwen2:0.5b registry:5000/library/qwen2:0.5b
docker compose exec ollama ollama push registry:5000/library/qwen2:0.5b --insecure
```

Now the models can be removed from ollama:

```
docker compose exec ollama ollama rm qwen2:0.5b registry:5000/library/qwen2:0.5b
```

And re-downloaded from the local registry:

```
docker compose exec ollama ollama pull registry:5000/library/qwen2:0.5b --insecure
```

With ollama versions up to and including 0.3.0, this works.
With ollama versions from 0.3.1 onward, I'm getting the following error:

```
Error: unexpected status code 200
```

### OS

Docker

### GPU

_No response_

### CPU

_No response_

### Ollama version

0.3.1 and above

GiteaMirror added the bug label 2026-04-22 08:52:49 -05:00
Author
Owner

@jorander commented on GitHub (Aug 17, 2024):

I experience the exact same issue still in version 0.3.6.

With version <= 0.3.0, pulling from my internal library set up in Artifactory works just fine, but in later versions I get the `unexpected status code 200`.
It seems to me that the problem is that [this code](https://github.com/ollama/ollama/commit/c8af3c2d969a99618eecf169bd75aa112573ac27#diff-9e32d213fc229fc9c327863932f4fc8a875d854333b5ad2dffa9b43fd0848232R183-R222), added in PR #5962, assumes there will be at least one redirect, but when using an internal registry we have the direct URL from the start.

Author
Owner

@yuebo commented on GitHub (Sep 4, 2024):

I hit the same error, and I set up an nginx proxy to resolve it.
You need two domain names in your private network. Here I use `hub-ollama.test.com` and `blobs-ollama.test.com`, with the related nginx config like this:

```
server {
    listen       80;
    listen  [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name hub-ollama.test.com;
    client_max_body_size 409600m;
    ssl_certificate "test.pem";
    ssl_certificate_key "test.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_pass http://127.0.0.1:5080;
        proxy_redirect http://127.0.0.1:5080/ https://hub-ollama.test.com;
    }
    location ~ ^/v2/.*/blobs/.*$ {
        return 307 https://blobs-ollama.test.com$request_uri;
    }
}

server {
    listen       80;
    listen  [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name blobs-ollama.test.com;
    client_max_body_size 409600m;
    ssl_certificate "test.pem";
    ssl_certificate_key "test.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_pass http://127.0.0.1:5080;
        proxy_redirect http://127.0.0.1:5080/ https://hub-ollama.test.com;
    }
}
```

The key configuration is this:

```
location ~ ^/v2/.*/blobs/.*$ {
    return 307 https://blobs-ollama.test.com$request_uri;
}
```

You can then use ollama to pull private registry images like `ollama pull hub-ollama.test.com/llama3:7b`.

Author
Owner

@GhostDog98 commented on GitHub (Feb 17, 2025):

I'm getting this exact same issue when pulling certain models from Hugging Face.
To replicate:
`ollama pull hf.co/zetasepic/Qwen2.5-32B-Instruct-abliterated-v2-GGUF:IQ4_XS`

Is there a workaround for this that doesn't involve setting up an entire freaking proxy server?

Author
Owner

@swiftimundo commented on GitHub (Feb 21, 2025):

I was having this issue on v0.3.3. Saw the PR and redownloaded. On 0.5.11 I don't have this issue anymore. @GhostDog98 upgrade your client.

Reference: github-starred/ollama#29716