[GH-ISSUE #7522] not able to download models from ollama behind proxy #4783

Closed
opened 2026-04-12 15:43:52 -05:00 by GiteaMirror · 19 comments

Originally created by @anshika1234 on GitHub (Nov 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7522

What is the issue?

```
ollama run qwen:4b
pulling manifest
pulling 46bb65206e0e... 0% ▕ ▏ 3.3 MB/2.3 GB 1.4 KB/s 99h+
```

Log content:

```
ollama[719]: time=2024-11-06T10:08:16.817+05:30 level=INFO source=download.go:370 msg="8eeb52dfb3bb part 15 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
```

Settings:

https_proxy is set in the ollama environment.

Please suggest how to debug further.
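As background for anyone hitting this: on a standard Linux install the proxy must be visible to the systemd service, not just the login shell. A minimal sketch of the usual drop-in override approach; the proxy URL is a placeholder, substitute your own:

```shell
# Open a drop-in override for the service unit.
sudo systemctl edit ollama.service
# In the editor, add (placeholder proxy host/port):
#   [Service]
#   Environment="HTTPS_PROXY=http://proxy.example.com:3128"
# Then reload and restart so the service picks up the new environment.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```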

OS

Linux

GPU

No response

CPU

Intel

Ollama version

0.3.14

GiteaMirror added the bug label 2026-04-12 15:43:52 -05:00

@nikhil-swamix commented on GitHub (Nov 6, 2024):

Could you try a different proxy? Some proxies disallow certain ports or throttle bandwidth, in my experience... let me know if it works.


@anshika1234 commented on GitHub (Nov 6, 2024):

No, I cannot change the proxy. But I confirmed there is no port blocking on the proxy server.

I need to debug this further. How do I do that?


@rick-github commented on GitHub (Nov 6, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.

What's the output of the commands:

```
systemctl cat --no-pager ollama
curl -D - https://registry.ollama.ai/v2/library/gemma2/manifests/2b
curl -D - https://registry.ollama.ai/v2/library/gemma2/manifests/2b --noproxy "*"
```

@rick-github commented on GitHub (Nov 6, 2024):

Server logs?


@anshika1234 commented on GitHub (Nov 6, 2024):

```
$ curl -D - https://registry.ollama.ai/v2/library/gemma2/manifests/2b --noproxy "*"
curl: (28) Failed to connect to registry.ollama.ai port 443 after 217352 ms: Connection timed out
```

Log content:

```
level=INFO source=download.go:370 msg="8eeb52dfb3bb part 3 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
```

@rick-github commented on GitHub (Nov 6, 2024):

Full server log.


@rick-github commented on GitHub (Nov 6, 2024):

Full log; there is information in other parts that may be useful in debugging your problem.


@rick-github commented on GitHub (Nov 6, 2024):

Is the ollama server running on the same machine you are running the `ollama run` command on?


@anshika1234 commented on GitHub (Nov 6, 2024):

yes


@nikhil-swamix commented on GitHub (Nov 6, 2024):

Try this, just as a sample. If this works, I believe my guess is right; confirm the result once done:

`ollama run qwen2.5:0.5b`


@anshika1234 commented on GitHub (Nov 6, 2024):

```
/usr/share/ollama/ollama2477802559/runners/cpu# ollama run qwen2.5:0.5b
pulling manifest
pulling c5396e06af29... 100% ▕██████████████████████████▏ 397 MB
pulling 66b9ea09bd5b... 100% ▕██████████████████████████▏  68 B
pulling eb4402837c78... 100% ▕██████████████████████████▏ 1.5 KB
pulling 832dd9e00a68... 100% ▕██████████████████████████▏  11 KB
pulling 005f95c74751... 100% ▕██████████████████████████▏ 490 B
verifying sha256 digest
Error: digest mismatch, file must be downloaded again: want sha256:c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515, got sha256:5bdde73520ef51f67ce75262ffe51b46aca27e89ba7647eb0f5c82cdd81fa479
```

What next?

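Editor's note: a digest mismatch means the bytes on disk do not hash to what the manifest expects, which can also be caused by a proxy rewriting or truncating responses, not only by disk faults. A small sketch to re-check a downloaded blob by hand, assuming GNU coreutils; the blob path in the usage comment is hypothetical, adjust to your models directory:

```shell
# check_blob: compare a file's sha256 against the digest ollama reported.
check_blob() {
  local file="$1" want="$2" got
  got=$(sha256sum "$file" | awk '{print $1}')
  if [ "$got" = "$want" ]; then
    echo "digest ok"
  else
    echo "digest mismatch: want $want, got $got"
  fi
}

# Usage (hypothetical path; blobs live under your OLLAMA_MODELS directory):
# check_blob /usr/share/ollama/.ollama/models/blobs/sha256-c5396e... \
#   c5396e06af294bd101b30dce59131a76d2b773e76950acc870eda801d3ab0515
```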

@nikhil-swamix commented on GitHub (Nov 6, 2024):

There's a problem with your I/O; it could be a disk space issue, a faulty drive, or, if in a container, some settings/permissions.
Previously I discovered that it tries to allocate the full block area on disk before it downloads; my M.2 drive was having issues.
So the best attempt would be to change the OLLAMA_MODELS directory and try the same run command.
Refer to variable settings #7523.
Run `df -h`, select a large drive as the path, and set it as OLLAMA_MODELS.
Set the temp directory to the same location as OLLAMA_MODELS, and make sure write permissions are correct.
On a side note, try only recent models, as the ollama registry may cold-archive the older model you initially tried...


@nikhil-swamix commented on GitHub (Nov 6, 2024):

Attempt 001:

```bash
# Get and set user's .ollama directory explicitly
OLLAMA_HOME=$(eval echo ~$USER)/.ollama
# Create directory if needed
mkdir -p $OLLAMA_HOME
mkdir -p $OLLAMA_HOME/models
# Create ollama user and group if they don't exist
sudo groupadd -f ollama
sudo useradd -r -g ollama ollama 2>/dev/null || true
# Add current user to ollama group
sudo usermod -aG ollama $USER
# Set correct ownership and permissions
sudo chown -R ollama:ollama $OLLAMA_HOME
sudo chmod -R 770 $OLLAMA_HOME
# Stop ollama service safely
sudo service ollama stop
# Set model and temp directories
export OLLAMA_MODELS=$OLLAMA_HOME/models
export OLLAMA_TEMPDIR=$OLLAMA_HOME/models
# Start ollama with explicit home
OLLAMA_HOME=$OLLAMA_HOME OLLAMA_MODELS=$OLLAMA_MODELS OLLAMA_TEMPDIR=$OLLAMA_TEMPDIR ollama run qwen2.5:0.5b
# Note: You may need to log out and back in for group changes to take effect
echo "Please log out and log back in for group changes to take effect"
```

Note: the previous ollama process must be stopped for the new settings to take effect; modify the last part as required.


@anshika1234 commented on GitHub (Nov 6, 2024):

Bypassing the proxy solved the problem. Can we conclude it is not an I/O problem? Thanks a lot for the responses.


@nikhil-swamix commented on GitHub (Nov 6, 2024):

Dear Anshika, glad to hear the attempts yielded a positive outcome. I hope the issue is marked as closed.
Best regards,
Swamix Global AI Solutions, swamix.com


@rick-github commented on GitHub (Nov 6, 2024):

> `OLLAMA_HOME=$OLLAMA_HOME OLLAMA_MODELS=$OLLAMA_MODELS OLLAMA_TEMPDIR=$OLLAMA_TEMPDIR ollama run qwen2.5:0.5b`

This will not change the environment variables, they need to be set in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server).

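Editor's note: a quick way to confirm whether the running server actually received the variables (rather than just your shell) is to read the process's environment from `/proc`. This is a general Linux sketch, not ollama-specific:

```shell
# proc_env: print the environment of a running process, one variable per line.
# /proc/<pid>/environ is NUL-separated; tr converts the NULs to newlines.
proc_env() {
  tr '\0' '\n' < "/proc/$1/environ"
}

# Usage (root is usually needed to read the service's environ):
# sudo sh -c 'tr "\0" "\n" < /proc/$(pidof ollama)/environ' | grep -i -E 'proxy|ollama'
```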

@nikhil-swamix commented on GitHub (Nov 6, 2024):

> > `OLLAMA_HOME=$OLLAMA_HOME OLLAMA_MODELS=$OLLAMA_MODELS OLLAMA_TEMPDIR=$OLLAMA_TEMPDIR ollama run qwen2.5:0.5b`
>
> This will not change the environment variables, they need to be set in the server environment.

Ahh, agreed. In Windows PowerShell `$env:something=value` worked; my answer may have been biased from a debugging standpoint. I believe this sets the variable only once per session and it is then lost. Nevertheless, thanks for the correction.


@anshika1234 commented on GitHub (Nov 6, 2024):

> > `OLLAMA_HOME=$OLLAMA_HOME OLLAMA_MODELS=$OLLAMA_MODELS OLLAMA_TEMPDIR=$OLLAMA_TEMPDIR ollama run qwen2.5:0.5b`
>
> This will not change the environment variables, they need to be set in the server environment.

Yes, I have set them in the server environment only. But it worked only when the proxy was bypassed.


@anshika1234 commented on GitHub (Nov 6, 2024):

Thanks again.

Reference: github-starred/ollama#4783