[GH-ISSUE #755] Ollama re-attempts to pull model when served on a remote server #357

Closed
opened 2026-04-12 09:59:56 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @zenarcher007 on GitHub (Oct 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/755

I am running the Ollama server on a remote machine, forwarding the default port "11434" to localhost over an SSH tunnel. On my local machine, every time the client (`ollama run`) is invoked, Ollama attempts to pull the model on the server and verify its hash, even when the model is already installed: a process that takes additional time. Since most other client commands, such as `ollama list`, work as expected with the remote-server configuration, it is expected that `ollama run` would likewise detect that the model is already installed on the server without re-pulling and re-verifying it.
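For reference, the tunnel setup described above can be reproduced roughly like this (a sketch; the host name and user are placeholders, and `OLLAMA_HOST` is the client-side environment variable used to target a non-default server address):

```shell
# Forward the remote Ollama port 11434 to the same port on localhost.
ssh -N -L 11434:localhost:11434 user@remote-host &

# Point the local client at the tunnel endpoint.
export OLLAMA_HOST=127.0.0.1:11434
ollama list          # works: queries the remote server through the tunnel
ollama run codellama:13b Hello   # triggers the re-pull described below
```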

This produces output such as the following:

```
> ollama run codellama:13b Hello
pulling manifest
pulling a44062a96a2b... 100% |█████████████████| (7.3/7.3 GB, 2.0 TB/s)
pulling 2c8743bdc4ad... 100% |█████████████████| (7.0/7.0 kB, 144 MB/s)
pulling 38fa20ee7daa... 100% |██████████████████| (4.8/4.8 kB, 50 MB/s)
pulling 578a2e81f706... 100% |████████████████████| (95/95 B, 2.5 MB/s)
pulling 404e21afdc6a... 100% |████████████████████| (30/30 B, 870 kB/s)
pulling 9423dcb51326... 100% |███████████████████| (508/508 B, 15 MB/s)
verifying sha256 digest
writing manifest
removing any unused layers
success
 Hello! It's nice to meet you. Is there anything in particular you would like to chat about?
```

Without investigating the code too deeply, this section in the run handler of `ollama/cmd/cmd.go` appears to be where the problem occurs: the model name is never matched against the server's list, so execution falls through to the `PullHandler` call below it.

```go
canonicalModelPath := server.ParseModelPath(args[0])
for _, model := range models.Models {
	if model.Name == canonicalModelPath.GetShortTagname() {
		return RunGenerate(cmd, args)
	}
}
```
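To illustrate how a strict string comparison like the one above can miss an installed model: the server may report a name such as `codellama:13b` while the client canonicalizes the argument differently (e.g. with a registry prefix or a default tag). The sketch below is hypothetical — `normalizeModelName` is not part of Ollama's codebase, and the prefix/tag conventions are assumptions — but it shows the normalization idea:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeModelName is a hypothetical helper: it reduces both the
// server-reported name and the user-supplied argument to a common form
// before comparing, so "registry.example.com/library/codellama:13b" and
// "codellama:13b" are treated as the same model.
func normalizeModelName(name string) string {
	// Strip any registry/namespace prefix, keeping the final path element.
	if i := strings.LastIndex(name, "/"); i >= 0 {
		name = name[i+1:]
	}
	// Apply a default tag when none was given.
	if !strings.Contains(name, ":") {
		name += ":latest"
	}
	return name
}

func main() {
	installed := []string{"codellama:13b", "llama2:latest"}
	requested := "registry.example.com/library/codellama:13b"

	found := false
	for _, m := range installed {
		if normalizeModelName(m) == normalizeModelName(requested) {
			found = true
		}
	}
	fmt.Println(found) // prints: true — the model is present, no pull needed
}
```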
GiteaMirror added the bug label 2026-04-12 09:59:56 -05:00
Author
Owner

@zenarcher007 commented on GitHub (Oct 11, 2023):

It appears I was not using the latest version of Ollama. Updating the version of Ollama via Homebrew appears to solve the issue. I will close this issue.

Author
Owner

@jmorganca commented on GitHub (Oct 11, 2023):

@zenarcher007 great!


Reference: github-starred/ollama#357