[GH-ISSUE #484] ollama run doesn't pull model if using a remote host #62260

Closed
opened 2026-05-03 08:01:20 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @jmorganca on GitHub (Sep 7, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/484

Currently, when running `ollama run` against a remote instance of Ollama (e.g. `OLLAMA_HOST=192.168.1.32:11434 ollama run llama2`), it will error if the model does not exist (vs. pulling it). We rely on the client checking for the file here: https://github.com/jmorganca/ollama/blob/main/cmd/cmd.go#L115. Instead, we can use an API such as `/api/show` or `/api/generate` to check whether the model has been pulled.
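A minimal sketch in Go of the proposed check, querying the remote server's `/api/show` endpoint instead of looking at the local filesystem. The helper names (`modelExists`, `newFakeServer`) and the status-code handling are assumptions for illustration, not the actual client code in `api/client.go`; the fake server stands in for a remote Ollama instance:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// modelExists asks a remote Ollama server whether a model has already been
// pulled by POSTing {"name": ...} to /api/show. A 200 response means the
// model is present; a 404 means it is not, so the CLI should pull it first.
func modelExists(host, model string) (bool, error) {
	payload, err := json.Marshal(map[string]string{"name": model})
	if err != nil {
		return false, err
	}
	resp, err := http.Post("http://"+host+"/api/show", "application/json", bytes.NewReader(payload))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		return true, nil
	case http.StatusNotFound:
		return false, nil
	default:
		return false, fmt.Errorf("unexpected status from /api/show: %s", resp.Status)
	}
}

// newFakeServer stands in for a remote Ollama instance in this sketch:
// it knows only the model "llama2" and returns 404 for anything else.
func newFakeServer() string {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req map[string]string
		_ = json.NewDecoder(r.Body).Decode(&req)
		if r.URL.Path == "/api/show" && req["name"] == "llama2" {
			w.WriteHeader(http.StatusOK)
			return
		}
		http.NotFound(w, r)
	}))
	return strings.TrimPrefix(srv.URL, "http://")
}

func main() {
	host := newFakeServer()
	if ok, err := modelExists(host, "llama2"); err == nil && ok {
		fmt.Println("llama2 is present on the remote host")
	}
	if ok, err := modelExists(host, "mistral"); err == nil && !ok {
		fmt.Println("mistral is missing; the CLI should pull before running")
	}
}
```

With a check like this in `RunHandler`, a missing model on the remote host would trigger a pull over the API rather than a local file-not-found error.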

GiteaMirror added the "good first issue" and "bug" labels 2026-05-03 08:01:20 -05:00

Reference: github-starred/ollama#62260