[GH-ISSUE #14593] Ollama tries to pull when it shouldn't #71521

Open
opened 2026-05-05 02:02:26 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @mirkovich0 on GitHub (Mar 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14593

What is the issue?

Downloaded a local GGUF model.
Created a `Modelfile` pointing to the GGUF file and ran `ollama create` for my model.
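
For reference, roughly what that setup looks like (the file names and the model name below are placeholders, not taken from this report):

```shell
# Minimal Modelfile pointing at a locally downloaded GGUF (path is an example)
cat > Modelfile <<'EOF'
FROM ./mymodel.Q4_K_M.gguf
EOF

# Register it with Ollama under a local name
ollama create mymodel -f Modelfile

# It now shows up as mymodel:latest
ollama list
```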

  • `ollama list` shows my model with the name I assigned and `:latest`
  • `ollama run` starts the model and it works (same for `ollama launch` and selecting "Run a model")
  • `ollama launch codex` fails... because it tries to pull the model (it shouldn't; there's a local manifest and the model is registered and recognized -- I can pick it from the list when using `--config`)

Error: OSS setup failed: Pull failed: pull model manifest: file does not exist

  • Launching `codex -p myprofile` also fails, even though I have configured `base_url = "http://localhost:11434/v1"`. After the "model metadata not found" error I get:

{"error":{"message":"registry.ollama.ai/library/mymodel:latest does not support tools","type":"api_error","param":null,"code":null}}

Setting `OLLAMA_NO_CLOUD=1` does not fix this.
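
A quick way to check whether the "does not support tools" complaint comes from the model itself (the model name here is a placeholder): newer Ollama builds list model capabilities in `ollama show`, and the local OpenAI-compatible endpoint can be queried directly.

```shell
# If "tools" is missing from the Capabilities section, tool calls will be rejected
ollama show mymodel

# Reproduce the API-side check against the local OpenAI-compatible endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mymodel",
    "messages": [{"role": "user", "content": "hi"}],
    "tools": [{"type": "function", "function": {"name": "noop", "parameters": {"type": "object", "properties": {}}}}]
  }'
```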

Relevant log output


OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.17.5

GiteaMirror added the launch, bug labels 2026-05-05 02:02:27 -05:00
Author
Owner

@drifkin commented on GitHub (Mar 3, 2026):

So the auto-pulling is actually `codex` itself doing it, but we'll look into whether we can change `ollama launch codex` to work around it.

I suspect your `base_url` test shows that even once we fix that, the lack of tool calling for your custom model would still be a problem, since codex requires tool calling. What model are you trying to use? If it's a variant of one we already support, you probably want to build your Modelfile based off our built-in one: tool support is added via either the template or the `Renderer`/`Parser` fields.
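
A sketch of that suggestion, assuming the GGUF is a variant of a model Ollama already ships (the base model name and path below are placeholders): export the built-in Modelfile, keep its template and tool wiring, and only swap the FROM line to the local weights.

```shell
# Dump the built-in Modelfile for the base model this GGUF is a variant of
ollama show --modelfile qwen2.5 > Modelfile

# Edit the FROM line to point at the local GGUF, e.g.:
#   FROM ./mymodel.Q4_K_M.gguf
# then recreate the local model on top of the inherited template
ollama create mymodel -f Modelfile
```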

Author
Owner

@mirkovich0 commented on GitHub (Mar 3, 2026):

Thank you very much.
As of now, I haven't found anything about this on the web, so I'd rather raise it here; even if it gets rejected, somebody seeing the same problem will have somewhere to go.
I'll just stick to freeing up some HDD space and working with a newly downloaded model, as that seems easier.
