[GH-ISSUE #6734] Error: pull model manifest: file does not exist (again) #4244

Closed
opened 2026-04-12 15:10:30 -05:00 by GiteaMirror · 4 comments

Originally created by @MatrixForgeLabs on GitHub (Sep 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6734

What is the issue?

I have read about the "Error: pull model manifest: file does not exist" issue others are having, but it seems like it's a simple typo for everyone else.

I'm simply trying to import a GGUF file as a model. I create the Modelfile:

```
FROM DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
PARAMETER temperature 9
SYSTEM You are Peter from Family Guy, acting as an assistant.
```

If I use the full path to the file it fails completely. The model is in the same directory.

I wonder if it's because the model is a symlink?

There is little information on this issue, and it seems I'll be stuck using models from Ollama's library. The basic docs say we can use any model, such as GGUFs.
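For reference, this is roughly what a working import is expected to look like (a sketch, assuming the GGUF really does sit next to the Modelfile; the model name `darkidol` is just an example):

```
# Modelfile — FROM accepts a filename relative to the Modelfile's
# directory, or an absolute path to the GGUF
FROM ./DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf
PARAMETER temperature 9
SYSTEM You are Peter from Family Guy, acting as an assistant.
```

followed by `ollama create darkidol -f Modelfile` run from that same directory.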

OS

Linux

GPU

Other

CPU

Intel

Ollama version

0.3.9

GiteaMirror added the bug label 2026-04-12 15:10:30 -05:00

@rick-github commented on GitHub (Sep 10, 2024):

It should work; lots of people download GGUFs and import them. There may be some feature of your environment that is tripping you up. How did you install Ollama — system install or Docker? Is the command `ollama` an alias? Why is the filename a symlink? When you say "fails completely", is that a different error message to "file does not exist"? If you replace the symlink with a hard link, does it work any better?
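The symlink theory is easy to check before involving Ollama at all. A minimal diagnostic sketch using only the Python standard library (the path below is a placeholder — substitute your own GGUF path):

```python
import os

def diagnose(path: str) -> dict:
    """Report what a path actually points at, to rule out dangling symlinks."""
    return {
        "exists": os.path.exists(path),      # False for a dangling symlink
        "is_symlink": os.path.islink(path),  # True even if the target is gone
        "target": os.path.realpath(path),    # fully resolved absolute path
    }

print(diagnose("DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored.i1-Q4_K_M.gguf"))
```

If `exists` is `False` while `is_symlink` is `True`, the link is dangling and the "file does not exist" error is accurate.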

FWIW I downloaded the Q4_K_M from mradermacher/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored-i1-GGUF and imported it with your Modelfile without a problem. The model itself seems untrained, so maybe I got the wrong source, but the process worked fine.

```
$ ollama show darkidol:Q4_K_M
  Model
        parameters              8.0B
        quantization            Q4_K_M
        arch                    llama
        context length          131072
        embedding length        4096

  Parameters
        temperature     9

  System
        You are Peter from Family Guy, acting as an assistant.
```

@jmorganca commented on GitHub (Sep 12, 2024):

Thanks for the issue!

Indeed, as @rick-github mentioned, it's most likely because that file wasn't in the same directory as the `Modelfile`. Let me know if that doesn't fix it for you.
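The expected behaviour for a bare filename in `FROM` can be illustrated with a small resolution sketch (this is not Ollama's actual implementation, just the path logic it implies — a relative `FROM` only works when the GGUF is reachable from the Modelfile's own directory):

```python
import os

def resolve_from(modelfile_path: str, from_value: str) -> str:
    """Resolve a FROM entry: absolute paths pass through unchanged;
    relative names are joined onto the Modelfile's directory."""
    if os.path.isabs(from_value):
        return from_value
    base = os.path.dirname(os.path.abspath(modelfile_path))
    return os.path.join(base, from_value)
```

So with the Modelfile at `/usr/local/models/Modelfile`, `FROM model.gguf` is only satisfiable by `/usr/local/models/model.gguf`.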


@Propfend commented on GitHub (Mar 19, 2025):

@rick-github @jmorganca I have the same GPU, CPU, and OS environment as the OP.

In `/usr/local/models` I have:

```
Modelfile
qwen2_500m.gguf   (downloaded from the internet)
```

In the Modelfile I have:

```
FROM /usr/local/models/qwen2_500m.gguf
```

With `/usr/local/models` as the working directory I ran:

```bash
ollama create qwen2_500m_ollama -f Modelfile
```

The model was created and I can list it:

```bash
$ ollama list
qwen2_500m_ollama:latest

$ ollama show qwen2_500m_ollama:latest
architecture      qwen2
.
.
.
```

I can run it:

```bash
$ ollama run qwen2_500m_ollama:latest
```

But for some reason I can't pull it, whether specifying `OLLAMA_HOST`:

```bash
OLLAMA_HOST="localhost:7070" ollama pull qwen2_500m_ollama:latest
Error: pull model manifest: file does not exist
```

or not:

```bash
ollama pull qwen2_500m_ollama:latest
Error: pull model manifest: file does not exist
```

Note: the server is running.


@rick-github commented on GitHub (Mar 20, 2025):

`ollama pull` downloads a model from the Ollama library, Hugging Face, or another repo, depending on the model name. You can't pull a model you created locally because it's local; it's not available from a remote repo. With some prep, you can [push](https://github.com/ollama/ollama/blob/main/docs/api.md#push-a-model) a model to a repo and then download it.
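For the local model in this thread, the push workflow looks roughly like this (a sketch — `youruser` is a placeholder for an ollama.com account namespace, and pushing requires that account to trust your machine's Ollama public key):

```
# copy the local model under your registry namespace, then push it
ollama cp qwen2_500m_ollama youruser/qwen2_500m
ollama push youruser/qwen2_500m

# once pushed, it can be pulled from any machine
ollama pull youruser/qwen2_500m
```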

Reference: github-starred/ollama#4244