[GH-ISSUE #5869] Error: file does not exist but it exists #65697

Open
opened 2026-05-03 22:16:43 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @DevLLM on GitHub (Jul 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5869

Originally assigned to: @BruceMacD on GitHub.

What is the issue?

Hello, I want to push my model to ollama, but I get this error:

retrieving manifest
Error: file does not exist

The problem is that I do have the file, specifically `C:\Users\User\.ollama\models\manifests\registry.ollama.ai\_\mymodel\latest`,

and my username is `_` (link: https://ollama.com/_ ), and I can't change my username.

```
D:\ollama> ollama create _/mymodel:latest -f Modelfile
transferring model data
using existing layer sha256:617ba424eabae67d228cf4598d2b18d9656b73c1f8f5bfa974ead81485dad2a5
using existing layer sha256:f5dc666b38fce911ccd916bcb13ea78a8002803fd11d5bb6486c4dd76ab8223f
using existing layer sha256:3dddcbf82aec37d515d388e1141900e1530f74f20c5091f64567609a56fe8f43
using existing layer sha256:023c31c9015bbf14d78183c19eec819c3142e791c857bbc3989e53250f00561d
using existing layer sha256:c50ad1ef7469cb081d31e4c321e73562e1e657e890a325b4d7214f8988fd1678
using existing layer sha256:6a6636a5d2ef8c1f29444967fb0f17930369d2c53117d39bd3926760d1062230
writing manifest
success

D:\ollama> ollama list
NAME                ID              SIZE     MODIFIED
_/mymodel:latest    37dad3f2b9d3    13 GB    18 seconds ago
mymodel:latest      37dad3f2b9d3    13 GB    23 minutes ago

D:\ollama> ollama push _/mymodel:latest
retrieving manifest
Error: file does not exist
```
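The failure is consistent with the `_` namespace being rejected server-side even though account creation accepted it. As a sketch of the kind of rule that would reject it (a hypothetical pattern for illustration only, not Ollama's actual validation code), a namespace check requiring the name to start and end with an alphanumeric character rejects a bare `_`:

```python
import re

# Hypothetical namespace rule (illustrative only): must start and end
# with an alphanumeric character; hyphens/underscores allowed inside.
NAMESPACE = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9_-]*[A-Za-z0-9])?$")

def valid_namespace(name: str) -> bool:
    """Return True if `name` matches the illustrative namespace rule."""
    return NAMESPACE.fullmatch(name) is not None
```

Under such a rule, `valid_namespace("mymodel")` passes while `valid_namespace("_")` fails, matching the behavior seen above.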

OS

Windows, WSL2

GPU

Nvidia, Intel

CPU

Intel

Ollama version

0.2.7

GiteaMirror added the bug label 2026-05-03 22:16:43 -05:00

@BruceMacD commented on GitHub (Jul 25, 2024):

Hi @DevLLM, this one slipped past our validation. Send me an email at bruce.macdonald-at-ollama.com and I'll get you switched over to whatever valid namespace you'd like.


@bmizerany commented on GitHub (Jul 25, 2024):

@DevLLM Will you please confirm the path you pasted (`C:Users\User.ollama\models\manifests\registry.ollama.ai_\mymodel\latest`) is correct? If so, it's also a bug that there is no `\` after `.ai` and before `_`.
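On the path question: assuming the manifest path is simply the models directory, registry host, namespace, model, and tag joined as separate segments (an assumption consistent with the path quoted in the issue, not a statement about Ollama's internals), the `_` namespace should appear as its own directory between the registry and the model name:

```python
from pathlib import PureWindowsPath

def manifest_path(models_dir, registry, namespace, model, tag):
    """Assumed layout: <models>/manifests/<registry>/<namespace>/<model>/<tag>."""
    return PureWindowsPath(models_dir, "manifests", registry, namespace, model, tag)

p = manifest_path(r"C:\Users\User\.ollama\models",
                  "registry.ollama.ai", "_", "mymodel", "latest")
# "_" is its own path segment; a path with ".ai_" fused together would
# indicate the separator was dropped somewhere.
```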


@Minxiangliu commented on GitHub (Nov 15, 2024):

Hi @BruceMacD , @bmizerany
I tried creating a [model](https://huggingface.co/leafspark/Llama-3.2-11B-Vision-Instruct-GGUF/tree/main) and followed this [example](https://medium.com/@sudarshan-koirala/ollama-huggingface-8e8bc55ce572) to execute the ollama commands. Everything went smoothly during the creation process, but when I tried to run the model, it said the model couldn't be found. Did I make a mistake somewhere? Could you please help me?
Thank you in advance.

OS

Docker ubuntu22.04 in Windows WSL2

GPU

Nvidia, Intel

CPU

Intel

Ollama version

0.4.1

Modelfile:

```
# Modelfile
FROM "/build_docker/Llama-3.2/Llama-3.2-11B-Vision-Instruct-mmproj.f16.gguf"

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"

TEMPLATE """
<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```

Log:

```
(py310) root@019a5ec06402:~# ollama create my-own-model -f Modelfile
transferring model data 100%
using existing layer sha256:622429e8d31810962dd984bc98559e706db2fb1d40e99cb073beb7148d909d73
creating new layer sha256:8971eb8e89ce161a65232db6db5019953dbc313fc296d9e6e9d7823e395673b9
creating new layer sha256:f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216
creating new layer sha256:896227e1de4d5b48a7fbf9eec7a095ab38351f5c3efb11cf81f616cb9208570e
writing manifest
success
(py310) root@019a5ec06402:~# ollama list
NAME                   ID              SIZE      MODIFIED
my-own-model:latest    c4901261a022    1.9 GB    About a minute ago
llama3.2:1b            baf6a787fdff    1.3 GB    2 hours ago
(py310) root@019a5ec06402:~# ollama run my-own-model:latest
pulling manifest
Error: pull model manifest: file does not exist
(py310) root@019a5ec06402:~#
```

ollama serve:

```
[GIN] 2024/11/15 - 16:38:06 | 200 |      2.5631ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2024/11/15 - 16:38:34 | 200 |        91.4µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 16:38:34 | 200 |       322.4µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/11/15 - 16:38:41 | 200 |        18.3µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 16:38:41 | 200 |       357.1µs |       127.0.0.1 | POST     "/api/generate"
[GIN] 2024/11/15 - 16:38:41 | 200 |    133.8325ms |       127.0.0.1 | DELETE   "/api/delete"
[GIN] 2024/11/15 - 16:38:43 | 200 |          18µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 16:38:43 | 200 |       235.5µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/11/15 - 16:41:16 | 200 |        39.3µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 16:41:29 | 201 |    5.5687215s |       127.0.0.1 | POST     "/api/blobs/sha256:622429e8d31810962dd984bc98559e706db2fb1d40e99cb073beb7148d909d73"
[GIN] 2024/11/15 - 16:41:29 | 200 |      3.1334ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2024/11/15 - 16:42:48 | 200 |        77.5µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 16:42:48 | 200 |       784.1µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/11/15 - 16:43:01 | 200 |        18.4µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 16:43:01 | 404 |       422.2µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/11/15 - 16:43:02 | 200 |    1.2704652s |       127.0.0.1 | POST     "/api/pull"
```

@BruceMacD commented on GitHub (Nov 15, 2024):

Hi @Minxiangliu, it looks like you're referencing a file that exists inside a Docker container: `FROM "/build_docker/Llama-3.2/Llama-3.2-11B-Vision-Instruct-mmproj.f16.gguf"`. Is this file accessible from where the ollama server is running?
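As a quick sanity check before running `ollama create`, one can verify the `FROM` path is actually visible on the filesystem where the command runs (an illustrative helper, not part of Ollama; a file that only exists inside another container will not be found):

```python
import os

def check_modelfile_from(path):
    """Rough diagnostic for a Modelfile FROM path (illustrative helper).

    The FROM path is resolved on the filesystem of the machine (or
    container) where `ollama create` runs, so a file that lives only
    in another container is invisible here.
    """
    if os.path.isfile(path):
        return "ok"
    if os.path.isdir(os.path.dirname(path) or "."):
        return "directory exists but the file is missing"
    return "path not visible from this filesystem"
```

Running it against `/build_docker/Llama-3.2/...` from the host would distinguish "the directory is mounted but the file is missing" from "the whole path only exists inside the container".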


@Minxiangliu commented on GitHub (Nov 18, 2024):

Hi @BruceMacD,
Thank you for your reply, and apologies for my late response. After some testing, I was able to use the `llama3.2-vision` model on the ollama server. Among these [GGUF models](https://huggingface.co/leafspark/Llama-3.2-11B-Vision-Instruct-GGUF/tree/main), all of them work except the `mmproj.f16` model. Thank you! I'll continue testing with the models on the ollama server.


Reference: github-starred/ollama#65697