[GH-ISSUE #224] Can't create model from modelfile #94

Closed
opened 2026-04-12 09:38:05 -05:00 by GiteaMirror · 3 comments

Originally created by @ajstair on GitHub (Jul 27, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/224

Originally assigned to: @BruceMacD on GitHub.

I was able to build and run the docker image, but I'm having issues creating a model through the REST API.

I attempted to create a model using

```
curl -X POST http://localhost:11434/api/create -d '{"name": "llama2", "path": "/mnt/c/ollama/library/modelfiles/llama2"}'
```

where `/mnt/c/ollama/` is the project directory. That curl got the response:

```
{"status":"parsing modelfile"}
{"status":"looking for model"}
{"status":"pulling model file"}
{"status":"pulling manifest"}
{"error":"pull model manifest: Get \"https://../v2/models/llama-2-7b-chat.ggmlv3.q4_0.bin/manifests/latest\": dial tcp: lookup ..: no such host"}
```

I wasn't able to follow the logic of how model manifests are pulled... any idea what's going on here?

GiteaMirror added the bug label 2026-04-12 09:38:05 -05:00

@jmorganca commented on GitHub (Jul 27, 2023):

@ajstair looking into this, thanks for the issue!

<!-- gh-comment-id:1653605018 -->

@BruceMacD commented on GitHub (Jul 27, 2023):

Thanks for opening this issue @ajstair. Creating a model from an existing binary should work; it looks like ollama is having trouble finding the model binary specified in your `llama2` Modelfile.

Based on the error, the model binary is specified as `FROM models/llama-2-7b-chat.ggmlv3.q4_0.bin`. Is this path relative to the Modelfile location and accessible to the server? The directory structure it's expecting in this case should look something like this:

```
mnt
  -- c
    -- ollama
      -- library
        -- modelfiles
          -- llama2
          -- models
            -- llama-2-7b-chat.ggmlv3.q4_0.bin
```
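The layout above can be sanity-checked by resolving the relative `FROM` value against the Modelfile's directory. A minimal shell sketch, assuming (as suggested here) that relative `FROM` paths resolve against the directory containing the Modelfile:

```shell
# Resolve a relative FROM value against the Modelfile's directory.
# Paths are taken from this thread; the resolution rule is an assumption.
modelfile=/mnt/c/ollama/library/modelfiles/llama2
from_value=models/llama-2-7b-chat.ggmlv3.q4_0.bin

resolved="$(dirname "$modelfile")/$from_value"
echo "$resolved"
# -> /mnt/c/ollama/library/modelfiles/models/llama-2-7b-chat.ggmlv3.q4_0.bin

# Then check the binary actually exists and is readable by the server:
# test -r "$resolved" || echo "model binary missing"
```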
<!-- gh-comment-id:1653770566 -->

@ajstair commented on GitHub (Jul 28, 2023):

Thanks for your response. Your message made me realize that the models directory (and the binary) were indeed missing. The root cause seems to be "problem exists between chair and keyboard".

I went and dug around in the repo to better understand how the model binaries are downloaded. The problem was that I followed the build instructions here: https://github.com/jmorganca/ollama#building
and stopped after that. I failed to run `./ollama pull` to obtain the model before attempting to create the model with the `POST`.

The only thing I might suggest: adding an instruction on how to pull a model binary to the building section of the readme, so other similarly naive people don't make the same mistake :)

Thanks for your work on this project - keep it up!

<!-- gh-comment-id:1656419890 -->
Reference: github-starred/ollama#94