[GH-ISSUE #10083] can't use ollama create to load gguf models #53120

Closed
opened 2026-04-29 02:00:21 -05:00 by GiteaMirror · 3 comments

Originally created by @ngwarrencinyen on GitHub (Apr 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10083

Issue: Error When Using ollama create to Load a Model

Description

When attempting to use the ollama create API to load a model, the following error is returned:

{"error":"path or Modelfile are required"}

Environment

  • Ollama Version: 0.5.4
  • Docker Compose Service: ollama running on localhost:8015

Steps to Reproduce

  1. Generate SHA-256 Checksum (see the combined sketch after this list):

    sha256sum {model path}
    

    Example:

    sha256sum Llama-3.2-3B-Instruct-Q4_K_M.gguf
    

    Output:

    6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff  Llama-3.2-3B-Instruct-Q4_K_M.gguf
    
  2. Push Blob to Ollama:

    curl -T Llama-3.2-3B-Instruct-Q4_K_M.gguf -X POST http://localhost:8015/api/blobs/sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff
    
  3. Verify Blob Exists:

    curl -I http://localhost:8015/api/blobs/sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff
    

    Output:

    HTTP/1.1 200 OK
    Date: Thu, 03 Apr 2025 06:43:44 GMT
    
  4. Use ollama create to Load the Model:

    curl http://localhost:8015/api/create -d '{
      "model": "onepiece",
      "files": {
        "Llama-3.2-3B-Instruct-Q4_K_M.gguf": "sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff"
      },
      "template": "{{- if .System }}\n<|system|>\n{{ .System }}\n</s>\n{{- end }}\n<|user|>\n{{ .Prompt }}\n</s>\n<|assistant|>",
      "parameters": {
          "temperature": 0.2,
          "num_ctx": 8192,
          "stop": ["<|system|>", "<|user|>", "<|assistant|>", "</s>"]
      },
      "system": "You are Luffy from One Piece, acting as an assistant."
    }'
    
  5. Error Output:

    {"error":"path or Modelfile are required"}
    
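
For convenience, steps 1 through 3 can be combined so the digest never has to be copied by hand. A minimal bash sketch (assuming GNU coreutils' sha256sum and the same endpoint as above):

    #!/usr/bin/env bash
    set -euo pipefail

    MODEL=Llama-3.2-3B-Instruct-Q4_K_M.gguf
    HOST=http://localhost:8015

    # Derive the blob digest directly from the file.
    DIGEST="sha256:$(sha256sum "$MODEL" | cut -d' ' -f1)"

    # Upload the GGUF as a blob; the API docs describe a 201 Created response on success.
    curl -T "$MODEL" -X POST "$HOST/api/blobs/$DIGEST"

    # Confirm the blob exists (expect HTTP 200) before calling /api/create.
    curl -I "$HOST/api/blobs/$DIGEST"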

Expected Behavior

The model should be successfully created and loaded into the Ollama server.


Actual Behavior

The server returns the following error:

{"error":"path or Modelfile are required"}

Additional Information

  • Blob Verification Screenshot:
    https://github.com/user-attachments/assets/03d1fe7f-6c75-4f03-9ab7-ea889b80b7c2

  • Command Used:
    The same /api/create request shown in step 4 above.
  • Model Reference:
    The model used for this process is available at https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q4_K_M.gguf.


Questions

  1. Is the path or modelfile field required in the /api/create payload? If so, how should it be structured?
  2. Are there additional steps or configurations needed to successfully create a model from a GGUF file?

Request for Assistance

Any guidance on how to resolve this issue and successfully create a model using the /api/create endpoint is greatly appreciated!

OS

Docker

GPU

Intel

CPU

Intel

Ollama version

0.5.4

GiteaMirror added the bug label 2026-04-29 02:00:21 -05:00

@rick-github commented on GitHub (Apr 2, 2025):

Upgrade ollama. The format for /api/create has changed.
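
A quick sanity check before reworking payloads is to ask the server which build is actually answering; /api/version is a long-standing endpoint and is available in 0.5.4 as well:

    curl http://localhost:8015/api/version

It returns a small JSON object such as {"version":"0.5.4"}. Only releases newer than 0.5.4 (the ones the current docs describe) understand the files-based /api/create payload used in this issue.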


@ngwarrencinyen commented on GitHub (Apr 3, 2025):

> Upgrade ollama. The format for /api/create has changed.

@rick-github Thanks! Appreciate the guidance.

Is it still possible to use the old format to create and load custom gguf models?

Currently I am using this repo: https://github.com/intel/ipex-llm to run Ollama on an Intel GPU, and its latest supported Ollama version is v0.5.4.


@rick-github commented on GitHub (Apr 3, 2025):

https://github.com/ollama/ollama/blob/v0.5.4/docs/api.md#create-a-model
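
Per those v0.5.4 docs, the old create API takes the Modelfile contents in a modelfile field (or a server-side path). The old CLI rewrote local FROM paths to @sha256:<digest> references to uploaded blobs, so a request along these lines should work on 0.5.4 (a sketch only, not verified; treat the @sha256: syntax as an assumption to check against the linked docs, and note that TEMPLATE and the remaining parameters are omitted for brevity):

    curl http://localhost:8015/api/create -d '{
      "model": "onepiece",
      "modelfile": "FROM @sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff\nSYSTEM You are Luffy from One Piece, acting as an assistant.\nPARAMETER temperature 0.2\nPARAMETER num_ctx 8192"
    }'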

Reference: github-starred/ollama#53120