[GH-ISSUE #1039] Fail to load Custom Models #47018

Closed
opened 2026-04-28 02:43:14 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @tjlcast on GitHub (Nov 8, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1039

Hi,
I want to load a custom GGUF model, [TheBloke/deepseek-coder-6.7B-instruct-GGUF](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF).

My Modelfile is:

```
FROM ./deepseek-coder-6.7b-instruct.Q4_K_M.gguf
```

But when I build it, it reports an error:

```
% ollama create amodel -f ./Modelfile
parsing modelfile
looking for model
⠋ creating model layer  Error: invalid version
```

I am doing this on my old Mac (MacBook Air, 13-inch, Early 2015).

Could you help me solve this?
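The `Error: invalid version` message suggests the version field in the GGUF header was rejected by this Ollama build. A GGUF file starts with the 4-byte magic `GGUF` followed by a little-endian uint32 version number. A minimal Python sketch for inspecting that header (the helper name is illustrative, not part of Ollama):

```python
import struct

GGUF_MAGIC = b"GGUF"  # every GGUF file begins with these 4 bytes

def read_gguf_version(data: bytes) -> int:
    """Return the GGUF version from a file header, or raise ValueError."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file (bad magic; maybe a saved web page?)")
    # The 4 bytes after the magic are a little-endian uint32 version field.
    (version,) = struct.unpack_from("<I", data, 4)
    return version

# Synthetic header for illustration: magic + version 2.
header = GGUF_MAGIC + struct.pack("<I", 2)
print(read_gguf_version(header))  # → 2

# To inspect a real download, read the first 8 bytes of the file:
# with open("deepseek-coder-6.7b-instruct.Q4_K_M.gguf", "rb") as f:
#     print(read_gguf_version(f.read(8)))
```

If the version printed for the real file is newer than what the installed Ollama release understands, upgrading Ollama is the likely fix.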

Author
Owner

@Nan-Do commented on GitHub (Nov 8, 2023):

I have just created a model for the 33b model on my local machine and it worked just fine.

`deepseek.model`

```
FROM ./deepseek-coder-33b-instruct.Q4_K_M.gguf

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.2

# set the system prompt
TEMPLATE """{{ .System }}

### Instruction:
{{ .Prompt }}

### Response:
"""

SYSTEM """Below is an instruction that describes a task. Write a response that appropriately completes the request."""
```

`ollama create deepseek:33b -f deepseek.model`

```
parsing modelfile
looking for model
creating model layer
creating model template layer
creating model system layer
creating parameter layer
creating config layer
writing layer sha256:e76518575f9d367b0a04278cd027c51e53519506b4316b4d368e853a42bfe790
using already created layer sha256:2d836d77287d85ac3d2ea87f4d765db6aaabc98543442072111b3d9831cdf9f1
using already created layer sha256:1678ff0c9fe594005f222a18bf691d621729e87de57e32e4521974a1c9365a05
writing layer sha256:3343deb6401157bc04c57916fafb02774d8485eef8f969d4ed6f7ceaf90524e9
writing layer sha256:cbdc8e7144de42175ce2c56d5b8a52e4c42f136ebe3fde7a1ac7ee72f0ba9fbd
writing manifest
removing any unused layers
success
```

Downloading the file from Hugging Face can be misleading. Are you sure you downloaded the proper file?
Also, your laptop is really old; does it work with similar-sized models like llama2?

This should be a valid link:
https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
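A common failure mode behind the "misleading download" warning: saving the Hugging Face repo page (HTML) instead of the raw binary, which only the `/resolve/` URL serves. A quick, hypothetical sanity check (function name and thresholds are illustrative):

```python
import os

def sanity_check_download(path: str) -> str:
    """Heuristic check that a downloaded .gguf is the binary, not a saved web page."""
    with open(path, "rb") as f:
        head = f.read(16)
    if head.lstrip()[:1] in (b"<", b"{"):
        return "looks like an HTML/JSON page, not a model; re-download via the /resolve/ URL"
    size_gb = os.path.getsize(path) / 1e9
    # A Q4_K_M quantization of a 6.7B model should be on the order of 4 GB.
    return f"binary file, starts with {head[:4]!r}, {size_gb:.2f} GB on disk"
```

If the report shows an HTML page or a file far smaller than the expected quantization size, the download, not Ollama, is the problem.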

Author
Owner

@tjlcast commented on GitHub (Nov 8, 2023):

> I have just created a model for the 33b model on my local machine and it worked just fine.
>
> `deepseek.model`
>
> ```
> FROM ./deepseek-coder-33b-instruct.Q4_K_M.gguf
>
> # set the temperature to 1 [higher is more creative, lower is more coherent]
> PARAMETER temperature 0.2
>
> # set the system prompt
> TEMPLATE """{{ .System }}
>
> ### Instruction:
> {{ .Prompt }}
>
> ### Response:
> """
>
> SYSTEM """Below is an instruction that describes a task. Write a response that appropriately completes the request."""
> ```
>
> `ollama create deepseek:33b -f deepseek.model`
>
> ```
> parsing modelfile
> looking for model
> creating model layer
> creating model template layer
> creating model system layer
> creating parameter layer
> creating config layer
> writing layer sha256:e76518575f9d367b0a04278cd027c51e53519506b4316b4d368e853a42bfe790
> using already created layer sha256:2d836d77287d85ac3d2ea87f4d765db6aaabc98543442072111b3d9831cdf9f1
> using already created layer sha256:1678ff0c9fe594005f222a18bf691d621729e87de57e32e4521974a1c9365a05
> writing layer sha256:3343deb6401157bc04c57916fafb02774d8485eef8f969d4ed6f7ceaf90524e9
> writing layer sha256:cbdc8e7144de42175ce2c56d5b8a52e4c42f136ebe3fde7a1ac7ee72f0ba9fbd
> writing manifest
> removing any unused layers
> success
> ```
>
> Downloading the file from huggingface can be misleading. Are you sure you downloaded the proper file? Also, your laptop is really old does it work with similar sized models like llama2?
>
> This should be a valid link. https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf

@Nan-Do
Thank for your reply. I have checked my ModelFile and gguf file.
Maybe it is Ollama's problem. Can you provide me with your Ollama version?(ollama --version) And my version is 0.1.3

Author
Owner

@Nan-Do commented on GitHub (Nov 8, 2023):

@tjlcast I'm using version 0.1.8. The problem might be that llama.cpp cannot understand the model's format; try upgrading Ollama (and/or compiling llama.cpp by hand to check).

Author
Owner

@technovangelist commented on GitHub (Dec 4, 2023):

It's been a month since there has been any activity. The first error indicates you are on an older version; once you update, it should work. I will go ahead and close the issue now. If you think there is anything we left out, reopen and we can address it. Thanks for being part of this great community.

Reference: github-starred/ollama#47018