[GH-ISSUE #2753] Error: error loading model #27419

Closed
opened 2026-04-22 04:45:34 -05:00 by GiteaMirror · 44 comments

Originally created by @zbrkic on GitHub (Feb 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2753

Originally assigned to: @BruceMacD on GitHub.

First-time install, ollama version is 0.1.27. Trying to run a model, I get "Error loading model - No such file or directory", but the files exist:

time=2024-02-26T00:11:49.314+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\ELJKO~1\\AppData\\Local\\Temp\\ollama816527122\\cpu_avx2\\ext_server.dll"
time=2024-02-26T00:11:49.314+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_load: error loading model: failed to open C:\Users\Željko\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246: No such file or directory
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'C:\Users\Željko\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246'
{"timestamp":1708902709,"level":"ERROR","function":"load_model","line":388,"message":"unable to load model","model":"C:\\Users\\Željko\\.ollama\\models\\blobs\\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246"}
time=2024-02-26T00:11:49.314+01:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library C:\\Users\\ELJKO~1\\AppData\\Local\\Temp\\ollama816527122\\cpu_avx2\\ext_server.dll  error loading model C:\\Users\\Željko\\.ollama\\models\\blobs\\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef92"
[GIN] 2024/02/26 - 00:11:49 | 500 |    332.6058ms |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/02/26 - 00:15:10 | 200 |         509µs |       127.0.0.1 | GET      "/api/version"

Files exist:

PS C:\> ls C:\Users\Željko\AppData\Local\Temp\ollama816527122\cpu_avx2\ext_server.dll


    Directory: C:\Users\Željko\AppData\Local\Temp\ollama816527122\cpu_avx2


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----        26.2.2024.      0:03        2475456 ext_server.dll


PS C:\> ls C:\Users\Željko\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246


    Directory: C:\Users\Željko\.ollama\models\blobs


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----        26.2.2024.      0:05     3826781184 sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246

At some point the application tries to read from an invalid, mis-encoded path:

C:\Users\Ĺ˝eljko\.ollama\models\blobs\sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246

![slika](https://github.com/ollama/ollama/assets/11651634/f062fe7f-7d6f-4bd8-83b2-d67bee97b5ae)

It seems that Unicode letters are not supported in the path somewhere in the code. Is there a way to install the app in another folder until this is fixed?
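The "invalid folder" above is consistent with UTF-8 bytes being re-decoded in the Windows ANSI code page (cp1250 on a Croatian-locale system), which turns "Željko" into "Ĺ˝eljko". A minimal Python sketch of this suspected mechanism (an assumption about the cause, not Ollama's actual code):

```python
# Suspected failure mode (an assumption, not Ollama's actual code): a UTF-8
# path is handed to an API that reinterprets the raw bytes in the Windows
# ANSI code page (cp1250 on a Croatian locale), mangling non-ASCII letters.
username = "Željko"
mangled = username.encode("utf-8").decode("cp1250")
print(mangled)  # "Ĺ˝eljko", the same mangled name as in the invalid path
```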

GiteaMirror added the bug label 2026-04-22 04:45:34 -05:00

@skwolvie commented on GitHub (Feb 26, 2024):

I got the same error:

Here is what i did:

  1. I finetuned llama2-7b-hf
  2. I made it into gguf (after a lot of bug fixing)
  3. I created the Modelfile and loaded gguf model to ollama.
  4. ollama list shows it is in the list of models
  5. ollama run fails

C:\Users\31405.ISBDOMAIN1\Desktop>ollama list
NAME                            ID              SIZE    MODIFIED
llama2:latest                   78e26419b446    3.8 GB  3 hours ago
skwolvie_patent_v1:latest       fa698afb0826    7.2 GB  8 minutes ago

C:\Users\31405.ISBDOMAIN1\Desktop>ollama run skwolvie_patent_v1
Error: error loading model C:\Users\31405.ISBDOMAIN1\.ollama\models\blobs\sha256-0a623fb0634059b6ab0857a30f57eae9686729452cc8437dc95ca

@skwolvie commented on GitHub (Feb 26, 2024):

I see that the error is because the file name while loading and the file name that is actually stored are quite different. It is truncated.

Notice here error:

Error: error loading model C:\Users\31405.ISBDOMAIN1\.ollama\models\blobs\sha256-0a623fb0634059b6ab0857a30f57eae9686729452cc8437dc95ca

Notice here file name:

C:\Users\31405.ISBDOMAIN1\.ollama\models\blobs\sha256-0a623fb0634059b6ab0857a30f57eae9686729452cc8437dc95ca39368fad73c
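The truncation can be quantified: a full SHA-256 digest is 64 hex characters, while the name in the error message is shorter. A quick illustrative check (`digest_ok` is a hypothetical helper, not an Ollama function; the paths are copied from the messages above):

```python
# A well-formed blob name ends in sha256-<64 hex chars> (or sha256:<64 hex>
# on Linux/macOS). Checking the two paths quoted above shows the one in the
# error message is 11 characters short of a full digest.
import re

def digest_ok(path: str) -> bool:
    """True if the path ends in a complete 64-character SHA-256 digest."""
    return re.search(r"sha256[-:][0-9a-f]{64}$", path) is not None

from_error = r"C:\Users\31405.ISBDOMAIN1\.ollama\models\blobs\sha256-0a623fb0634059b6ab0857a30f57eae9686729452cc8437dc95ca"
on_disk = r"C:\Users\31405.ISBDOMAIN1\.ollama\models\blobs\sha256-0a623fb0634059b6ab0857a30f57eae9686729452cc8437dc95ca39368fad73c"

print(digest_ok(from_error))  # False: only 53 hex characters
print(digest_ok(on_disk))     # True: full 64-character digest
```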

@skwolvie commented on GitHub (Feb 27, 2024):

Just verified that the same issue exists on macOS as well: when we create a model from a GGUF file and then run it, it throws a "blob file not found" error.

However, if we don't load a GGUF and just download the standard models, there is no error.

Pushing the loaded GGUF model to the hub and then pulling it also produces the same error.


@hotsmile commented on GitHub (Feb 27, 2024):

> I see that the error is because the file name while loading and the file name that is actually stored are quite different. It is truncated.
>
> Notice here error:
>
> Error: error loading model C:\Users\31405.ISBDOMAIN1\.ollama\models\blobs\sha256-0a623fb0634059b6ab0857a30f57eae9686729452cc8437dc95ca
>
> Notice here file name:
>
> C:\Users\31405.ISBDOMAIN1\.ollama\models\blobs\sha256-0a623fb0634059b6ab0857a30f57eae9686729452cc8437dc95ca39368fad73c

Same for me! I used ollama to pull gemma:2b, but on run I got the error.


@skwolvie commented on GitHub (Feb 28, 2024):

@hotsmile what could be the fix?


@kime541200 commented on GitHub (Feb 28, 2024):

I got the same error, is there any solution?


@nonacosa commented on GitHub (Feb 29, 2024):

same error
![image](https://github.com/ollama/ollama/assets/14212375/6aef9154-8bb1-4fc7-96a2-53f1c4314ba3)


@saamerm commented on GitHub (Feb 29, 2024):

Seeing the same error as @nonacosa on a CentOS Linux computer running the latest ollama version for dolphin-phi


@kime541200 commented on GitHub (Feb 29, 2024):

For your reference, I updated Ollama to the latest version and the issue was fixed, but I'm still not sure why it happened.

OS: Windows 10 WSL2


@nonacosa commented on GitHub (Feb 29, 2024):

> Seeing the same error as @nonacosa on a CentOS Linux computer running the latest ollama version for dolphin-phi

After removing all models (ollama rm <model>) and clearing ~/.ollama/models/blobs/,
then repeating the install, it works now. Maybe you can try it? @saamerm

![image](https://github.com/ollama/ollama/assets/14212375/5490f396-a297-448a-a4a5-2f684147bf00)

@lilnick commented on GitHub (Feb 29, 2024):

Hiya, I'm another user with the same issue.
I have Ollama version 0.1.17 installed on another machine and do not get this error (WSL Ubuntu).
Newly installed 0.1.27 on a new machine, and I do get this error (bare-metal Debian).

You'll note that the last few characters are suspiciously missing from the terminal output.

Running fresh:

root@host:~# ollama --version
ollama version is 0.1.27
root@host:~# ollama run yarn-llama2:7b-128k
pulling manifest
pulling 4878cabf5227... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 3.8 GB
pulling 1639d5c1f004... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 18 B
pulling 8f22d62f4bee... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 307 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: error loading model
/usr/share/ollama/.ollama/models/blobs/sha256:4878cabf52270598be566c491cabf8ca467b71300367b89ec2f36736e8ddf

Checking whether the file exists; it does not:

root@host:~# stat /usr/share/ollama/.ollama/models/blobs/sha256:4878cabf52270598be566c491cabf8ca467b71300367b89ec2f36736e8ddf
stat: cannot statx '/usr/share/ollama/.ollama/models/blobs/sha256:4878cabf52270598be566c491cabf8ca467b71300367b89ec2f36736e8ddf': No such file or directory

Checking what files exist:

root@host:~# ls -lhrta /usr/share/ollama/.ollama/models/blobs/
total 3.6G
-rw-r--r-- 1 ollama ollama 3.6G Feb 29 15:27 sha256:4878cabf52270598be566c491cabf8ca467b71300367b89ec2f36736e8ddf020
-rw-r--r-- 1 ollama ollama   18 Feb 29 15:27 sha256:1639d5c1f0045b7fa5d7de7648663b5dac1c61ddbdd865ff7cc448cba84bb1e3
-rw-r--r-- 1 ollama ollama  307 Feb 29 15:27 sha256:8f22d62f4bee4765c0b0db4720d093d4ef4cdaa66dd2b6f81ec146c5614af232
drwxr-xr-x 2 ollama ollama 4.0K Feb 29 15:27 .
drwxr-xr-x 4 ollama ollama   36 Feb 29 15:27 ..

One file exists with the same name, aside from the last three additional characters.

Looking at the source, this error comes from ./llm/ext_server/ext_server.cpp, but I don't know enough to figure out whether the params.model variable is being altered or whether it was set incorrectly...

It seems that, between the tags v0.1.17 and v0.1.27, ./llm/ext_server/ext_server.cpp is new?
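One way to sanity-check a blob on disk is to hash it and compare against the digest embedded in its own filename, which would distinguish a truncated download from a mere lookup bug. A diagnostic sketch, not part of Ollama (`verify_blob` is a hypothetical helper):

```python
# Diagnostic sketch (not part of Ollama): a blob should hash to the digest
# embedded in its own filename, so re-hashing the file detects truncated or
# corrupted downloads.
import hashlib
from pathlib import Path

def verify_blob(path: Path) -> bool:
    """True if the file's SHA-256 matches the digest in its name."""
    expected = path.name.split("sha256", 1)[1].lstrip("-:")
    h = hashlib.sha256()
    with path.open("rb") as f:
        # hash in 1 MiB chunks so multi-GB model blobs don't need to fit in RAM
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected

# usage (path is an example):
# verify_blob(Path("/usr/share/ollama/.ollama/models/blobs/sha256:<digest>"))
```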


@gekiumasarada commented on GitHub (Mar 1, 2024):

I also hit the problem (Windows preview version).
I converted a model to GGUF format myself on my GPU server and moved the GGUF to my local Windows machine.
After that I created a Modelfile, but I faced the problem.

blobs\sha256-8277d34ba7ab4bd6a64ad57c5d510e439f66cad7bc910ee1441da45001b2a68
blobs\sha256-8277d34ba7ab4bd6a64ad57c5d510e439f66cad7bc910ee1441da45001b2a687

The last character is suspiciously missing from the terminal output.
Is there a good interim solution?


@Parici75 commented on GitHub (Mar 1, 2024):

Same error as @nonacosa and @saamerm; I tried removing all models and clearing ~/.ollama/models/blobs/, to no avail.

➜  ~ ollama pull deepseek-coder:6.7b
pulling manifest 
pulling 59bb50d8116b... 100% ▕████████████████▏ 3.8 GB                         
pulling a3a0e9449cb6... 100% ▕████████████████▏  13 KB                         
pulling 8893e08fa9f9... 100% ▕████████████████▏   59 B                         
pulling 8972a96b8ff1... 100% ▕████████████████▏  297 B                         
pulling 772f510b9558... 100% ▕████████████████▏  483 B                         
verifying sha256 digest 
writing manifest 
removing any unused layers 
success 
➜  ~ ollama run deepseek-coder:6.7b
Error: error loading model /Users/benroland/.ollama/models/blobs/sha256:59bb50d8116b6a1f9bfbb940d6bb946a05554e591e30c8c2429ed6c854867e

System info:

➜  ~ ollama --version
ollama version is 0.1.27
➜  ~ sw_vers
ProductName:		macOS
ProductVersion:		13.6.1
BuildVersion:		22G313

@guilhermecomum commented on GitHub (Mar 2, 2024):

Same here.
Mac OS Ventura 13.6
ollama 0.1.27 homebrew


@NeevJewalkar commented on GitHub (Mar 2, 2024):

I have been facing this issue since today morning.
Could this be the result of a new update?


@cyriltw commented on GitHub (Mar 3, 2024):

~~+1 on this issue.~~

Here is my workaround to get it fixed: I downloaded the latest version of Ollama (I'm on Mac, so I downloaded the standalone), then removed the old models that were downloaded and re-pulled. It seems to be working now. The version I downloaded is 0.1.27.


@eggsyntax commented on GitHub (Mar 3, 2024):

I was having the same error, and found that it was sufficient to quit the main ollama app (from the menu bar) and restart it (from spotlight). Then I can ollama run <some-model> with no problems.


@RicooSuave commented on GitHub (Mar 4, 2024):

> I was having the same error, and found that it was sufficient to quit the main ollama app (from the menu bar) and restart it (from spotlight). Then I can ollama run <some-model> with no problems.

This has also fixed my issue haha, good work!


@eggsyntax commented on GitHub (Mar 4, 2024):

I'm a professional programmer, so "[Have you tried turning it off and on again?](https://www.youtube.com/watch?v=nn2FB1P_Mn8)" is burned into my very soul 😁


@JulienBreux commented on GitHub (Mar 5, 2024):

Same error here :)

## Solution

Just reinstall Ollama 😄

## Version

~ ❯❯❯ ollama --version
ollama version is 0.1.27

~ ❯❯❯ sw_vers
ProductName:		macOS
ProductVersion:		14.3.1
BuildVersion:		23D60

## Error with Gemma in 7b

~ ❯❯❯ ollama run gemma:7b
pulling manifest
pulling 456402914e83... 100% ▕████████████████▏ 5.2 GB
pulling 097a36493f71... 100% ▕████████████████▏ 8.4 KB
pulling 109037bec39c... 100% ▕████████████████▏  136 B
pulling 22a838ceb7fb... 100% ▕████████████████▏   84 B
pulling a443857c4317... 100% ▕████████████████▏  483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: error loading model /Users/🤓/.ollama/models/blobs/sha256:456402914e838a953e0cf80caa6adbe75383d9e63584a964f504a7bbb8f7

## Error with Mistral

~ ❯❯❯ ollama run mistral
pulling manifest
pulling e8a35b5937a5... 100% ▕████████████████▏ 4.1 GB
pulling 43070e2d4e53... 100% ▕████████████████▏  11 KB
pulling e6836092461f... 100% ▕████████████████▏   42 B
pulling ed11eda7790d... 100% ▕████████████████▏   30 B
pulling f9b1e3196ecf... 100% ▕████████████████▏  483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: error loading model /Users/🤓/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b05

@cmingxu commented on GitHub (Mar 5, 2024):

Same for me on Mac, ollama version is 0.1.27, even after I rm / pull the model again.


@nicaudinet commented on GitHub (Mar 5, 2024):

Same for me (macOS) with Ollama 0.1.27. Got it to work by:

  • ollama rm mistral
  • ollama pull mistral
  • Quitting ollama and opening again
  • ollama run mistral

Not sure if the rm / pull was even necessary


@guilhermecomum commented on GitHub (Mar 5, 2024):

If you are using homebrew, don't forget to stop the ollama service.

  • brew services stop ollama
  • rm -rf ~/.ollama
  • brew services start ollama
  • ollama pull <model>

@skwolvie commented on GitHub (Mar 5, 2024):

@here Is this issue resolved?


@Kota1609 commented on GitHub (Mar 5, 2024):

I'm using Mac; I just uninstalled and reinstalled the Ollama app and it worked.


@skwolvie commented on GitHub (Mar 6, 2024):

I tried and am still getting the same error. Which version did you install?


@RamiKassouf commented on GitHub (Mar 6, 2024):

I'm running ollama from cmd with no GUI; creating a model from a gguf or bin results in the same error as above.
I've tried removing the model and recreating it, and stopping ollama serve and rerunning it, to no avail.
Please, any updates on a fix?


@JulienBreux commented on GitHub (Mar 6, 2024):

@RamiKassouf @skwolvie @skwolvie actually, you need to uninstall and install Ollama from your system.


@zbrkic commented on GitHub (Mar 6, 2024):

Start/stop/uninstall/install - nothing helped.


@RamiKassouf commented on GitHub (Mar 6, 2024):

> @RamiKassouf @skwolvie @skwolvie actually, you need to uninstall and install Ollama from your system.

Is that after I create the model, or at some point in the process?


@RamiKassouf commented on GitHub (Mar 7, 2024):

> @RamiKassouf @skwolvie @skwolvie actually, you need to uninstall and install Ollama from your system.

I tried. It still didn't work.


@JulienBreux commented on GitHub (Mar 7, 2024):

Sorry friends, it worked for me.

Maybe @BruceMacD has an idea.


@BruceMacD commented on GitHub (Mar 7, 2024):

Hey everyone, sorry about this. I'm looking to reproduce this now and get it fixed. Thanks to everyone for the details. The error here is vague, so it looks like there are maybe 3-4 potential problems here.

  1. Unsupported Unicode characters in the path cause models to fail to load. I've reproduced this one, and it seems to be Windows-specific; I'll be fixing it.
  • workaround: set OLLAMA_MODELS to a path that does not include a Unicode character until the fix is in.
  2. Unsupported model imported into Ollama. If you import a model which cannot be converted into gguf format, it may result in the error loading model message at runtime. I'll see if I can catch this earlier and return a better error.
  3. The most common problem seems to be previously downloaded models not working, sometimes with the filename being incorrect. This is the hard one for me to track down at the moment. I'll update again when I find something.
  • workaround: try ollama pull $MODEL_NAME
  4. An old Ollama version that can't run some newer models (this may be the case for some Gemma issues); this is why re-installing works for some people.
  • workaround: update Ollama -> https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-upgrade-ollama
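For the Unicode-path workaround, a quick way to check whether a candidate OLLAMA_MODELS location is safe is to verify it contains only ASCII characters. A tiny illustrative check (`is_ascii_path` is a hypothetical helper, not an Ollama function; the paths are examples):

```python
# Illustrative check for the OLLAMA_MODELS workaround: the chosen path should
# contain only ASCII characters until the Unicode fix lands.
def is_ascii_path(path: str) -> bool:
    """True if every character in the path is plain ASCII."""
    return path.isascii()

print(is_ascii_path(r"C:\Users\Željko\.ollama\models"))  # False: hits the bug
print(is_ascii_path(r"C:\ollama-models"))                # True: safe location
```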
<!-- gh-comment-id:1983934841 --> @BruceMacD commented on GitHub (Mar 7, 2024): Hey everyone, sorry about this. I'm looking to reproduce this now and get it fixed. Thanks to everyone for the details. The error here is vague so it like there are maybe 3-4 potential problems here. 1. Unsupported unicode characters in the path cause models to not be able to load. I've reproduced this one, and it seems to be Windows specific, I'll be fixing this one. - workaround: Set `OLLAMA_MODELS` to a path that does not include a unicode character until the fix is in. 2. Unsupported model imported into Ollama. If you import a model which cannot be converted into gguf format, it may result in the `error loading model` message at runtime. I'll see if I can catch the earlier and return a better error. 4. The most common problem seems to be previously downloaded models not working, sometimes with the filename being incorrect. This is the hard one for me to track down at the moment. I'll update again when I find something. - workaround: try `ollama pull $MODEL_NAME` 5. Old ollama version that can't run some newer models (this may be the case for some Gemma issues) this is why re-install is working for some people. - workaround: update Ollama -> https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-upgrade-ollama

@BruceMacD commented on GitHub (Mar 7, 2024):

If anyone else hits this error please try stopping ollama and running the ollama server directly with debug on.
powershell:

```
$env:OLLAMA_DEBUG=1; ollama serve
```

command prompt:

```
set "OLLAMA_DEBUG=1" && ollama serve
```

Then if you see any suspicious logs paste them here. Might help me out.

<!-- gh-comment-id:1984287423 -->

@FotieMConstant commented on GitHub (Mar 7, 2024):

yeah, i see some suspicious log:(

![image](https://github.com/ollama/ollama/assets/42372656/256ee6e1-11fe-4cfa-9558-d09b3def0ccb)
<!-- gh-comment-id:1984363369 -->

@BruceMacD commented on GitHub (Mar 7, 2024):

related for unicode path support: https://github.com/ggerganov/llama.cpp/pull/5927

<!-- gh-comment-id:1984561754 -->

@JulienBreux commented on GitHub (Mar 8, 2024):

> related for unicode path support: https://github.com/ggerganov/llama.cpp/pull/5927

Big thanks, Bruce, for your follow-up!

<!-- gh-comment-id:1985181001 -->

@FotieMConstant commented on GitHub (Mar 8, 2024):

> related for unicode path support: ggerganov/llama.cpp#5927

I don't get it; I am on macOS and the PR is related to Windows, I think...

<!-- gh-comment-id:1985291199 -->

@bmaciag commented on GitHub (Mar 8, 2024):

> same error *(screenshot)*

same issue here on windows with llama2

<!-- gh-comment-id:1985734564 -->

@karpovas1505 commented on GitHub (Mar 10, 2024):

Hi all,
I have the same issue on Windows 10 Pro 64-bit.

<!-- gh-comment-id:1987160172 -->

@CROmartin commented on GitHub (Mar 12, 2024):

Hi everybody, I faced the same issue, but in my case the model worked fine until now:
"ollama run gemma:latest
Error: error loading model /Users/martinstaresincic/.ollama/models/blobs/sha256:456402914e838a953e0cf80caa6adbe75383d9e63584a964f504a"

I was not too fond of deleting models and redownloading, as suggested many times on online forums. A quick fix for me: I re-downloaded the Ollama installer, reran it, and things started working properly again.

<!-- gh-comment-id:1992620437 -->

@xiaods commented on GitHub (Mar 13, 2024):

![Image_20240313180930](https://github.com/ollama/ollama/assets/37678/11c1669b-8236-4f62-9d82-ef3d4f2459bd)

```
C:\Users\kingdee.DESKTOP-7IJNE2P>ollama -v
ollama version is 0.1.28
```

Win10

<!-- gh-comment-id:1994019761 -->

@BruceMacD commented on GitHub (Mar 13, 2024):

> Hey everyone, sorry about this. I'm looking to reproduce this now and get it fixed. Thanks to everyone for the details. The error here is vague, so it looks like there are maybe 3-4 potential problems here.
>
> 1. Unsupported unicode characters in the path cause models to not be able to load. I've reproduced this one, and it seems to be Windows specific, I'll be fixing this one.
>    • workaround: Set OLLAMA_MODELS to a path that does not include a unicode character until the fix is in.
> 2. Unsupported model imported into Ollama. If you import a model which cannot be converted into gguf format, it may result in the error loading model message at runtime. I'll see if I can catch this earlier and return a better error.
> 3. The most common problem seems to be previously downloaded models not working, sometimes with the filename being incorrect. This is the hard one for me to track down at the moment. I'll update again when I find something.
>    • workaround: try ollama pull $MODEL_NAME
> 4. Old ollama version that can't run some newer models (this may be the case for some Gemma issues); this is why a re-install is working for some people.
>    • workaround: update Ollama -> https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-upgrade-ollama

Tracked down some more of these issues.
3. filename being incorrect on failure to load -> this is getting cut off by the number of characters allocated to the error message, and is not the root problem when the model is failing to run.

This is also the error message that would be displayed if you did not have enough memory to run the model. This could explain the on/off occurrence for some people. @xiaods this is probably what is happening in your case, as yi-6b-200k requires around 12GB of RAM in my test to accommodate the large context window.

I've merged #3065, and it will be in the upcoming release. This change will relay the proper error message so we can start debugging these individual problems. I'm going to resolve this issue for now, as it is getting many unrelated load-time errors grouped together, and ask people to open new issues when they see them so we can properly segment the problems. Apologies to everyone for the confusion here.

workaround in the meantime:
If you'd like to find the root of the error before the next release (which is soon), please stop any running ollama instances and start ollama in debug mode, e.g. OLLAMA_DEBUG=1 ollama serve. This should log the actual error.

<!-- gh-comment-id:1995216493 -->

@Praisethefab commented on GitHub (Mar 25, 2024):

With Windows 10, the "Unsupported unicode characters in the path cause models to not be able to load" problem is still present; at least, changing the OLLAMA_MODELS directory to not include the unicode character "ò" that it previously contained made it work. The model was up to date, as it was my first time downloading this software and the model I had just installed was llama2. To avoid waiting on downloads, I also migrated the hashes so that I didn't have to redownload them all, and they worked perfectly, so I am sure it is the "Unsupported unicode" problem.

<!-- gh-comment-id:2019083400 -->
Reference: github-starred/ollama#27419