[GH-ISSUE #1605] Failed to Load Model Error in Ollama 0.0.0 #889

Closed
opened 2026-04-12 10:33:38 -05:00 by GiteaMirror · 8 comments

Originally created by @mariusraupach on GitHub (Dec 19, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1605

Description

I encountered an issue while trying to load a model in Ollama. The error message received is:

"Error: llama runner: failed to load model '/Users/mariusraupach/.ollama/models/blobs/sha256:bdb11b0699e03d791f0accd97279989d810d79615c6cf5ac21fb68e8f33e8ca3': this model may be incompatible with your version of Ollama. If you previously pulled this model, try updating it by running ollama pull dolphin-mixtral:latest"

Additionally, when checking the version of Ollama with `ollama -v`, the response was:

"ollama version is 0.0.0
Warning: client version is 0.1.16"
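The mismatch in that output is itself a clue: the client binary is 0.1.16, but the server answering it reports 0.0.0, which is the version string of a from-source build. A rough diagnostic sketch (assuming the default port 11434 and standard install paths):

```sh
# List every ollama binary on the PATH; a leftover self-built or
# Homebrew copy can shadow the one installed by the Mac app.
which -a ollama

# Show the client and server versions the CLI sees.
ollama -v

# Identify the process actually listening on Ollama's default port.
lsof -nP -iTCP:11434 -sTCP:LISTEN
```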

Reproduction Steps

Steps to Reproduce:

  1. Run `ollama pull dolphin-mixtral:latest` to update the model.
  2. Attempt to run the model in Ollama (concrete commands sketched below).
  3. The error occurs during model loading.
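Concretely, the steps above amount to something like this (model name taken from the error message; your tag may differ):

```sh
ollama pull dolphin-mixtral:latest   # re-pull the model
ollama run dolphin-mixtral:latest    # fails at load time with the error above
```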

Expected vs Actual Behavior

Expected Behavior:
The model should load successfully after being updated.

Actual Behavior:
The model fails to load with an error indicating potential incompatibility with the current Ollama version.

Environment Details

Environment:

  • Chip: Apple M1 Max
  • Operating System: macOS Sonoma Version 14.2
  • Ollama Version: 0.0.0 (client version 0.1.16)

Attempted Solutions

I've tried updating the model as suggested by the error message, but the issue persists.


@technovangelist commented on GitHub (Dec 19, 2023):

Hi there

Version 0 indicates that you built Ollama yourself. You need to either rebuild the executable using the latest bits, or remove that version and install with the Mac installer, which has the benefit of keeping you up to date.
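As a rough cleanup sketch (paths are typical, not guaranteed; check the `which -a` output before removing anything, and only use the brew commands if Homebrew was involved):

```sh
# See every ollama on the PATH; self-built or brew copies often live
# in /usr/local/bin, /opt/homebrew/bin, or ~/go/bin.
which -a ollama

# If a Homebrew copy exists, stop its background service and remove it.
brew services stop ollama
brew uninstall ollama

# Then install the Mac app from the Ollama website and re-check:
ollama -v
```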

Does that make sense?

Thanks so much for being part of this community.


@mariusraupach commented on GitHub (Dec 19, 2023):

Hi @technovangelist, thank you for your prompt response.

Initially, I installed Ollama using Homebrew. However, after encountering the issue, I uninstalled it and then reinstalled it using the official Mac installer from the Ollama website.

The environment details I provided are from this latest installation via the Mac installer.

Could you please advise if there's a specific directory where the "server" version of Ollama is stored?
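One way to answer that is to locate the running server process rather than a directory (a sketch; process paths vary by install method):

```sh
# List running ollama processes with the full path of each binary.
pgrep -fl ollama

# Alternatively, via ps; the Mac app runs its server from inside
# Ollama.app, while a Homebrew service runs the brew-installed binary.
ps aux | grep '[o]llama'
```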


@actuday6418 commented on GitHub (Dec 19, 2023):

@technovangelist I have a similar issue where it returns `Error: invalid file magic`. This happens when using the `orca2` model (and others) that is supposed to work with my version (0.1.11) according to the release page. My install is from Nix.
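For what it's worth, "invalid file magic" means the first bytes of the model blob are not a header the runner recognizes (current GGUF-format models start with the ASCII magic GGUF; older formats use a different magic). A quick check, with the blob path below as a placeholder to be taken from your own error message:

```sh
# Print the first 4 bytes of the blob; GGUF models begin with "GGUF".
head -c 4 ~/.ollama/models/blobs/'sha256:<digest>'; echo
```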


@mariusraupach commented on GitHub (Dec 19, 2023):

I successfully resolved the error I was encountering. The issue stemmed from the version of the software I installed via Homebrew, which was unexpectedly blocking a network port. This became evident after I reviewed the log file located at ~/.ollama/logs/server.log, where I discovered the following error message:

Error: listen tcp 127.0.0.1:11434: bind: address already in use

Once I identified the problem, I terminated the process that was holding the port. After that, I checked the Ollama version with `ollama -v`:

ollama version is 0.1.16
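For anyone hitting the same "address already in use" symptom, a sketch of the fix described above (assuming the default port 11434, and the brew command only if Homebrew was involved):

```sh
# Find and stop whatever is bound to Ollama's default port.
lsof -ti tcp:11434 | xargs kill

# If the stale copy came from Homebrew, stop its service so it
# doesn't respawn.
brew services stop ollama

# Restart the Ollama app, then confirm client and server match.
ollama -v
```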


@technovangelist commented on GitHub (Dec 19, 2023):

OK, so now you have a working version. Are you still experiencing the problem you opened the issue about?


@louis-thevenet commented on GitHub (Dec 19, 2023):

> @technovangelist I have a similar issue where it returns `Error: invalid file magic`. This happens when using the `orca2` model (and others) that is supposed to work with my version (0.1.11) according to the release page. My install is from Nix.

The Orca2 model was updated 3 weeks ago, which is after version 0.1.11 was released.

You can get the 0.1.17 version of Ollama from the nixpkgs master channel.

So a solution would be to switch to the master channel while waiting for it to be merged with the channel you usually follow. 😃
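An untested sketch of that workaround, assuming flakes are enabled and that nixpkgs master carries the newer package:

```sh
# Run ollama straight from nixpkgs master without switching channels:
nix run github:NixOS/nixpkgs/master#ollama -- -v

# Or make it available in a temporary shell:
nix shell github:NixOS/nixpkgs/master#ollama
```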


@Kota1609 commented on GitHub (Mar 5, 2024):

I'm using a Mac; I just uninstalled and reinstalled the Ollama app and it worked.


@andeplane commented on GitHub (Mar 11, 2024):

Also using a Mac; I did a `killall ollama` and restarted it. That worked!

Reference: github-starred/ollama#889