[GH-ISSUE #756] Mistral - Failed To Load Model #62395

Closed
opened 2026-05-03 08:46:57 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @mattdavenport on GitHub (Oct 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/756

I'm running macOS Ventura 13.0.1 on a 16-inch M1 (2021) machine. I can run all of the llama2 models just fine, but the following occurs when attempting to run the mistral model:

```
~ % ollama pull mistral:latest
pulling manifest
pulling 6ae280299950... 100% |████████████████| (4.1/4.1 GB, 48 MB/s)
pulling fede2d8d6c1f... 100% |████████████████| (29/29 B, 194 kB/s)
pulling b96850d2e482... 100% |████████████████| (307/307 B, 1.4 MB/s)
verifying sha256 digest
writing manifest
success
~ % ollama run mistral:latest
>>> Hello
Error: failed to load model
```

If this is still a work in progress, please close this issue. The only other information I could find is the following log entries:

```
2023/10/11 10:25:47 images.go:1093: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/6a/6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20231011%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20231011T142546Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=386696851dae4763d830fc88c05381be653dab1e21243686e3180c01011644b6
2023/10/11 10:27:13 images.go:1061: success getting sha256:6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054
2023/10/11 10:27:14 images.go:1093: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/fe/fede2d8d6c1f404b1db73b1cd26f7d5455ff2deeb737b5e2b339339dce2969d4/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20231011%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20231011T142714Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=4b1d208c4dcb6b20ae9727869c284c8ec0f77ee382ee975d96f50f1c358047e7
2023/10/11 10:27:14 images.go:1061: success getting sha256:fede2d8d6c1f404b1db73b1cd26f7d5455ff2deeb737b5e2b339339dce2969d4
2023/10/11 10:27:15 images.go:1093: redirected to: https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/b9/b96850d2e482b0d1af356eda4ac158af93e9b00e71363a9173d7b5480680bcf3/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2F20231011%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20231011T142715Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=a5cb6b9a865ca7264746ec67325b71ad80987c800bbeeadd43eef75a6e0363bc
2023/10/11 10:27:15 images.go:1061: success getting sha256:b96850d2e482b0d1af356eda4ac158af93e9b00e71363a9173d7b5480680bcf3
[GIN] 2023/10/11 - 10:27:18 | 200 |         1m33s |       127.0.0.1 | POST     "/api/pull"
llama.cpp: loading model from /Users/mattdavenport/.ollama/models/blobs/sha256:6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054
error loading model: unknown (magic, version) combination: 46554747, 00000002; is this really a GGML file?
llama_load_model_from_file: failed to load model
[GIN] 2023/10/11 - 11:04:20 | 500 |    3.950083ms |       127.0.0.1 | POST     "/api/generate"
```
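As a side note on the error above: the reported magic `46554747`, read back as little-endian ASCII bytes, spells `GGUF` — i.e. the downloaded blob is a GGUF (version 2) model, which an older bundled llama.cpp expecting the legacy GGML magic cannot load. A minimal sketch of decoding such a header (the `read_model_header` helper is hypothetical, for illustration only; it assumes the GGUF layout of a 4-byte magic followed by a little-endian uint32 version):

```python
import struct
import tempfile

def read_model_header(path):
    # Hypothetical helper: GGUF files begin with the 4-byte magic
    # b"GGUF" followed by a little-endian uint32 format version.
    with open(path, "rb") as f:
        magic = f.read(4)
        version = struct.unpack("<I", f.read(4))[0]
    return magic, version

# The magic from the error message, unpacked as little-endian bytes:
print(struct.pack("<I", 0x46554747))  # b'GGUF'

# Demonstrate on a synthetic 8-byte header (magic + version 2):
with tempfile.NamedTemporaryFile(suffix=".gguf", delete=False) as f:
    f.write(b"GGUF" + struct.pack("<I", 2))
    path = f.name

print(read_model_header(path))  # (b'GGUF', 2)
```

This is consistent with the `(magic, version)` pair `46554747, 00000002` in the log: the file itself is fine, it is simply a newer format than the loader understands.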

Please let me know if I can provide any additional information here to help debug. Thanks!

Author
Owner

@jmorganca commented on GitHub (Oct 11, 2023):

Hi @mattdavenport, may I ask which version of Ollama you are running? `mistral` requires 0.19 or above – sorry that's not more obvious! You can get the latest version [here](https://ollama.ai/download).

I'll close this for now, but keep the feedback coming (or re-open if you're still seeing this after updating).

Author
Owner

@mattdavenport commented on GitHub (Oct 11, 2023):

🤦 That was it. Thanks for the tip, and I appreciate all the great work you've done here!


Reference: github-starred/ollama#62395