[GH-ISSUE #15515] sendmeaiohyeah/whisper-large-v2:latest Error: 500 Internal Server Error: unable to load model #56428

Closed
opened 2026-04-29 10:48:58 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @sterlp on GitHub (Apr 12, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15515

What is the issue?

Not sure if it is a problem with my Mac or with Ollama, but I have trouble loading the whisper models.

Relevant log output

❯ ollama pull sendmeaiohyeah/whisper-large-v2:latest
pulling manifest 
pulling 2862bb0d66a0: 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏ 890 MB                         
pulling e33f46c4dbc3: 100% ▕██████████████████████████████████████████████████████████████████████████████████████████▏  270 B                         
verifying sha256 digest 
writing manifest 
success 
❯ ollama run sendmeaiohyeah/whisper-large-v2:latest
Error: 500 Internal Server Error: unable to load model: /Users/paul/.ollama/models/blobs/sha256-2862bb0d66a0fe2799de099ee5fdb242449e16f1eaee1091677112d7ca2859c8

Direct usage of the API:

Transcription failed
Transcription request failed (500): {"error":{"message":"unable to load model: /Users/paul/.ollama/models/blobs/sha256-2862bb0d66a0fe2799de099ee5fdb242449e16f1eaee1091677112d7ca2859c8","type":"api_error","param":null,"code":null}}
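The thread does not show the exact request that produced this error. A minimal sketch of how such a call might look, assuming Ollama's HTTP API on the default port 11434 and an OpenAI-style `/v1/audio/transcriptions` endpoint (both the endpoint path and the `sample.wav` filename are assumptions inferred from the OpenAI-style error payload above, not confirmed by the log):

```shell
# Hypothetical reproduction of the failing transcription call.
# The endpoint path and form fields are guesses based on the
# OpenAI-compatible error JSON; adjust to the client actually used.
curl -s http://localhost:11434/v1/audio/transcriptions \
  -F model="sendmeaiohyeah/whisper-large-v2:latest" \
  -F file="@sample.wav"
```

With the server in the state described above, this would return the same `unable to load model` 500 response, since the failure happens at model load time rather than in the request itself.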

OS

MacOS 15.7.3 (24G419)

GPU

M1

CPU

M1

Ollama version

Version 0.20.5 (0.20.5)

GiteaMirror added the bug label 2026-04-29 10:48:58 -05:00

@rick-github commented on GitHub (Apr 12, 2026):

Ollama does not currently support whisper.


@sterlp commented on GitHub (Apr 12, 2026):

Confirmed - I couldn't get any whisper model to run. `gemma4:e2b` works fine.


@sterlp commented on GitHub (Apr 12, 2026):

Would it make sense to create a feature request?

Reference: github-starred/ollama#56428