[GH-ISSUE #53] ollama run shows no error if the model failed to load #21

Closed
opened 2026-04-12 09:33:41 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @jmorganca on GitHub (Jul 7, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/53

```
2023/07/07 11:30:34 routes.go:145: Listening on 127.0.0.1:11434
llama.cpp: loading model from /Users/jmorgan/Downloads/gpt2-medium-alpaca-355m-ggml-f16.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
Loading the model failed: failed loading model
```

On the client, `ollama run` keeps spinning.
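For illustration only, here is a minimal Go sketch of the kind of behavior being asked for: when model loading fails, the server returns an error payload on the generate endpoint so the client can stop spinning and print it. This is not ollama's actual code; the names `loadModel` and `generateHandler` are hypothetical stand-ins.

```go
package main

import (
	"encoding/json"
	"errors"
	"log"
	"net/http"
)

// loadModel stands in for the real llama.cpp load step; here it always
// fails, mimicking the "unexpectedly reached end of file" case from the log.
func loadModel(path string) error {
	return errors.New("failed loading model: unexpectedly reached end of file")
}

// generateHandler reports a load failure to the client as a JSON error
// with a non-200 status, instead of leaving the response stream open.
func generateHandler(w http.ResponseWriter, r *http.Request) {
	if err := loadModel("/path/to/model.bin"); err != nil {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusInternalServerError)
		json.NewEncoder(w).Encode(map[string]string{"error": err.Error()})
		return
	}
	// ...streaming of generated tokens would happen here on success...
}

func main() {
	http.HandleFunc("/api/generate", generateHandler)
	log.Println("Listening on 127.0.0.1:11434")
	log.Fatal(http.ListenAndServe("127.0.0.1:11434", nil))
}
```

With a response like this, the client could detect the `error` field (or the 500 status) and exit with a message rather than spinning indefinitely.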

GiteaMirror added the bug label 2026-04-12 09:33:41 -05:00

Reference: github-starred/ollama#21