ollama run shows no error if the model failed to load #21

Closed
opened 2025-11-12 09:08:56 -06:00 by GiteaMirror · 0 comments
Owner

Originally created by @jmorganca on GitHub (Jul 7, 2023).

```
2023/07/07 11:30:34 routes.go:145: Listening on 127.0.0.1:11434
llama.cpp: loading model from /Users/jmorgan/Downloads/gpt2-medium-alpaca-355m-ggml-f16.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
Loading the model failed: failed loading model
```

On the client, `ollama run` keeps spinning.
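One way the client could fail fast instead of spinning is to watch the streamed newline-delimited JSON for an error field and surface it immediately. A minimal Go sketch of that idea follows; the `streamLine` struct, `firstStreamError` helper, and field names are assumptions for illustration, not Ollama's actual client code:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// streamLine models one JSON line of a streamed response.
// Field names here are illustrative assumptions.
type streamLine struct {
	Response string `json:"response"`
	Error    string `json:"error"`
	Done     bool   `json:"done"`
}

// firstStreamError scans a newline-delimited JSON stream and returns
// the first server-side error it finds, rather than waiting forever.
func firstStreamError(stream string) error {
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var line streamLine
		if err := json.Unmarshal(sc.Bytes(), &line); err != nil {
			continue // skip malformed lines
		}
		if line.Error != "" {
			return fmt.Errorf("model load failed: %s", line.Error)
		}
	}
	return nil
}

func main() {
	// Simulated server stream reporting the failure from the log above.
	stream := `{"error":"failed loading model"}`
	if err := firstStreamError(stream); err != nil {
		fmt.Println(err) // model load failed: failed loading model
	}
}
```

With a check like this, the spinner could stop and print the server's "failed loading model" message as soon as it arrives.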

GiteaMirror added the `bug` label 2025-11-12 09:08:56 -06:00
Reference: github-starred/ollama-ollama#21