[GH-ISSUE #9558] CLI interactive client exits to shell if you attempt to load a model that does not exist on the host. #31993

Closed
opened 2026-04-22 12:51:28 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @xk86 on GitHub (Mar 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9558

What is the issue?

Hello, I have encountered what I can only assume is undesired behavior in the CLI application. Specifically, when attempting to load a model via the /load command, even if the specified model does not exist, the CLI reports that it is "loading" the model, and then exits the interface upon failing, returning me to the shell. This is a bit disruptive if, say, you make a typo in the model name, and the program exits completely, losing that session history. On the server side (with OLLAMA_DEBUG=1), I can see a 404 being generated, with no other messages. I am not entirely sure if this is a bug, so much as it is a potentially unwanted behavior (since the behavior seems to result from the load model function returning an error that aborts the client program), but I didn't find any other posts about it, so here we are!

Relevant log output

>>> /list
NAME                                    ID              SIZE      MODIFIED
llama3.2:latest                        a80c4f17acd5    2.0 GB    5 weeks ago
...
>>> /load model-not-found
Loading model 'model-not-found'
Error: model 'model-not-found' not found
user@host: ~ $
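
The reporter's guess — that the load function's error is propagated up and aborts the whole REPL — can be illustrated with a minimal sketch. Everything below is hypothetical (the names `loadModel` and `handleLoad` are not Ollama's actual functions); it just shows the difference between letting the error escape the interactive loop and reporting it in place so the session survives a typo:

```go
package main

import "fmt"

// loadModel is a hypothetical stand-in for the client's model-load request;
// here it fails for any name other than "llama3.2", mimicking the 404.
func loadModel(name string) error {
	if name != "llama3.2" {
		return fmt.Errorf("model '%s' not found", name)
	}
	return nil
}

// handleLoad shows the gist of the desired behavior: print the error and
// keep the interactive loop alive instead of returning the error upward,
// which is what currently causes the client to exit to the shell.
func handleLoad(name string) bool {
	fmt.Printf("Loading model '%s'\n", name)
	if err := loadModel(name); err != nil {
		fmt.Printf("Error: %v\n", err)
		return false // stay in the session; do not abort
	}
	return true
}

func main() {
	handleLoad("model-not-found") // reports the error, session continues
	handleLoad("llama3.2")        // succeeds
}
```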

OS

No response

GPU

No response

CPU

No response

Ollama version

0.5.13

GiteaMirror added the bug label 2026-04-22 12:51:28 -05:00
Author
Owner

@rick-github commented on GitHub (Mar 7, 2025):

#6487

Author
Owner

@rick-github commented on GitHub (Mar 13, 2025):

#9576
