[PR #12350] [CLOSED] cmd: keep session state and continue chat when /load fails #60492

Closed
opened 2026-04-29 15:29:15 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12350
Author: @jhsong233
Created: 9/20/2025
Status: Closed

Base: main ← Head: cmd-load-fix


📝 Commits (1)

  • 17124b0 cmd: keep session state and continue chat when /load fails

📊 Changes

1 file changed (+15 additions, -0 deletions)

View changed files

📝 cmd/interactive.go (+15 -0)

📄 Description

Background

#12351: When running `ollama run` and switching models with the /load command, the session behaved incorrectly if the specified model did not exist or could not be loaded.

After fix

A failed /load no longer exits or corrupts the session.
The original model, messages, and think settings remain intact, so the conversation continues seamlessly.

Issues

  1. Incorrect behavior when loading a non-existent model
  • Problem:
    Using /load to load a model that does not exist should not terminate the conversation. It should simply report the error and allow the user to continue chatting with the current model.

  • Before (model without think support)

    ➜  ~ ollama run gemma3:4b
    >>> /load x
    Loading model 'x'
    Error: model 'x' not found
    ➜  ~        # ❌ Session exits unexpectedly
    
  • Before (model with think support)

    ➜  ~ ollama run gpt-oss:20b
    >>> /load x
    Loading model 'x'
    error: 404 Not Found: model 'x' not found
    >>>           # Session continues, but see issue 2 below
    
  2. Session state corruption after failed /load
    Even when the session did not exit, a failed /load call modified runOptions (e.g. opts.Model, opts.Messages, opts.Think) without restoring them, causing errors on later turns:

    ➜  ~ ollama run gpt-oss:20b
    >>> /load x
    Loading model 'x'
    error: 404 Not Found: model 'x' not found
    >>> hello
    Error: 404 Not Found: model "x" not found, try pulling it first
    ➜  ~ 
    
    ➜  ~ ollama run gpt-oss:20b
    >>> /load gemma3:4b
    Loading model 'gemma3:4b'
    error: 400 Bad Request: registry.ollama.ai/library/gemma3:4b does not support thinking
    >>> /show info
      Model
        architecture        gemma3   # ❌ Original model info lost
    

Fix

  • Unified error handling:
    Loading a non-existent model now always returns a clear error message without terminating the chat.

  • Full restoration of runOptions on failure:
    opts.Model, opts.Messages, and opts.Think are restored to their pre-/load values.

Test

  • restore opts.Model
    ➜  ~ ollama run gemma3:4b
    >>> /load x
    Loading model 'x'
    error: model 'x' not found
    >>> /show info
      Model
        architecture        gemma3
    
  • restore opts.Messages
    ➜  ~ ollama run gpt-oss:20b
    >>> /set system you are a english teacher
    Set system message.
    >>> /load x
    Loading model 'x'
    error: 404 Not Found: model 'x' not found
    >>> who are you
    Thinking...
    The user says "who are you". The instruction says I'm an English teacher. So I need to respond as an English teacher.            
    
    Hello! I’m your English teacher for today...
    >>> 
    
  • restore opts.Think
    ➜  ~ ollama run gpt-oss:20b
    >>> /load gemma3:4b
    Loading model 'gemma3:4b'
    error: 400 Bad Request: registry.ollama.ai/library/gemma3:4b does not support thinking
    >>> /show info
      Model
        architecture        gptoss 
    

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 15:29:16 -05:00

Reference: github-starred/ollama#60492