[GH-ISSUE #14504] App GUI: model popup doesn't update #35170

Closed
opened 2026-04-22 19:29:36 -05:00 by GiteaMirror · 1 comment

Originally created by @xmddmx on GitHub (Feb 27, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14504

What is the issue?

Testing this in Ollama.app for macOS 0.17.4 (but I have seen this issue for many versions).

  1. Start a long chat - in this case, it's Qwen3.5
  2. Switch to a historical chat that uses another model
  3. Switch back to the running chat

Expected result: the UI should be consistent.
Actual result: the model popup can get stuck on the model used in the prior, completed historical chat.

Example:

Starting chat in Qwen3.5:

<img width="788" alt="Image" src="https://github.com/user-attachments/assets/8a65afe7-eaf2-4ef4-887e-f34dd5c94e64" />

After switching back from another chat - notice that the model selection popup is wrong:

<img width="1674" height="848" alt="Image" src="https://github.com/user-attachments/assets/bd586e72-f62f-47db-a5f4-703495244bc0" />

Relevant log output

(none provided)
OS

macOS 15.7.4

GPU

M4 Pro

CPU

M4 Pro

Ollama version

many, including in latest 0.17.4

Regression testing: often, after the chat has completed, if you switch away and switch back, the model popup finally updates. Oddly, there is often a 0.5 to 1 second delay before it does.

GiteaMirror added the bug label 2026-04-22 19:29:36 -05:00

@guicybercode commented on GitHub (Mar 11, 2026):

Code Review: App GUI - Model Popup Update Issue (#14504)

Overview

Issue #14504 describes a problem where the model popup in Ollama.app for macOS does not update during long chats. This appears to be a GUI state synchronization issue between the application interface and the underlying LLM service. Since the actual code diff was not provided, this review covers typical patterns and considerations for fixing GUI update bugs in desktop applications.


Analysis of the Reported Issue

Problem: When users start a long chat session (e.g., with Qwen3), the model display popup fails to reflect current state.

Common Root Causes for GUI State Issues:

| Category | Likely Cause | Typical Fix Location |
|----------|--------------|----------------------|
| State Synchronization | API events not bubbling to UI layer | Backend→Frontend event handlers |
| Lifecycle Management | Component unmounted/re-rendered incorrectly | React/Vue/Svelte component hooks |
| Data Binding | Observable state not updating | Reactive framework subscriptions |
| Race Conditions | Multiple simultaneous API calls overwriting each other | Request queuing/lock mechanisms |
| Event Propagation | Stop propagation preventing parent listeners | Event handler configuration |

Recommended Fix Patterns

1. State Update Propagation

```typescript
// Before: state mutated without notifying the reactive layer
handleLLMResponse(data) {
    setModelData(data);
}

// After: route the change through the store so subscribers re-render
// (note: React's setState takes an optional callback as its second
// argument, not a "force re-render" boolean)
handleLLMResponse(data) {
    this.setState({ model: data.model });
    dispatch({ type: 'MODEL_UPDATE', payload: data });
}
```

2. Component Subscription

```swift
// Retain the subscription so it can be cancelled on teardown,
// preventing stale data from a dead observer
func observeModelUpdates() {
    subscription = modelSubject.subscribe { [weak self] data in
        guard let self = self else { return }
        DispatchQueue.main.async {
            self.updatePopupContent(data)
        }
    }
}
```

3. Event Queue for Long Operations

```go
// Serialize UI updates through a single channel so long-running chats
// cannot interleave concurrent popup modifications. (The original
// isProcessing flag was set once and never reset, so only the first
// response would ever reach the UI; the channel itself is the lock.)
var queue = make(chan llm.Response, 100)

func processResponses() {
    for resp := range queue {
        updateUI(resp) // one update at a time, in arrival order
    }
}
```

Strengths to Look For in Proposed Fix

  1. Debounce Implementation: Updates should be debounced to prevent excessive re-renders during streaming responses.

  2. Optimistic Updates: UI reflects expected state immediately while waiting for confirmation.

  3. Error Boundaries: Failed updates don't leave the popup in a broken state.

  4. Memory Leak Prevention: Clear all subscriptions when component closes.

  5. Test Coverage: Unit tests verify popup state transitions under various conditions.
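The debounce point above can be sketched as follows. This is a minimal trailing-edge debounce, with all names (e.g. `refreshModelPopup`) hypothetical rather than taken from the Ollama codebase:

```typescript
// Minimal trailing-edge debounce: a burst of calls collapses into a
// single call after delayMs of quiet. Names here are illustrative only.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// During streaming, every token could trigger an update; debouncing
// caps popup re-renders at one per 100 ms of quiet time.
const refreshModelPopup = debounce((model: string) => {
  console.log(`popup now shows: ${model}`);
}, 100);
```

A leading-edge variant (fire immediately, then suppress) pairs naturally with the optimistic-update point in item 2.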


Considerations

1. Electron/Native Bridge

If using Electron:

  • Ensure IPC messages are properly serialized
  • Verify renderer process has permission to receive updates
  • Check main process isn't blocking event loop
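The serialization bullet above can be illustrated without the Electron runtime: anything crossing the main→renderer boundary must survive a structured-clone/JSON round trip, so plain data objects work where class instances or functions silently would not. A hedged sketch with hypothetical names:

```typescript
// Simulated IPC boundary: serialize on send, parse on receive, as an
// Electron main→renderer hop would. All names are hypothetical.
type ModelEvent = { type: 'MODEL_UPDATE'; model: string; requestId: number };

function sendOverIpc(event: ModelEvent, onReceive: (e: ModelEvent) => void): void {
  const wire = JSON.stringify(event);        // main-process side
  onReceive(JSON.parse(wire) as ModelEvent); // renderer side
}

let popupModel = '';
sendOverIpc({ type: 'MODEL_UPDATE', model: 'qwen3', requestId: 1 }, (e) => {
  popupModel = e.model; // survives the round trip intact
});
```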

2. macOS Notification Center Integration

```objectivec
// For native app notifications (NSUserNotificationCenter is deprecated
// in favor of the UserNotifications framework, but kept as in the
// original sketch)
if (self.supportsNotifications) {
    [[NSUserNotificationCenter defaultUserNotificationCenter]
        deliverNotification:notification];
}
```

3. Long Chat Session Edge Cases

| Scenario | Risk | Mitigation |
|----------|------|------------|
| Stream timeout | Popup shows partial data | Validate complete response before update |
| Memory pressure | GC pauses delay UI | Pre-allocate buffers for streaming |
| Concurrent requests | Overwrites newer data | Request ID matching/validation |
| App backgrounding | Stale cached state | Periodic refresh on foreground |
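The "Concurrent requests" mitigation can be sketched concretely: tag each model switch with a monotonically increasing id and drop any reply that is not for the latest request. All names below are hypothetical:

```typescript
// Stale-response guard: only the reply matching the newest request id
// may update the popup. Names are illustrative, not Ollama's code.
let latestRequestId = 0;
let popupModel = 'unknown';

// Issue a switch; returns the id to match against the async reply.
function requestModelSwitch(): number {
  latestRequestId += 1;
  return latestRequestId;
}

function onSwitchComplete(requestId: number, model: string): void {
  if (requestId !== latestRequestId) return; // stale reply: drop it
  popupModel = model;
}

const first = requestModelSwitch();  // slow switch to llama3
const second = requestModelSwitch(); // newer switch to qwen3
onSwitchComplete(second, 'qwen3');
onSwitchComplete(first, 'llama3');   // arrives late, ignored
```

This is exactly the pattern that would prevent the reported symptom of the popup sticking on the previously viewed chat's model.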

Testing Recommendations

1. Stress Test Scenarios

```typescript
// Test 1: Rapid consecutive messages
async function testRapidMessages() {
    for (let i = 0; i < 50; i++) {
        await sendMessage(`Message ${i}`);
        assert(modelPopup.state === 'updating');
    }
}

// Test 2: Long streaming response (>65k tokens)
async function testLongStream() {
    const startTime = Date.now();
    await startChat('qwen3');
    assert(modelPopup.updated >= startTime); // popup refreshed after the chat began
}
```

2. Visual Regression Tests

  • Capture screenshots of popup at various stages (loading, updating, error)
  • Compare against expected states to catch unintended layout shifts

3. Cross-Platform Verification

  • macOS (primary platform for reported issue)
  • Windows/Linux if app is multi-platform
  • Different screen resolutions and dark/light modes

Potential Issues Without Full Diff

  1. Incomplete Event Handling: Partial implementation may work for some cases but fail others
  2. Race Condition Triggers: May manifest only under specific timing conditions
  3. Resource Leaks: New subscriptions without cleanup could degrade performance over time
  4. Backwards Compatibility: Breaking existing API contracts with downstream components

Documentation Updates Needed

  1. Architecture Diagram: Show data flow between backend chat service and frontend popup
  2. State Change Log: Document what triggers popup updates
  3. Known Limitations: If full synchronization isn't possible, document acceptable lag windows

Recommendation: Pending Additional Context

This issue requires visual debugging due to its nature as a UI/state synchronization problem.

Actions Required Before Merge:

  1. Provide the actual code diff for detailed line-by-line analysis
  2. Include reproduction steps with consistent environment
  3. Confirm testing strategy covers race conditions and edge cases
  4. Verify fix works across different model types (Qwen3, Llama, etc.)
  5. Add regression tests to prevent future occurrences

Estimated Impact: Medium-High

  • Affects core UX functionality (model status visibility)
  • User-visible bug impacting perceived reliability
  • Potential to cascade into other GUI state issues

Reference: github-starred/ollama#35170