[PR #14528] app: add monitoring dashboard #14714

Open
opened 2026-04-13 01:01:14 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14528
Author: @4RH1T3CT0R7
Created: 3/1/2026
Status: 🔄 Open

Base: main ← Head: feature/monitoring-dashboard


📝 Commits (5)

  • c6df318 Add tests and logic to adjust defaultNumCtx based on parallelism
  • a88a03e Merge remote-tracking branch 'origin/main'
  • 878e626 Merge remote-tracking branch 'origin/main'
  • 95eabe5 app: add monitoring dashboard with GPU info and running models
  • 43ae188 app: enhance dashboard with GPU model name, VRAM usage, and error handling

📊 Changes

14 files changed (+623 additions, -36 deletions)

View changed files

📝 app/server/server.go (+30 -12)
📝 app/ui/app/codegen/gotypes.gen.ts (+6 -0)
📝 app/ui/app/src/api.ts (+30 -0)
📝 app/ui/app/src/components/ChatSidebar.tsx (+19 -9)
➕ app/ui/app/src/components/Dashboard.tsx (+409 -0)
📝 app/ui/app/src/routeTree.gen.ts (+26 -3)
➕ app/ui/app/src/routes/dashboard.tsx (+6 -0)
📝 app/ui/responses/types.go (+9 -6)
📝 app/ui/ui.go (+10 -6)
📝 app/wintray/eventloop.go (+2 -0)
📝 app/wintray/menus.go (+4 -0)
📝 app/wintray/messages.go (+1 -0)
📝 server/routes.go (+6 -0)
📝 server/routes_options_test.go (+65 -0)

📄 Description

Summary

  • Adds a new /dashboard page to the Ollama desktop app showing real-time GPU and model monitoring info
  • Proxies /api/ps endpoint through the app UI server for running model data
  • Adds Dashboard navigation link in the sidebar (Windows) and system tray menu
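
The new /dashboard page is registered as a TanStack Router file route (see dashboard.tsx in the changes below). As a rough sketch of what such a file route typically looks like (the Dashboard component name and import path are assumptions, not the PR's actual code):

```ts
// Hypothetical sketch of app/ui/app/src/routes/dashboard.tsx: a TanStack Router
// file route mapping the /dashboard path to the Dashboard component.
// The import path and component name are assumed for illustration.
import { createFileRoute } from '@tanstack/react-router'

import { Dashboard } from '../components/Dashboard'

export const Route = createFileRoute('/dashboard')({
  component: Dashboard,
})
```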

Related PRs

This is Part 2 of 3 in the Ollama Desktop UI improvement series:

  1. PR #14526 — Extended Settings: Tabbed settings UI with ~22 new configuration fields (General, GPU, Generation, Network)
  2. This PR — Monitoring Dashboard: GPU/memory monitoring, loaded models, system tray extensions
  3. PR #14531 — Model Manager: Full model CRUD, batch ops, copy/alias, per-model settings

Each PR is independent and can be merged separately. Together they transform the desktop app from a chat-only interface into a comprehensive Ollama management tool with full settings control, real-time monitoring, and model lifecycle management.

Changes

| Area | Files | Description |
|------|-------|-------------|
| Backend | ui.go | Proxy /api/ps endpoint |
| Frontend | Dashboard.tsx | New dashboard component with GPU cards, model cards, system info |
| Frontend | api.ts | getRunningModels() API function + ProcessModelResponse types |
| Route | dashboard.tsx | TanStack Router file route |
| Navigation | ChatSidebar.tsx | Dashboard link in sidebar |
| Tray | menus.go, messages.go, eventloop.go | "Dashboard..." menu item in Windows system tray |
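
To make the api.ts row above concrete, here is a hedged sketch of what getRunningModels() and a ProcessModelResponse type could look like. The field names follow Ollama's publicly documented /api/ps response; the exact types and error handling in the PR may differ.

```ts
// Illustrative sketch only; not the PR's actual code.
// Field names mirror the documented Ollama /api/ps response.
export interface ProcessModelResponse {
  name: string
  model: string
  size: number       // total bytes allocated for the model
  size_vram: number  // bytes resident in GPU memory
  expires_at: string // keep-alive expiry timestamp
  details?: {
    parameter_size?: string
    quantization_level?: string
  }
}

export async function getRunningModels(): Promise<ProcessModelResponse[]> {
  // The app UI server proxies this request to the Ollama backend's /api/ps.
  const res = await fetch('/api/ps')
  if (!res.ok) {
    throw new Error(`Failed to fetch running models: ${res.status}`)
  }
  const data: { models?: ProcessModelResponse[] } = await res.json()
  return data.models ?? []
}
```

A per-model VRAM usage bar would then presumably be driven by the size_vram / size ratio, matching the test-plan item about the GPU offload ratio.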

Features

  • GPU Devices: Shows GPU name, VRAM, library (CUDA/ROCm/Metal), compute capability, driver version
  • Loaded Models: Auto-refreshing list (5s polling) with VRAM usage bar, context length, parameters, quantization, keep-alive expiry countdown
  • System Info: Default context length, GPU count, loaded model count
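
The 5-second auto-refresh for Loaded Models can be expressed as a small polling hook. The following is an assumed sketch built on the getRunningModels() helper above; the actual Dashboard.tsx may use a different data-fetching approach (for example, a query library with a refetch interval).

```ts
// Hypothetical polling hook: refreshes the running-model list every 5 seconds.
// Hook name, import path, and structure are assumptions for illustration.
import { useEffect, useState } from 'react'

import { getRunningModels, type ProcessModelResponse } from '../api'

export function useRunningModels(intervalMs = 5000) {
  const [models, setModels] = useState<ProcessModelResponse[]>([])

  useEffect(() => {
    let cancelled = false

    const poll = async () => {
      try {
        const next = await getRunningModels()
        if (!cancelled) setModels(next)
      } catch {
        // Keep the previous list on a transient fetch error; a real dashboard
        // would surface this as an error state instead of failing silently.
      }
    }

    poll()                                   // fetch immediately on mount
    const id = setInterval(poll, intervalMs) // then poll on the given interval
    return () => {
      cancelled = true
      clearInterval(id)
    }
  }, [intervalMs])

  return models
}
```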

Test plan

  • Open Dashboard from sidebar — verify GPU info matches system
  • Load a model via chat — verify it appears in Loaded Models with correct stats
  • Verify VRAM usage bar reflects actual GPU offload ratio
  • Wait for model expiry — verify it disappears from the list
  • Open Dashboard from Windows tray — verify it navigates correctly
  • Build passes: cd app/ui/app && npm run build
  • TypeScript check: npx tsc --noEmit

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 01:01:14 -05:00