[PR #16779] [MERGED] Fix: Free VRAM memory when updating embedding / reranking models #47287

Closed · opened 2026-04-29 22:28:07 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/16779
Author: @mahenning
Created: 8/21/2025
Status: Merged
Merged: 8/21/2025
Merged by: @tjbck

Base: dev ← Head: fix--clean-unload-embed/reranker-models


📝 Commits (6)

  • 39fe385 Correctly unloads embedding/reranker models
  • cd02ff2 Fix if checks
  • 6663fc3 Unloads only if internal models are used.
  • b3de329 Chage torch import to conditional import
  • c821c3e Formatting
  • f2e78d7 More formatting

📊 Changes

2 files changed (+53 additions, -24 deletions)

View changed files

📝 backend/open_webui/main.py (+13 -8)
📝 backend/open_webui/routers/retrieval.py (+40 -16)

📄 Description

Pull Request Checklist

Note to first-time contributors: Please open a discussion post in Discussions and describe your changes before submitting a pull request.

Before submitting, make sure you've checked the following:

  • Target branch: Please verify that the pull request targets the dev branch.
  • Description: Provide a concise description of the changes made in this pull request.
  • Changelog: Ensure a changelog entry following the format of Keep a Changelog is added at the bottom of the PR description.
  • Documentation: Have you updated relevant documentation (Open WebUI Docs), or other documentation sources?
  • Dependencies: Are there any new dependencies? Have you updated the dependency versions in the documentation?
  • Testing: Have you written and run sufficient tests to validate the changes?
  • Code review: Have you performed a self-review of your code, addressing any coding standard issues and ensuring adherence to the project's coding standards?
  • Prefix: To clearly categorize this pull request, prefix the pull request title using one of the following:
    • BREAKING CHANGE: Significant changes that may affect compatibility
    • build: Changes that affect the build system or external dependencies
    • ci: Changes to our continuous integration processes or workflows
    • chore: Refactor, cleanup, or other non-functional code changes
    • docs: Documentation update or addition
    • feat: Introduces a new feature or enhancement to the codebase
    • fix: Bug fix or error correction
    • i18n: Internationalization or localization changes
    • perf: Performance improvement
    • refactor: Code restructuring for better maintainability, readability, or scalability
    • style: Changes that do not affect the meaning of the code (white space, formatting, missing semi-colons, etc.)
    • test: Adding missing tests or correcting existing tests
    • WIP: Work in progress, a temporary label for incomplete or ongoing work

Changelog Entry

Description

  • This PR fixes a dangling-VRAM issue when changing or reloading internal (Sentence Transformers) embedding and reranking models.
  • The reranker is now loaded only when Bypass Embedding and Retrieval is off and hybrid search is enabled.
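The load gate described in the second point can be expressed as a simple predicate. This is an illustrative sketch only: the flag names below are hypothetical stand-ins, not the actual config keys used in `retrieval.py`.

```python
def should_load_reranker(bypass_embedding_and_retrieval: bool,
                         enable_hybrid_search: bool) -> bool:
    """Load the reranker only when retrieval is active AND hybrid search
    (where reranking actually applies) is enabled.

    Flag names are illustrative, not the real Open WebUI config keys.
    """
    return (not bypass_embedding_and_retrieval) and enable_hybrid_search
```

With this gate, an instance that bypasses retrieval entirely, or runs vector-only search, never pays the VRAM cost of the reranker at startup.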

Fixed

  • Old models are now properly removed from VRAM when they are replaced
  • Fixed the reranker being loaded on startup even when unused

Additional Information

  • This PR addresses parts of https://github.com/open-webui/open-webui/discussions/6381
  • VRAM freeing is only called when the old model is not an externally hosted one
  • If one switches from internal models on GPU to external-only models, ~300-500 MB of VRAM remain reserved by torch's CUDA context. This allocation is held by the CUDA driver and is only released by restarting the app; calling torch.cuda.empty_cache() won't drop it.
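The unload path above can be sketched as follows. This is a minimal illustration under stated assumptions, not the exact code merged into `retrieval.py`: the helper name is hypothetical, and the conditional torch import mirrors commit b3de329 so the function also works in external-only deployments where torch may not be installed.

```python
import gc


def unload_model(model):
    """Release a reference to an internal (Sentence Transformers) model and
    try to free the VRAM it occupied. Hypothetical helper sketching the
    approach in this PR, not the merged implementation.
    """
    if model is None:
        return None
    # `del` only drops this local reference; the caller must also clear its
    # own reference (e.g. the app.state slot) so the object is collectible.
    del model
    gc.collect()
    try:
        # Conditional import: torch may be absent when only external
        # embedding/reranking endpoints are configured.
        import torch

        if torch.cuda.is_available():
            # Return cached allocator blocks to the driver. Note: the base
            # CUDA context (~300-500 MB) stays resident regardless.
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()
    except ImportError:
        pass
    return None
```

A typical call site would be `app.state.ef = unload_model(app.state.ef)` before loading the replacement model, so the old weights are collectible before the new ones are allocated.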

Contributor License Agreement

By submitting this pull request, I confirm that I have read and fully agree to the Contributor License Agreement (CLA), and I am providing my contributions under its terms.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.


Reference: github-starred/open-webui#47287