[PR #13906] [CLOSED] Modular Docker Compose, GPU Support, and Build Improvements #10124

Closed
opened 2025-11-11 18:56:49 -06:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/open-webui/open-webui/pull/13906
Author: @paulovilae
Created: 2025-05-15
Status: Closed

Base: dev ← Head: dev


📝 Commits (3)

  • da6dd03 docs: add modular Docker Compose setup and AI Lab/tools stacks -- Add docker-compose.ai-lab.yaml for Flowise, n8n, Qdrant, Postgres, Redis -- Add docker-compose.tools.yaml for SearxNG, Jupyter, ComfyUI -- Update README.md with modular compose usage, shared resource structure, and examples -- Improve troubleshooting section and clarify advanced usage -- This enables advanced users to run research, prototyping, and workflow automation stacks alongside Open WebUI.
  • 7a87192 fix: set NODE_OPTIONS in Dockerfile for stable frontend build; sync .env.example with upstream; log in changelog
  • 7ec8d1c docs: add simple instructions for enabling GPU support for Ollama in Docker Compose

📊 Changes

6 files changed (+322 additions, -2 deletions)


📝 .env.example (+40 -1)
➕ CHANGELOG_PENDING.md (+21 -0)
📝 Dockerfile (+1 -0)
📝 README.md (+78 -1)
➕ docker-compose.ai-lab.yaml (+110 -0)
➕ docker-compose.tools.yaml (+72 -0)

📄 Description

This pull request introduces a modular Docker Compose setup, adds GPU support instructions for Ollama, and improves build stability and documentation.
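As a usage sketch (not quoted from the PR), the modular stacks would typically be layered onto the base file with multiple `-f` flags, assuming Docker Compose v2:

```shell
# Illustrative only: layer the optional stacks from this PR onto the
# base compose file. File names come from the PR; everything else
# (project name, ports) is left at Compose defaults.
docker compose \
  -f docker-compose.yaml \
  -f docker-compose.ai-lab.yaml \
  -f docker-compose.tools.yaml \
  up -d
```

Later files in the `-f` chain override matching keys from earlier ones, which is what makes the add-on stacks composable with the base deployment.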

Added

  • docker-compose.ai-lab.yaml for Flowise, n8n, Qdrant, Postgres, Redis.
  • docker-compose.tools.yaml for SearxNG, Jupyter, ComfyUI.
  • Section in README for enabling GPU support for Ollama in Docker Compose.
  • CHANGELOG_PENDING.md to track pending changes.
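The README's GPU section itself is not reproduced in this mirror; as a hedged sketch, enabling NVIDIA GPU access for an Ollama service under Docker Compose usually takes a device reservation like the following (service name and image are illustrative, not taken from the PR diff):

```yaml
# Sketch of a Compose override granting an Ollama container access to
# all NVIDIA GPUs via a device reservation. Requires the NVIDIA
# Container Toolkit on the host.
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```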

Changed

  • Updated README.md with modular compose usage, shared resource structure, troubleshooting, and GPU instructions.
  • Set NODE_OPTIONS=--max-old-space-size=8192 in Dockerfile for stable frontend builds.
  • Synced .env.example with upstream Open WebUI project.
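The `NODE_OPTIONS` change amounts to a single Dockerfile line; a minimal sketch of what the PR describes:

```dockerfile
# Raises the V8 heap limit to 8192 MB so the Node-based frontend build
# does not run out of memory on large builds; value taken from the PR
# description.
ENV NODE_OPTIONS=--max-old-space-size=8192
```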

Deprecated

  • N/A

Removed

  • N/A

Fixed

  • Improved build reliability for large frontend builds.

Security

  • N/A

Breaking Changes

  • N/A

Additional Information

  • All changes have been tested locally with and without GPU support.
  • No new dependencies introduced.
  • See CHANGELOG_PENDING.md for a detailed list of pending changes.

Screenshots or Videos

  • N/A

Contributor License Agreement

By submitting this pull request, I confirm that I have read and fully agree to the Contributor License Agreement (CLA), and I am providing my contributions under its terms.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

Reference: github-starred/open-webui#10124