Commit Graph

5220 Commits

Author SHA1 Message Date
Timothy Jaeryang Baek
8eebc2aea6 fix: mcp get_discovery_urls 2026-01-22 03:11:33 +04:00
Timothy Jaeryang Baek
a9a0ce6bea refac 2026-01-22 03:09:04 +04:00
Timothy Jaeryang Baek
ecbdef732b enh: PDF_LOADER_MODE 2026-01-21 23:51:36 +04:00
Timothy Jaeryang Baek
4615e8f92b refac 2026-01-20 22:28:10 +04:00
Classic298
38bf0b6eec feat: Add a new env var for a custom error message when signup or a password change fails because the password does not meet requirements (#20650)
* add env var for custom auth pw message

* Update auth.py
2026-01-19 14:00:48 +04:00
G30
e9926694c3 fix: add username search support to workspace and admin pages (#20780)
This fix restores and extends the username/email search functionality across workspace pages that was originally added in PR #14002. The issue was that:

1. The backend search functions for Models and Knowledge only searched `User.name` and `User.email`, but not `User.username`

2. The Functions admin page lacked user search entirely

Changes made:

1. Added `User.username` to the backend search conditions for the Models and Knowledge pages

2. Added complete user search (name, email, username) to the Functions admin page client-side filter
2026-01-19 13:42:33 +04:00
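
The search-condition change described above can be sketched with plain SQL via the stdlib `sqlite3` module. This is only an illustration of including `username` in the OR condition; the schema, data, and query here are made up for the example, not Open WebUI's actual user table or ORM code.

```python
import sqlite3

# Toy user table: before the fix, search matched only name and email;
# the fix adds username to the OR condition so handle searches also hit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (name TEXT, email TEXT, username TEXT)")
conn.execute("INSERT INTO user VALUES ('Alice', 'a@example.com', 'ally')")

q = "%ally%"
rows = conn.execute(
    "SELECT name FROM user WHERE name LIKE ? OR email LIKE ? OR username LIKE ?",
    (q, q, q),
).fetchall()
# Searching for the username now finds the user even though neither
# name nor email contains the query string.
```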
Timothy Jaeryang Baek
5cfb7a08cb refac 2026-01-17 21:52:12 +04:00
rohithshenoy
9d642f6354 Added support for connecting to self-hosted Weaviate deployments by using connect_to_custom in place of connect_to_local, which is better suited for cases where HTTP and gRPC are hosted on different ingresses. (#20620)
Co-authored-by: Tim Baek <tim@openwebui.com>
Co-authored-by: joaoback <156559121+joaoback@users.noreply.github.com>
Co-authored-by: rohithshenoyg@gmail.com <rohithshenoyg@gmail.com>
2026-01-17 21:48:52 +04:00
Classic298
716f2986b9 dep bump (#20735) 2026-01-17 21:44:32 +04:00
Timothy Jaeryang Baek
409f565f09 refac 2026-01-17 21:41:48 +04:00
Classic298
1c1f72f05c Update builtin.py (#20705) 2026-01-16 00:15:02 +04:00
EntropyYue
1d343aeae4 enh: Make builtin search web tools asynchronous (#20630)
Co-authored-by: Tim Baek <tim@openwebui.com>
Co-authored-by: joaoback <156559121+joaoback@users.noreply.github.com>
2026-01-15 10:46:00 +04:00
Kailey Wong
e26f6acc3b fix: use the proper X-Api-Key header format when a Docling API key is provided (#20652) 2026-01-15 10:44:35 +04:00
Timothy Jaeryang Baek
de0cbb9073 refac 2026-01-12 21:56:02 +04:00
Timothy Jaeryang Baek
5a075a2c83 fix: members only groups 2026-01-12 21:53:41 +04:00
Timothy Jaeryang Baek
7da37b4f66 refac 2026-01-12 21:41:23 +04:00
Classic298
af584b46f4 feat: code-interpreter native (#20592)
* code-interpreter native

* Update tools.py

* Update builtin.py
2026-01-12 00:18:41 +04:00
Classic298
1dc353433a fix(db): release connection before embedding in memory /query (#20579)
Remove Depends(get_session) from POST /query endpoint to prevent database connections from being held during embedding API calls (1-5+ seconds).

The Memories.get_memories_by_user_id() function manages its own short-lived session internally, releasing the connection before the slow EMBEDDING_FUNCTION() call begins.
2026-01-11 23:37:47 +04:00
Classic298
33e8a09880 fix(db): release connection before embedding in knowledge /create (#20575)
Remove Depends(get_session) from POST /create endpoint to prevent database connections from being held during embedding API calls (1-5+ seconds).

The has_permission() and Knowledges.insert_new_knowledge() functions manage their own short-lived sessions internally, releasing connections before the slow embed_knowledge_base_metadata() call begins.
2026-01-11 23:37:05 +04:00
Classic298
1cb751d184 fix(db): release connection before embedding in knowledge /{id}/update (#20574)
Remove Depends(get_session) from POST /{id}/update endpoint to prevent database connections from being held during embedding API calls (1-5+ seconds).

All database operations (get_knowledge_by_id, has_access, has_permission, update_knowledge_by_id, get_file_metadatas_by_id) manage their own short-lived sessions internally, releasing connections before and after the slow embed_knowledge_base_metadata() call.
2026-01-11 23:36:36 +04:00
Classic298
9e596f8616 fix(db): release connection before LLM call in Ollama /v1/completions (#20570)
Remove Depends(get_session) from the /v1/completions endpoint to prevent database connections from being held during the entire duration of LLM calls.

Previously, the database session was acquired at request start and held until the response completed. Under concurrent load, this exhausted the connection pool, causing QueuePool timeout errors.

The fix allows Models.get_model_by_id() and has_access() to manage their own short-lived sessions internally, releasing the connection immediately after authorization checks complete.
2026-01-11 23:35:46 +04:00
Classic298
24044b42ea fix(db): release connection before LLM call in Ollama /v1/chat/completions (#20569)
Remove Depends(get_session) from the /v1/chat/completions endpoint to prevent database connections from being held during the entire duration of LLM calls.

Previously, the database session was acquired at request start and held until the streaming response completed. Under concurrent load, this exhausted the connection pool, causing QueuePool timeout errors.

The fix allows Models.get_model_by_id() and has_access() to manage their own short-lived sessions internally, releasing the connection immediately after authorization checks complete.
2026-01-11 23:35:38 +04:00
Classic298
0b5aa6dd60 fix(db): release connection before LLM call in Ollama /api/chat (#20571)
Remove Depends(get_session) from the /api/chat endpoint to prevent database connections from being held during the entire duration of LLM calls (30-60+ seconds for streaming responses).

Previously, the database session was acquired at request start and held until the streaming response completed. Under concurrent load, this exhausted the connection pool, causing QueuePool timeout errors for other database operations.

The fix allows Models.get_model_by_id() and has_access() to manage their own short-lived sessions internally, releasing the connection immediately after the quick authorization checks complete - before the slow external LLM API call begins.
2026-01-11 23:34:23 +04:00
Classic298
d0c2bfdbff fix(db): release connection before LLM call in OpenAI /chat/completions (#20572)
Remove Depends(get_session) from the /chat/completions endpoint to prevent database connections from being held during the entire duration of LLM calls (30-60+ seconds for streaming responses).

Previously, the database session was acquired at request start and held until the streaming response completed. Under concurrent load, this exhausted the connection pool, causing QueuePool timeout errors for other database operations.

The fix allows Models.get_model_by_id() and has_access() to manage their own short-lived sessions internally, releasing the connection immediately after the quick authorization checks complete - before the slow external LLM API call begins.
2026-01-11 23:34:11 +04:00
Classic298
242625782f fix(db): release connection before embedding in memory /add (#20578)
Remove Depends(get_session) from POST /add endpoint to prevent database connections from being held during embedding API calls (1-5+ seconds).

The Memories.insert_new_memory() function manages its own short-lived session internally, releasing the connection before the slow EMBEDDING_FUNCTION() call begins.
2026-01-11 23:33:17 +04:00
Classic298
826e9ab317 fix(db): release connection before embeddings in knowledge /metadata/reindex (#20577)
Remove Depends(get_session) from POST /metadata/reindex endpoint to prevent database connections from being held during N embedding API calls.

This endpoint is CRITICAL as it loops through ALL knowledge bases and calls embed_knowledge_base_metadata() for each one. With the original code, a single connection would be held for the entire duration (potentially minutes for large deployments), completely exhausting the pool.

The Knowledges.get_knowledge_bases() function manages its own short-lived session, releasing the connection before the embedding loop begins.
2026-01-11 23:33:04 +04:00
Classic298
182d5e8591 fix(db): release connection before embedding in process_files_batch (#20576)
Remove Depends(get_session) from POST /process/files/batch endpoint to prevent database connections from being held during batch embedding API calls (5-60+ seconds for large batches).

The save_docs_to_vector_db() function makes external embedding API calls. Post-embedding file updates (Files.update_file_by_id) manage their own short-lived sessions internally, releasing connections promptly.
2026-01-11 23:32:56 +04:00
Classic298
3fc866117d fix(db): CRITICAL - prevent pool exhaustion in memory /reset (#20580)
Remove Depends(get_session) from POST /reset to prevent catastrophic connection pool exhaustion.

This endpoint was holding a SINGLE database connection while executing N PARALLEL embedding API calls via asyncio.gather(). For a user with 100 memories, this meant one connection blocked for potentially MINUTES (100 calls * 1-5 seconds each, even in parallel due to rate limits).

A single user triggering /reset could completely starve the connection pool, causing QueuePool timeout errors across the entire application.

The Memories.get_memories_by_user_id() function now manages its own short-lived session, releasing the connection immediately before the massive parallel embedding operation begins.
2026-01-11 23:32:40 +04:00
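
The /reset fix described above has this shape: fetch the memories with a short-lived session, then run the N parallel embedding calls with no connection held. The following is a toy sketch of that ordering only; `session()`, `embed()`, the counter, and all names are illustrative stand-ins, not Open WebUI's actual functions.

```python
import asyncio
from contextlib import contextmanager

in_use = 0  # toy counter standing in for checked-out pool connections

@contextmanager
def session():
    global in_use
    in_use += 1          # connection checked out
    try:
        yield
    finally:
        in_use -= 1      # released as soon as the block exits

def get_memories_by_user_id(user_id):
    # Short-lived session: closed before any embedding work starts.
    with session():
        return [f"memory-{i}" for i in range(5)]

async def embed(text):
    assert in_use == 0   # no DB connection held during the slow call
    await asyncio.sleep(0)  # stands in for an embedding API round-trip
    return (text, [0.0])

async def reset_memories(user_id):
    memories = get_memories_by_user_id(user_id)
    # The massive parallel embedding operation runs connection-free.
    return await asyncio.gather(*(embed(m) for m in memories))

results = asyncio.run(reset_memories("u1"))
```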
Classic298
b464b48f53 Merge pull request #20581 from Classic298/fix/db-pool-memory-update
fix(db): release connection before embedding in memory /{memory_id}/update
2026-01-11 23:32:27 +04:00
Timothy Jaeryang Baek
d56bb2c383 refac 2026-01-11 00:52:43 +04:00
Classic298
3f133fad56 fix: release database connections immediately after auth instead of holding during LLM calls (#20545)
fix: release database connections immediately after auth instead of holding during LLM calls

Authentication was using Depends(get_session) which holds a database connection
for the entire request lifecycle. For chat completions, this meant connections
were held for 30-60 seconds while waiting for LLM responses, despite only needing
the connection for ~50ms of actual database work.

With a default pool of 15 connections, this limited concurrent chat users to ~15
before pool exhaustion and timeout errors:

    sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached,
    connection timed out, timeout 30.00

The fix removes Depends(get_session) from get_current_user. Each database
operation now manages its own short-lived session internally:

    BEFORE: One session held for entire request
    ──────────────────────────────────────────────────
    │ auth │ queries │ LLM wait (30s) │ save │
    │         CONNECTION HELD ENTIRE TIME            │
    ──────────────────────────────────────────────────

    AFTER: Short-lived sessions, released immediately
    ┌──────┐ ┌───────┐                 ┌──────┐
    │ auth │ │ query │   LLM (30s)     │ save │
    │ 10ms │ │ 20ms  │  NO CONNECTION  │ 20ms │
    └──────┘ └───────┘                 └──────┘

This is safe because:
- User model has no lazy-loaded relationships (all simple columns)
- Pydantic conversion (UserModel.model_validate) happens while session is open
- Returned object is pure Pydantic with no SQLAlchemy ties

Combined with the telemetry efficiency fix, this resolves connection pool
exhaustion for high-concurrency deployments, particularly on network-attached
databases like AWS Aurora where connection hold time is more impactful.
2026-01-10 15:34:36 +04:00
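
The before/after diagram in the commit body can be modeled in a few lines. This is a minimal toy sketch of the short-lived-session pattern only, assuming nothing about the real codebase: `Pool`, `get_user_by_token`, and `chat_completion` are hypothetical stand-ins, not Open WebUI's actual implementation.

```python
from contextlib import contextmanager

class Pool:
    """Toy connection pool that tracks how many connections are checked out."""
    def __init__(self):
        self.in_use = 0

    @contextmanager
    def session(self):
        self.in_use += 1          # connection checked out
        try:
            yield
        finally:
            self.in_use -= 1      # released immediately on exit

pool = Pool()

def get_user_by_token(token):
    # Each operation manages its own short-lived session internally,
    # instead of a request-scoped Depends(get_session).
    with pool.session():
        return {"id": token, "name": "demo"}  # plain dict, no ORM ties

def chat_completion(token):
    user = get_user_by_token(token)
    # The slow LLM wait happens here with NO connection held.
    assert pool.in_use == 0
    return f"reply for {user['id']}"
```

The key property is that the object returned from the session block carries no lazy-loading ties back to it, which mirrors the commit's note that the Pydantic conversion happens while the session is still open.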
Classic298
41d1ccd39c Update channels.py (#20546) 2026-01-10 15:34:12 +04:00
Classic298
7839d043ff fix: use efficient COUNT queries in telemetry metrics to prevent connection pool exhaustion (#20542)
fix: use efficient COUNT queries in telemetry metrics to prevent connection pool exhaustion

This fixes database connection pool exhaustion issues reported after v0.7.0,
particularly affecting PostgreSQL deployments on high-latency networks (e.g., AWS Aurora).

## The Problem

The telemetry metrics callbacks (running every 10 seconds via OpenTelemetry's
PeriodicExportingMetricReader) were using inefficient queries that loaded entire
database tables into memory just to count records:

    len(Users.get_users()["users"])  # Loads ALL user records to count them

On high-latency network-attached databases like AWS Aurora, this would:
1. Hold database connections for hundreds of milliseconds while transferring data
2. Deserialize all records into Python objects
3. Only then count the list length

Under concurrent load, these long-held connections would stack up and drain the
connection pool, resulting in:

    sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached,
    connection timed out, timeout 30.00

## The Fix

Replace inefficient full-table loads with efficient COUNT(*) queries using
methods that already exist in the codebase:

- `len(Users.get_users()["users"])` → `Users.get_num_users()`
- Similar changes for other telemetry callbacks as needed

COUNT(*) queries use database indexes and return a single integer, completing in
~5-10ms even on Aurora, versus potentially 500ms+ for loading all records.

## Why v0.7.1's Session Sharing Disable "Helped"

The v0.7.1 change to disable DATABASE_ENABLE_SESSION_SHARING by default appeared
to fix the issue, but it was masking the root cause. Disabling session sharing
causes connections to be returned to the pool faster (more connection churn),
which reduced the window for pool exhaustion but didn't address the underlying
inefficient queries.

With this fix, session sharing can be safely re-enabled for deployments that
benefit from it (especially PostgreSQL), as telemetry will no longer hold
connections for extended periods.

## Impact

- Telemetry connection usage drops from potentially seconds to ~30ms total per
  collection cycle
- Connection pool pressure from telemetry becomes negligible (~0.3% utilization)
- Enterprise PostgreSQL deployments (Aurora, RDS, etc.) should no longer
  experience pool exhaustion under normal load
2026-01-10 15:33:42 +04:00
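
The full-table-load versus COUNT(*) contrast above can be illustrated with the stdlib `sqlite3` module. The table name and row count are made up for the example; only the query shapes correspond to the fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO user (name) VALUES (?)",
    [(f"u{i}",) for i in range(1000)],
)

# Inefficient: transfers and deserializes every row, then counts the list.
all_rows = conn.execute("SELECT * FROM user").fetchall()
slow_count = len(all_rows)

# Efficient: the database counts via its index and returns one integer.
fast_count = conn.execute("SELECT COUNT(*) FROM user").fetchone()[0]
```

Both paths produce the same number, but the COUNT(*) form moves a single integer over the wire instead of the whole table, which is what makes the difference on a high-latency network-attached database.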
G30
9b9e6ce2ab fix: correct empty STT_ENGINE handling and improve TTS error response (#20534)
- Remove incorrect 403 check that blocked STT when ENGINE="" (local whisper)
- Change TTS empty ENGINE check from 403 to 404 for proper semantics
2026-01-10 15:32:22 +04:00
Classic298
81510e9d8f fix(files): prevent connection pool exhaustion in file status streaming (#20547)
Refactored the file processing status streaming endpoint to avoid holding
a database connection for the entire stream duration (up to 2 hours).
Changes:
- Each status poll now creates its own short-lived database session instead
  of capturing the request's session in the generator closure
- Increased poll interval from 0.5s to 1s, halving database queries with
  negligible UX impact
This prevents a single file status stream from blocking a connection pool
slot for hours, which could contribute to pool exhaustion under load.
2026-01-10 15:23:48 +04:00
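
The per-poll session pattern this fix describes can be sketched as follows. All names here (`short_session`, `status_stream`, the counter) are hypothetical stand-ins for illustration, not the actual endpoint code.

```python
from contextlib import contextmanager

checked_out = 0  # toy counter standing in for checked-out pool connections

@contextmanager
def short_session():
    global checked_out
    checked_out += 1
    try:
        yield
    finally:
        checked_out -= 1

def status_stream(file_id, polls=3):
    for _ in range(polls):
        # Fresh short-lived session per poll, instead of capturing the
        # request's session in the generator closure for the whole stream.
        with short_session():
            status = {"id": file_id, "state": "processing"}
        yield status  # no connection held while the client consumes this
        # the real endpoint sleeps ~1s here between polls, connection-free

statuses = list(status_stream("file-1"))
```

Because the session opens and closes inside each loop iteration, a stream that runs for hours never pins a pool slot between polls.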
Timothy Jaeryang Baek
8646aebaab refac/fix: DATABASE_ENABLE_SESSION_SHARING env var 2026-01-10 00:16:04 +04:00
Timothy Jaeryang Baek
5990c51ab5 chore: format 2026-01-09 22:27:53 +04:00
Timothy Jaeryang Baek
3c986adeda enh: kb metadata search 2026-01-09 22:21:00 +04:00
Timothy Jaeryang Baek
7a7a0c423b chore: format 2026-01-09 20:44:31 +04:00
Timothy Jaeryang Baek
74c4af6e11 refac 2026-01-09 20:25:51 +04:00
Timothy Jaeryang Baek
9496e8f7b5 feat: model evaluation activity chart 2026-01-09 20:19:51 +04:00
Timothy Jaeryang Baek
a7b4b6e51a enh: WHISPER_MULTILINGUAL 2026-01-09 19:42:13 +04:00
Timothy Jaeryang Baek
401c1949a0 refac 2026-01-09 18:51:38 +04:00
Timothy Jaeryang Baek
10838b3654 refac/fix: feedback leaderboard 2026-01-09 18:24:09 +04:00
Timothy Jaeryang Baek
3a57233dd4 chore: aiohttp 2026-01-09 18:10:27 +04:00
Tim Baek
daccf0713e enh: file context model setting 2026-01-09 03:41:43 -05:00
Timothy Jaeryang Baek
1138929f4d feat: headless admin creation 2026-01-09 12:01:36 +04:00
Timothy Jaeryang Baek
b2a1f71d92 refac: get feedback ids 2026-01-09 03:06:24 +04:00
Timothy Jaeryang Baek
ffbd6ec7f2 refac 2026-01-09 03:03:25 +04:00
Timothy Jaeryang Baek
b377e5ff4c chore: format 2026-01-09 02:46:04 +04:00