1370 Commits

Author SHA1 Message Date
Timothy Jaeryang Baek
8ff7ff459b chore: format 2026-04-24 18:48:21 +09:00
Timothy Jaeryang Baek
3d1e355df7 refac 2026-04-24 18:20:10 +09:00
Timothy Jaeryang Baek
678c44c7cd refac 2026-04-24 16:17:46 +09:00
Jacob Leksan
f2cb63140c perf: redirect default model profile image to canonical static URL (#24015)
- Return 302 to /static/favicon.png instead of streaming the same PNG per
  model id so browsers can cache one asset for default avatars.
- Validate stored /static/ paths with decode, normpath, and /static
  prefix checks; invalid paths fall back to favicon.

Made-with: Cursor
2026-04-24 15:45:10 +09:00
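The validation described in #24015 (decode, normpath, /static prefix check, favicon fallback) can be sketched roughly as below. This is an illustrative reconstruction, not the actual Open WebUI code; the function name `resolve_profile_image_url` and the `DEFAULT_AVATAR` constant are assumptions.

```python
# Hypothetical sketch: decode the stored path, normalize it, and require it
# to stay under /static/ before using it as a 302 redirect target.
from urllib.parse import unquote
import posixpath

DEFAULT_AVATAR = "/static/favicon.png"

def resolve_profile_image_url(stored_path):
    """Return a safe /static/ URL for a redirect, else the favicon."""
    if not stored_path:
        return DEFAULT_AVATAR
    decoded = unquote(stored_path)           # undo percent-encoding tricks
    normalized = posixpath.normpath(decoded)  # collapse ../ segments
    # Anything that escapes the static prefix falls back to the favicon.
    if normalized.startswith("/static/"):
        return normalized
    return DEFAULT_AVATAR
```

Redirecting every default avatar to one canonical URL lets the browser cache a single asset instead of re-downloading an identical PNG per model id.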
Timothy Jaeryang Baek
7da6b82471 refac 2026-04-24 15:35:59 +09:00
goodbey857
58bc254809 feat: add PaddleOCR-vl loader support and implement retrieval router infrastructure (#23945)
Co-authored-by: Tim Baek <tim@openwebui.com>
Co-authored-by: joaoback <156559121+joaoback@users.noreply.github.com>
2026-04-24 15:19:37 +09:00
Timothy Jaeryang Baek
d0e51bde5d refac 2026-04-24 15:03:29 +09:00
Timothy Jaeryang Baek
6cc799b1bb chore: format 2026-04-21 15:52:00 +09:00
Timothy Jaeryang Baek
b9fc3f367a refac 2026-04-21 15:47:32 +09:00
Timothy Jaeryang Baek
c7b6de6ca4 refac 2026-04-21 15:41:07 +09:00
Timothy Jaeryang Baek
46d73c9dcd refac 2026-04-21 13:46:39 +09:00
Tim Baek
51627555bf refac 2026-04-20 03:35:17 -04:00
Timothy Jaeryang Baek
47329b5032 refac 2026-04-20 08:55:34 +09:00
Timothy Jaeryang Baek
d5e69f182c refac 2026-04-20 08:53:06 +09:00
Timothy Jaeryang Baek
e29d145a1c refac 2026-04-20 08:48:35 +09:00
Classic298
b3ca943da1 perf(channels): batch user lookup in model_response_handler thread history (#23795)
* perf(channels): batch user lookup in model_response_handler thread history

The thread-history builder in model_response_handler called
Users.get_user_by_id once per thread message (deduped via an intra-loop
dict), producing N individual SELECTs for a thread of N unique authors.

Replace with a single Users.get_users_by_user_ids call that returns all
authors in one WHERE id IN (...) query, matching the batch pattern
already used elsewhere in this file (lines 739, 804, 1320).

Behavior is preserved: deleted users still resolve to None and fall
through to the existing 'Unknown' fallback via .get().

* refac(channels): rename loop vars to full words per review

Address reviewer feedback to use descriptive names `message` and `user`
instead of single-letter `m` and `u` in the batch user-lookup
comprehensions.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-20 08:37:07 +09:00
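The batch pattern #23795 describes (one WHERE id IN (...) query instead of N per-message SELECTs, with deleted users falling through to "Unknown") looks roughly like this sketch. The helper signature stands in for `Users.get_users_by_user_ids`; the message/history shapes are illustrative assumptions.

```python
# Illustrative sketch of the batched thread-history build described above.
def build_thread_history(messages, get_users_by_user_ids):
    """get_users_by_user_ids stands in for Users.get_users_by_user_ids."""
    # One batched fetch for all unique authors instead of one SELECT each.
    user_ids = list({message["user_id"] for message in messages})
    users_by_id = {user["id"]: user for user in get_users_by_user_ids(user_ids)}

    history = []
    for message in messages:
        user = users_by_id.get(message["user_id"])  # deleted users -> None
        author = user["name"] if user else "Unknown"
        history.append({"author": author, "content": message["content"]})
    return history
```

The intra-loop dedup dict from the old code becomes unnecessary: the set comprehension dedups ids up front, and `.get()` preserves the None-to-"Unknown" fallback.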
Timothy Jaeryang Baek
56c5bc1d34 refac 2026-04-20 08:36:24 +09:00
Timothy Jaeryang Baek
fd25152076 refac 2026-04-20 08:34:15 +09:00
Timothy Jaeryang Baek
24dd5b461e refac 2026-04-20 00:09:24 +09:00
Timothy Jaeryang Baek
37eba1c5a6 chore: format 2026-04-19 22:45:54 +09:00
Timothy Jaeryang Baek
5afc258c5b refac 2026-04-19 22:37:10 +09:00
Timothy Jaeryang Baek
42694c7c0c refac 2026-04-19 22:33:32 +09:00
Timothy Jaeryang Baek
f45d0f130e refac 2026-04-19 22:22:15 +09:00
Timothy Jaeryang Baek
4a5401b417 refac 2026-04-19 21:48:27 +09:00
Timothy Jaeryang Baek
8d739e2aba feat: calendar 2026-04-19 19:15:05 +09:00
Classic298
f0e0cfcf02 perf: avoid redundant knowledge re-fetch in update_knowledge_access_by_id (#23799)
After set_access_grants, the handler was reloading the same knowledge
record via get_knowledge_by_id, which triggers an extra SELECT plus a
nested fetch of access grants. set_access_grants already returns the
newly-written grants and the local knowledge object is otherwise
unchanged, so update it in place and reuse it for the response.

https://claude.ai/code/session_01S18Lgqbih7Ry2JZUUv8TxF

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-17 14:44:42 +09:00
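The in-place update #23799 describes can be sketched as follows; the names `update_knowledge_access` and the dict-shaped record are assumptions, not the real handler.

```python
# Hedged sketch: reuse the grants returned by the write instead of
# re-fetching the whole knowledge record with a second SELECT.
def update_knowledge_access(knowledge, set_access_grants):
    new_grants = set_access_grants(knowledge["id"])
    # The record is otherwise unchanged, so patch it in place and
    # return the same object for the response.
    knowledge["access_grants"] = new_grants
    return knowledge
```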
Timothy Jaeryang Baek
4113b15a60 chore: format 2026-04-17 14:28:18 +09:00
Timothy Jaeryang Baek
8acce144f9 refac 2026-04-17 14:15:36 +09:00
Timothy Jaeryang Baek
860b90fd17 refac 2026-04-17 13:47:21 +09:00
Timothy Jaeryang Baek
914ccf07ef refac 2026-04-17 13:37:52 +09:00
Timothy Jaeryang Baek
ba83613ff2 refac 2026-04-17 13:35:35 +09:00
Timothy Jaeryang Baek
499129625b refac
Co-Authored-By: Classic298 <27028174+Classic298@users.noreply.github.com>
2026-04-17 13:33:11 +09:00
Timothy Jaeryang Baek
34d569d564 refac 2026-04-17 13:04:07 +09:00
Timothy Jaeryang Baek
7e453de4f7 refac 2026-04-17 11:54:19 +09:00
Timothy Jaeryang Baek
f1ef09ddc8 refac 2026-04-17 11:48:17 +09:00
Timothy Jaeryang Baek
3332878321 refac 2026-04-17 11:29:59 +09:00
Classic298
e396af3cc8 perf: reuse request db session in get_model_profile_image (#23796)
Pass the request-scoped AsyncSession into Models.get_model_by_id so the
endpoint no longer opens a fresh DB session on every call, avoiding an
extra connection acquisition per profile image request.

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-17 11:28:19 +09:00
Classic298
5eae0a5cdd perf(users): drop redundant get_user_by_id refetch in session-user endpoints (#23794)
* perf(users): drop redundant get_user_by_id refetch in session-user endpoints

Five /user/* handlers refetched the user row via Users.get_user_by_id(user.id)
immediately after receiving an identical UserModel from Depends(get_verified_user).
Since get_verified_user already populated the user within the same request
microseconds earlier, the refetch is pure overhead. The dead else branches
(unreachable — get_verified_user raises 401 on missing user) are removed as
a natural consequence.

Affected endpoints:
- GET  /user/settings
- GET  /user/status
- POST /user/status/update
- GET  /user/info
- POST /user/info/update

Eliminates one SELECT per request to each of these endpoints with no behavioral
change.

* fix(users): preserve USER_NOT_FOUND error on status update failure

update_user_status_by_id returns None when the target user is missing or
the update raises. The previous commit removed the pre-update existence
gate (get_user_by_id) and returned the update result directly, which
turned not-found/failure cases into 200 OK with a null body instead of
the expected 400 USER_NOT_FOUND.

Guard the update result explicitly to preserve the original API contract,
matching the equivalent pattern already applied in /user/info/update.

* docs(users): note lost-update tradeoff on /user/info/update

Make the concurrency tradeoff explicit: merging against the auth-time
snapshot slightly widens the lost-update window compared to the previous
pre-merge refetch, but the refetch only narrowed (did not eliminate) that
window. Real safety requires row locking or a version column.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-17 11:23:08 +09:00
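The guard restored in the second commit of #23794 (an update helper returning None must become an explicit error, not a 200 with a null body) follows this shape. The exception class and function names here are illustrative stand-ins, not the actual router code.

```python
# Sketch of the guard: update_user_status_by_id returns None when the
# target user is missing or the update fails, so the handler must map
# None to an explicit error to preserve the 400 USER_NOT_FOUND contract.
class UserNotFound(Exception):
    pass

def update_status(user_id, status, update_user_status_by_id):
    result = update_user_status_by_id(user_id, status)
    if result is None:
        raise UserNotFound(user_id)
    return result
```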
Timothy Jaeryang Baek
43e5905c13 refac 2026-04-17 11:18:48 +09:00
Timothy Jaeryang Baek
128cf41fce refac 2026-04-17 11:12:42 +09:00
Timothy Jaeryang Baek
2e52ad8ff2 refac: shared chat 2026-04-17 10:16:32 +09:00
Timothy Jaeryang Baek
4d2f189810 feat: add RAG_RERANKING_BATCH_SIZE configuration option
Add configurable reranker batch size (env var RAG_RERANKING_BATCH_SIZE,
default 32) following the same pattern as RAG_EMBEDDING_BATCH_SIZE.

- config.py: PersistentConfig for RAG_RERANKING_BATCH_SIZE
- main.py: import, state init, pass to get_reranking_function
- colbert.py: accept batch_size param in predict() (was hardcoded 32)
- utils.py: get_reranking_function passes batch_size at call time
- retrieval.py: expose in config GET/POST endpoints and ConfigForm
- Documents.svelte: add Reranking Batch Size input in admin settings

Closes #23730
2026-04-17 08:35:45 +09:00
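The batch-size change in colbert.py's predict() (configurable instead of hardcoded 32) amounts to chunked scoring along these lines; `predict_in_batches` and `score_batch` are illustrative names, with the PersistentConfig/env-var plumbing elided.

```python
# Minimal sketch of batched reranking: score (query, passage) pairs in
# chunks of batch_size rather than a hardcoded 32.
def predict_in_batches(pairs, score_batch, batch_size=32):
    scores = []
    for start in range(0, len(pairs), batch_size):
        scores.extend(score_batch(pairs[start:start + batch_size]))
    return scores
```

A smaller RAG_RERANKING_BATCH_SIZE trades throughput for peak memory, which is the same knob RAG_EMBEDDING_BATCH_SIZE already exposes for embeddings.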
Timothy Jaeryang Baek
5dae600ce7 chore: format 2026-04-14 17:27:31 -05:00
Timothy Jaeryang Baek
ecd74f220c refac 2026-04-14 17:22:54 -05:00
Timothy Jaeryang Baek
8bd23b9145 refac 2026-04-14 16:47:43 -05:00
Classic298
a3ea7bf043 fix(retrieval): offload Loader.load to a worker thread so file uploads stop blocking the event loop (#23705)
Loader.load() dispatches to the underlying langchain document loaders
(PyMuPDF, Unstructured, python-docx, Tika, …) which are all
synchronous and CPU/IO-bound. process_file() awaited it directly on
the event loop, so parsing a non-trivial PDF/DOCX would freeze the
entire FastAPI app for the duration of the parse — which is what users
experience as "the server hangs whenever I upload a file."

Add an `aload()` async wrapper on Loader that runs the sync load on a
worker thread via asyncio.to_thread, and update process_file() to
await it. The sync API is preserved so existing callers that already
run inside run_in_threadpool (e.g. save_docs_to_vector_db) are
unaffected.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-14 10:55:46 -05:00
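The `aload()` wrapper described in #23705 is essentially this shape: keep the sync `load()` untouched for callers already inside run_in_threadpool, and add an async sibling that off-loads to a worker thread. The `Loader` body here is a stand-in for the real langchain-dispatching loader.

```python
# Sketch of the async wrapper: the blocking parse runs on a worker
# thread via asyncio.to_thread so the event loop stays responsive.
import asyncio

class Loader:
    def load(self, path):
        # Stand-in for the synchronous, CPU/IO-bound document parse
        # (PyMuPDF, Unstructured, python-docx, Tika, ...).
        return ["parsed:" + path]

    async def aload(self, path):
        # Await the sync load on a worker thread instead of the loop.
        return await asyncio.to_thread(self.load, path)
```

`process_file()` awaiting `aload()` means a large PDF parse no longer freezes every other in-flight request, which was the reported "server hangs on upload" symptom.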
Timothy Jaeryang Baek
4866bec0f2 refac 2026-04-14 10:55:11 -05:00
Classic298
804f9f3153 fix(retrieval): offload sync VECTOR_DB_CLIENT calls in async paths via AsyncVectorDBClient (#23706)
* fix(retrieval): offload sync VECTOR_DB_CLIENT calls in async paths via AsyncVectorDBClient

The vector DB backends (Chroma, pgvector, Qdrant, Milvus, Pinecone,
Weaviate, …) are uniformly synchronous and their methods perform
blocking network or disk I/O. Multiple async route handlers and helpers
were calling them directly on the event loop — file processing,
memories, knowledge bases, hybrid search bookkeeping — so a single
upsert/delete/search would freeze every other in-flight request for the
duration of the call.

Introduce `AsyncVectorDBClient`, a thin async facade that wraps the
existing sync client and dispatches each method through
`asyncio.to_thread`. It mirrors `VectorDBBase` exactly and forwards
*args/**kwargs so backend-specific extra parameters keep working.

Update every async-context call site (routers/retrieval, routers/files,
routers/memories, routers/knowledge, retrieval/utils,
tools/builtin) to await `ASYNC_VECTOR_DB_CLIENT` instead of calling the
sync client directly. Two helpers that were sync-only also acquire
async siblings or are awaited via `asyncio.to_thread` at their async
call site (`remove_knowledge_base_metadata_embedding`,
`get_all_items_from_collections`, `query_doc`).

The original sync `VECTOR_DB_CLIENT` is unchanged, so callers that
already run inside `run_in_threadpool` (e.g. `save_docs_to_vector_db`
and the sync `query_doc`/`get_doc` helpers) are unaffected.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

* fix(retrieval): restore explicit AsyncVectorDBClient signatures matching VectorDBBase

Per PR review: the original *args/**kwargs forwarding lost type
safety and IDE/static-analysis support. Restore explicit signatures
that mirror VectorDBBase exactly, so:

  * Bad kwargs fail at the facade boundary instead of inside the
    worker thread (where the resulting TypeError tends to be
    swallowed by surrounding `try/except`).
  * IDE autocomplete and static analysis work as expected.
  * The stated intent ("mirror VectorDBBase exactly") now holds at
    the API contract level, not just behaviourally.

While doing this, surface a pre-existing bug in
`delete_entries_from_collection` that the stricter typing flagged:
the call passed `metadata={'hash': hash}` which is not a parameter
on `VectorDBBase.delete` nor any backend. The TypeError raised
inside the sync delete was silently swallowed by `except Exception`
so the endpoint always reported `{'status': False}` for every
request instead of actually deleting matching vectors. Replace with
`filter=...` to do what the endpoint name promises.

The thorough review's other note (no concurrency/backpressure on
the shared default threadpool) is intentionally not addressed here:
asyncio.to_thread on the shared executor is the right primitive for
this use case; per-domain bounded executors would add lifecycle
complexity disproportionate to the problem and the loop is no
longer blocked, which was the actual bug.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

* fix(retrieval): parallelize hybrid-search collection prefetch; document async facade contracts

Address PR review findings:

1. Hybrid-search prefetch was sequential
   `query_collection_with_hybrid_search` previously awaited
   `ASYNC_VECTOR_DB_CLIENT.get(name)` once per collection in a for
   loop. Each call already off-loaded to a worker thread, but
   awaiting them serially meant total prefetch latency scaled
   linearly with the number of collections. Run them concurrently
   with `asyncio.gather` so multi-collection queries actually
   benefit from the threadpool. Per-collection exception handling
   is preserved by wrapping each fetch in a small helper that
   logs and returns `(name, None)` on failure, so a single bad
   collection cannot poison the whole gather.

2. Document the thread-safety expectation explicitly
   The facade now formally states what was always implicit: the
   sync `VECTOR_DB_CLIENT` is shared across worker threads, so the
   underlying backend driver must be thread-safe. This is not a
   new exposure — `save_docs_to_vector_db` already called the sync
   client from `run_in_threadpool`. Adding a global lock here
   would defeat the responsiveness the facade exists to provide;
   backends that cannot tolerate concurrent access should grow
   their own internal serialization.

3. Document the API-surface choice and `.sync` escape hatch
   The strict `VectorDBBase` mirror was a deliberate choice (the
   previous `*args/**kwargs` revision let a `metadata=` typo
   silently break an endpoint). Document it, and call out the
   `.sync` escape hatch with an example for callers that genuinely
   need a backend-specific parameter not on `VectorDBBase`.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

* fix(retrieval): guard /delete against null file.hash and let HTTPException reach the client

Address PR review finding on the `metadata=` → `filter=` change in
`delete_entries_from_collection`.

The new `filter={'hash': hash}` query was correct for files that
have a hash, but did not handle `file.hash is None` (unprocessed,
failed, or legacy records). The match semantics of a null filter
value are backend-dependent — some ignore the key entirely, some
treat it as "metadata field absent" and match every such row — so
issuing the query risked deleting unrelated entries.

  * Reject `hash is None` up front with a 400 explaining the file
    has no hash to target.

  * Narrow the surrounding `except Exception` so it no longer
    swallows `HTTPException`. Without this fix the new 400 (and the
    pre-existing 404 for missing files) would be silently re-shaped
    into `{'status': False}` and the caller could not distinguish a
    bad-request input from a backend error.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-04-14 10:50:18 -05:00
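The facade and gather-based prefetch from #23706 can be sketched as below. This is an illustrative reduction, not the real client: the two methods shown stand in for the full explicit `VectorDBBase` mirror, and the `SyncVectorDB` stub replaces the actual backends.

```python
# Illustrative facade in the spirit of the commit: explicit signatures
# (not *args/**kwargs) so bad kwargs fail at the facade boundary, a
# .sync escape hatch for backend-specific parameters, and a concurrent
# prefetch where one bad collection cannot poison the gather.
import asyncio

class SyncVectorDB:
    # Stub for the shared sync client; real backends do blocking I/O here.
    def get(self, collection_name):
        return {"collection": collection_name, "items": []}

    def delete(self, collection_name, filter=None):
        return {"deleted": True, "filter": filter}

class AsyncVectorDBClient:
    def __init__(self, sync_client):
        self.sync = sync_client  # escape hatch for backend-specific params

    async def get(self, collection_name):
        return await asyncio.to_thread(self.sync.get, collection_name)

    async def delete(self, collection_name, filter=None):
        return await asyncio.to_thread(self.sync.delete, collection_name, filter=filter)

async def prefetch_collections(client, names):
    # Fetch all collections concurrently instead of awaiting serially.
    async def fetch(name):
        try:
            return name, await client.get(name)
        except Exception:
            return name, None  # log-and-continue per collection
    return dict(await asyncio.gather(*(fetch(name) for name in names)))
```

Note the thread-safety caveat the commit documents: the sync client is shared across worker threads, so this pattern assumes the underlying driver tolerates concurrent access.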
Timothy Jaeryang Baek
cf4218e688 refac 2026-04-13 21:29:03 -05:00
Timothy Jaeryang Baek
c767bcaa73 refac 2026-04-13 18:20:46 -05:00