* fix(retrieval): offload sync VECTOR_DB_CLIENT calls in async paths via AsyncVectorDBClient
The vector DB backends (Chroma, pgvector, Qdrant, Milvus, Pinecone,
Weaviate, …) are uniformly synchronous and their methods perform
blocking network or disk I/O. Multiple async route handlers and helpers
were calling them directly on the event loop — file processing,
memories, knowledge bases, hybrid search bookkeeping — so a single
upsert/delete/search would freeze every other in-flight request for the
duration of the call.
Introduce `AsyncVectorDBClient`, a thin async facade that wraps the
existing sync client and dispatches each method through
`asyncio.to_thread`. It mirrors `VectorDBBase` exactly and forwards
*args/**kwargs so backend-specific extra parameters keep working.
Update every async-context call site (routers/retrieval, routers/files,
routers/memories, routers/knowledge, retrieval/utils,
tools/builtin) to await `ASYNC_VECTOR_DB_CLIENT` instead of calling the
sync client directly. Helpers that were sync-only also acquire
async siblings or are awaited via `asyncio.to_thread` at their async
call sites (`remove_knowledge_base_metadata_embedding`,
`get_all_items_from_collections`, `query_doc`).
The original sync `VECTOR_DB_CLIENT` is unchanged, so callers that
already run inside `run_in_threadpool` (e.g. `save_docs_to_vector_db`
and the sync `query_doc`/`get_doc` helpers) are unaffected.
https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8
* fix(retrieval): restore explicit AsyncVectorDBClient signatures matching VectorDBBase
Per PR review: the original *args/**kwargs forwarding lost type
safety and IDE/static-analysis support. Restore explicit signatures
that mirror VectorDBBase exactly, so:
* Bad kwargs fail at the facade boundary instead of inside the
worker thread (where the resulting TypeError tends to be
swallowed by surrounding `try/except`).
* IDE autocomplete and static analysis work as expected.
* The stated intent ("mirror VectorDBBase exactly") now holds at
the API contract level, not just behaviourally.
While doing this, surface a pre-existing bug in
`delete_entries_from_collection` that the stricter typing flagged:
the call passed `metadata={'hash': hash}`, which is not a parameter
of `VectorDBBase.delete` or of any backend. The TypeError raised
inside the sync delete was silently swallowed by `except Exception`
so the endpoint always reported `{'status': False}` for every
request instead of actually deleting matching vectors. Replace with
`filter=...` to do what the endpoint name promises.
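A self-contained sketch of why the explicit signature catches this class
of bug (the class names and the minimal `delete` signature are
illustrative, not the real ones):

```python
import asyncio
from typing import Optional

class SyncClient:
    # Hypothetical stand-in; signature mirrors a minimal VectorDBBase.delete.
    def delete(self, collection_name: str, ids: Optional[list] = None,
               filter: Optional[dict] = None):
        return ("deleted", collection_name, ids, filter)

class AsyncVectorDBClient:
    def __init__(self, sync_client):
        self.sync = sync_client

    # Explicit signature instead of *args/**kwargs: an unknown kwarg such
    # as metadata=... raises TypeError here, at the facade boundary where
    # it is visible, not inside a worker thread behind a try/except.
    async def delete(self, collection_name: str, ids: Optional[list] = None,
                     filter: Optional[dict] = None):
        return await asyncio.to_thread(self.sync.delete,
                                       collection_name, ids, filter)

async def demo():
    client = AsyncVectorDBClient(SyncClient())
    ok = await client.delete("kb", filter={"hash": "abc"})  # correct kwarg
    try:
        await client.delete("kb", metadata={"hash": "abc"})  # the old bug
        caught = False
    except TypeError:
        caught = True
    return ok, caught

result = asyncio.run(demo())
```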
The thorough review's other note (no concurrency/backpressure on
the shared default threadpool) is intentionally not addressed here:
asyncio.to_thread on the shared executor is the right primitive for
this use case; per-domain bounded executors would add lifecycle
complexity disproportionate to the problem, and the loop is no
longer blocked, which was the actual bug.
https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8
* fix(retrieval): parallelize hybrid-search collection prefetch; document async facade contracts
Address PR review findings:
1. Hybrid-search prefetch was sequential
`query_collection_with_hybrid_search` previously awaited
`ASYNC_VECTOR_DB_CLIENT.get(name)` once per collection in a for
loop. Each call already off-loaded to a worker thread, but
awaiting them serially meant total prefetch latency scaled
linearly with the number of collections. Run them concurrently
with `asyncio.gather` so multi-collection queries actually
benefit from the threadpool. Per-collection exception handling
is preserved by wrapping each fetch in a small helper that
logs and returns `(name, None)` on failure, so a single bad
collection cannot poison the whole gather.
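A sketch of the concurrent prefetch pattern, with stand-in names for
the client call and the collections:

```python
import asyncio

async def fetch_collection(name):
    # Hypothetical stand-in for ASYNC_VECTOR_DB_CLIENT.get(name).
    if name == "broken":
        raise RuntimeError("backend unavailable")
    return {"name": name}

async def _safe_get(name):
    # One bad collection yields (name, None) instead of failing the gather.
    try:
        return name, await fetch_collection(name)
    except Exception:
        return name, None

async def prefetch(collection_names):
    # All fetches run concurrently; total latency no longer scales
    # linearly with the number of collections.
    pairs = await asyncio.gather(*(_safe_get(n) for n in collection_names))
    return dict(pairs)

result = asyncio.run(prefetch(["a", "broken", "b"]))
```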
2. Document the thread-safety expectation explicitly
The facade now formally states what was always implicit: the
sync `VECTOR_DB_CLIENT` is shared across worker threads, so the
underlying backend driver must be thread-safe. This is not a
new exposure — `save_docs_to_vector_db` already called the sync
client from `run_in_threadpool`. Adding a global lock here
would defeat the responsiveness the facade exists to provide;
backends that cannot tolerate concurrent access should grow
their own internal serialization.
3. Document the API-surface choice and `.sync` escape hatch
The strict `VectorDBBase` mirror was a deliberate choice (the
previous `*args/**kwargs` revision let a `metadata=` typo
silently break an endpoint). Document it, and call out the
`.sync` escape hatch with an example for callers that genuinely
need a backend-specific parameter not on `VectorDBBase`.
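An illustrative example of the escape hatch (`ef_search` is a made-up
backend-specific parameter, not part of the real contract):

```python
import asyncio

class SyncClient:
    # ef_search stands in for a backend-specific parameter that is not
    # part of the VectorDBBase contract.
    def search(self, collection_name, query, ef_search=64):
        return {"collection": collection_name, "ef_search": ef_search}

class AsyncVectorDBClient:
    def __init__(self, sync_client):
        self.sync = sync_client  # escape hatch for off-contract calls

async def main():
    client = AsyncVectorDBClient(SyncClient())
    # Off-contract parameter: go through .sync explicitly, still
    # off-loaded via asyncio.to_thread so the loop is not blocked.
    return await asyncio.to_thread(client.sync.search, "docs", "q",
                                   ef_search=128)

result = asyncio.run(main())
```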
https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8
* fix(retrieval): guard /delete against null file.hash and let HTTPException reach the client
Address PR review finding on the `metadata=` → `filter=` change in
`delete_entries_from_collection`.
The new `filter={'hash': hash}` query was correct for files that
have a hash, but did not handle `file.hash is None` (unprocessed,
failed, or legacy records). The match semantics of a null filter
value are backend-dependent — some ignore the key entirely, some
treat it as "metadata field absent" and match every such row — so
issuing the query risked deleting unrelated entries.
* Reject `hash is None` up front with a 400 explaining the file
has no hash to target.
* Narrow the surrounding `except Exception` so it no longer
swallows `HTTPException`. Without this fix the new 400 (and the
pre-existing 404 for missing files) would be silently re-shaped
into `{'status': False}` and the caller could not distinguish a
bad-request input from a backend error.
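A condensed sketch of the resulting control flow, using stand-ins for
the FastAPI exception and for the file record and delete call:

```python
class HTTPException(Exception):
    # Stand-in for fastapi.HTTPException so the sketch is self-contained.
    def __init__(self, status_code, detail):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def delete_entries_from_collection(file, delete_fn):
    # `file` is a dict stand-in for the real record; delete_fn stands in
    # for the vector DB delete call.
    try:
        if file.get("hash") is None:
            # Null-filter semantics are backend-dependent; reject up front
            # rather than risk matching unrelated rows.
            raise HTTPException(400, "file has no hash to target")
        delete_fn(filter={"hash": file["hash"]})
        return {"status": True}
    except HTTPException:
        raise  # let 400/404 reach the client instead of re-shaping them
    except Exception:
        return {"status": False}
```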
https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8
---------
Co-authored-by: Claude <noreply@anthropic.com>
Refactored the file processing status streaming endpoint to avoid holding
a database connection for the entire stream duration (up to 2 hours).
Changes:
- Each status poll now creates its own short-lived database session instead
of capturing the request's session in the generator closure
- Increased poll interval from 0.5s to 1s, halving database queries with
negligible UX impact
This prevents a single file status stream from blocking a connection pool
slot for hours, which could contribute to pool exhaustion under load.
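The shape of the refactored polling loop, with hypothetical names for
the session factory and the status fetch:

```python
import asyncio
from contextlib import contextmanager

@contextmanager
def get_session():
    # Hypothetical session factory: the connection is held only for the
    # body of the with-block, then returned to the pool.
    session = {"open": True}
    try:
        yield session
    finally:
        session["open"] = False

async def stream_status(file_id, fetch_status, poll_interval=1.0,
                        max_polls=7200):
    # Each poll opens its own short-lived session instead of capturing
    # the request's session for the whole (possibly hours-long) stream.
    for _ in range(max_polls):
        with get_session() as session:
            status = fetch_status(session, file_id)
        yield status
        if status == "done":
            return
        await asyncio.sleep(poll_interval)

async def _demo():
    statuses = iter(["pending", "done"])
    return [s async for s in stream_status("f1", lambda s, f: next(statuses),
                                           poll_interval=0)]

result = asyncio.run(_demo())
```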