Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-04 19:29:27 -05:00)
* fix(retrieval): offload sync VECTOR_DB_CLIENT calls in async paths via AsyncVectorDBClient

The vector DB backends (Chroma, pgvector, Qdrant, Milvus, Pinecone, Weaviate, …) are uniformly synchronous, and their methods perform blocking network or disk I/O. Multiple async route handlers and helpers (file processing, memories, knowledge bases, hybrid-search bookkeeping) were calling them directly on the event loop, so a single upsert/delete/search would freeze every other in-flight request for the duration of the call.

Introduce `AsyncVectorDBClient`, a thin async facade that wraps the existing sync client and dispatches each method through `asyncio.to_thread`. It mirrors `VectorDBBase` exactly and forwards *args/**kwargs, so backend-specific extra parameters keep working. Update every async-context call site (routers/retrieval, routers/files, routers/memories, routers/knowledge, retrieval/utils, tools/builtin) to await `ASYNC_VECTOR_DB_CLIENT` instead of calling the sync client directly. Two helpers that were sync-only also gain async siblings or are awaited via `asyncio.to_thread` at their async call sites (`remove_knowledge_base_metadata_embedding`, `get_all_items_from_collections`, `query_doc`).

The original sync `VECTOR_DB_CLIENT` is unchanged, so callers that already run inside `run_in_threadpool` (e.g. `save_docs_to_vector_db` and the sync `query_doc`/`get_doc` helpers) are unaffected.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

* fix(retrieval): restore explicit AsyncVectorDBClient signatures matching VectorDBBase

Per PR review: the original *args/**kwargs forwarding lost type safety and IDE/static-analysis support. Restore explicit signatures that mirror VectorDBBase exactly, so that:

  * Bad kwargs fail at the facade boundary instead of inside the worker thread (where the resulting TypeError tends to be swallowed by surrounding `try/except`).
  * IDE autocomplete and static analysis work as expected.
  * The stated intent ("mirror VectorDBBase exactly") now holds at the API contract level, not just behaviourally.

While doing this, surface a pre-existing bug in `delete_entries_from_collection` that the stricter typing flagged: the call passed `metadata={'hash': hash}`, which is not a parameter on `VectorDBBase.delete` nor on any backend. The TypeError raised inside the sync delete was silently swallowed by `except Exception`, so the endpoint always reported `{'status': False}` for every request instead of actually deleting matching vectors. Replace it with `filter=...` so the endpoint does what its name promises.

The thorough review's other note (no concurrency/backpressure on the shared default threadpool) is intentionally not addressed here: `asyncio.to_thread` on the shared executor is the right primitive for this use case. Per-domain bounded executors would add lifecycle complexity disproportionate to the problem, and the loop is no longer blocked, which was the actual bug.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

* fix(retrieval): parallelize hybrid-search collection prefetch; document async facade contracts

Address PR review findings:

1. Hybrid-search prefetch was sequential. `query_collection_with_hybrid_search` previously awaited `ASYNC_VECTOR_DB_CLIENT.get(name)` once per collection in a for loop. Each call already off-loaded to a worker thread, but awaiting them serially meant total prefetch latency scaled linearly with the number of collections. Run them concurrently with `asyncio.gather` so multi-collection queries actually benefit from the threadpool. Per-collection exception handling is preserved by wrapping each fetch in a small helper that logs and returns `(name, None)` on failure, so a single bad collection cannot poison the whole gather.
2. Document the thread-safety expectation explicitly. The facade now formally states what was always implicit: the sync `VECTOR_DB_CLIENT` is shared across worker threads, so the underlying backend driver must be thread-safe. This is not a new exposure; `save_docs_to_vector_db` already called the sync client from `run_in_threadpool`. Adding a global lock here would defeat the responsiveness the facade exists to provide; backends that cannot tolerate concurrent access should grow their own internal serialization.

3. Document the API-surface choice and the `.sync` escape hatch. The strict `VectorDBBase` mirror was a deliberate choice (the previous *args/**kwargs revision let a `metadata=` typo silently break an endpoint). Document it, and call out the `.sync` escape hatch, with an example, for callers that genuinely need a backend-specific parameter not on `VectorDBBase`.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

* fix(retrieval): guard /delete against null file.hash and let HTTPException reach the client

Address the PR review finding on the `metadata=` → `filter=` change in `delete_entries_from_collection`. The new `filter={'hash': hash}` query was correct for files that have a hash, but did not handle `file.hash is None` (unprocessed, failed, or legacy records). The match semantics of a null filter value are backend-dependent (some backends ignore the key entirely; some treat it as "metadata field absent" and match every such row), so issuing the query risked deleting unrelated entries.

  * Reject `hash is None` up front with a 400 explaining the file has no hash to target.
  * Narrow the surrounding `except Exception` so it no longer swallows `HTTPException`. Without this fix, the new 400 (and the pre-existing 404 for missing files) would be silently re-shaped into `{'status': False}`, and the caller could not distinguish a bad-request input from a backend error.

https://claude.ai/code/session_01JSr4NZSskEUQvoJnavVXh8

---------

Co-authored-by: Claude <noreply@anthropic.com>
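To make the facade mechanics described above concrete, here is a minimal, self-contained sketch of the pattern the first commit names: a synchronous client whose blocking methods are dispatched through `asyncio.to_thread`, with the wrapped client kept reachable as `.sync` as the escape hatch the third commit mentions. `SyncVectorDBClient` and its `search` signature are illustrative stand-ins, not the real open-webui API.

```python
import asyncio


class SyncVectorDBClient:
    """Stand-in for a synchronous vector DB backend whose methods block on I/O."""

    def search(self, collection_name: str, query: str) -> list:
        # A real backend would perform network or disk I/O here.
        return [f'{collection_name}:{query}']


class AsyncVectorDBClient:
    """Async facade: each method dispatches the matching sync method through
    asyncio.to_thread, so the event loop is never blocked by backend I/O.
    The wrapped client stays reachable as `.sync` for backend-specific calls."""

    def __init__(self, sync_client: SyncVectorDBClient):
        self.sync = sync_client

    async def search(self, collection_name: str, query: str) -> list:
        return await asyncio.to_thread(self.sync.search, collection_name, query)


async def main() -> list:
    client = AsyncVectorDBClient(SyncVectorDBClient())
    return await client.search('docs', 'hello')


result = asyncio.run(main())
```

Other in-flight coroutines keep running while the worker thread blocks, which is the responsiveness property the commit is after.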
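The concurrent prefetch from point 1 above can be sketched as follows. `STORE`, `sync_get`, `fetch_one`, and `prefetch` are hypothetical names, but the shape — `asyncio.gather` over per-item helpers that catch their own exceptions and return `(name, None)` — is the technique the commit describes.

```python
import asyncio

# Hypothetical in-memory store standing in for the vector DB;
# 'bad' simulates a collection whose fetch raises.
STORE = {'a': ['doc-a'], 'b': ['doc-b']}


def sync_get(collection_name: str) -> list:
    return STORE[collection_name]  # raises KeyError for unknown collections


async def fetch_one(name: str):
    """Off-load one blocking fetch; swallow its own failure so a single bad
    collection cannot poison the whole gather."""
    try:
        return name, await asyncio.to_thread(sync_get, name)
    except Exception:
        return name, None


async def prefetch(names: list) -> dict:
    # All fetches run concurrently on worker threads instead of serially,
    # so total latency no longer scales linearly with the collection count.
    results = await asyncio.gather(*(fetch_one(n) for n in names))
    return dict(results)


prefetched = asyncio.run(prefetch(['a', 'bad', 'b']))
```

Because each helper handles its own exception, `gather` never sees one, and the failed collection simply maps to `None`.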
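The guard-and-narrow pattern from the last commit, as a runnable sketch. `HTTPError`, `VectorClient`, and `delete_entries` are stand-ins for `fastapi.HTTPException` and the real endpoint code; only the control flow (reject null hash up front, re-raise HTTP errors, degrade other failures to `{'status': False}`) reflects the change described.

```python
class HTTPError(Exception):
    """Stand-in for fastapi.HTTPException in this sketch."""

    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail


class VectorClient:
    """Stand-in vector DB client that records delete calls."""

    def __init__(self):
        self.deleted = []

    def delete(self, collection_name, filter):
        self.deleted.append((collection_name, filter))


def delete_entries(client: VectorClient, collection_name: str, file_hash) -> dict:
    if file_hash is None:
        # Reject up front: a null filter value has backend-dependent semantics
        # and could match (and delete) unrelated rows.
        raise HTTPError(400, 'File has no hash to target')
    try:
        client.delete(collection_name=collection_name, filter={'hash': file_hash})
        return {'status': True}
    except HTTPError:
        raise  # narrowed: intentional HTTP errors reach the caller
    except Exception:
        return {'status': False}  # backend failures still degrade gracefully


client = VectorClient()
ok = delete_entries(client, 'kb-1', 'abc123')
try:
    delete_entries(client, 'kb-1', None)
    rejected = False
except HTTPError:
    rejected = True
```

The narrowed `except` is the key detail: without it, the 400 would be re-shaped into `{'status': False}` and the caller could not tell bad input from a backend error.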
1109 lines
35 KiB
Python
from typing import List, Optional

from pydantic import BaseModel

from fastapi import APIRouter, Depends, HTTPException, status, Request, Query
from fastapi.responses import StreamingResponse

import logging
import io
import zipfile
from urllib.parse import quote

from sqlalchemy.ext.asyncio import AsyncSession

from open_webui.internal.db import get_async_session
from open_webui.models.groups import Groups
from open_webui.models.knowledge import (
    KnowledgeFileListResponse,
    Knowledges,
    KnowledgeForm,
    KnowledgeResponse,
    KnowledgeUserResponse,
)
from open_webui.models.files import Files, FileModel, FileMetadataResponse
from open_webui.retrieval.vector.async_client import ASYNC_VECTOR_DB_CLIENT
from open_webui.routers.retrieval import (
    process_file,
    ProcessFileForm,
    process_files_batch,
    BatchProcessFilesForm,
)
from open_webui.storage.provider import Storage

from open_webui.constants import ERROR_MESSAGES
from open_webui.utils.auth import get_verified_user, get_admin_user
from open_webui.utils.access_control import has_permission, filter_allowed_access_grants
from open_webui.models.access_grants import AccessGrants

from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL
from open_webui.models.models import Models, ModelForm

log = logging.getLogger(__name__)

router = APIRouter()
############################
# getKnowledgeBases
############################

PAGE_ITEM_COUNT = 30

############################
# Knowledge Base Embedding
############################

# Knowledge that sits unread serves no one. Let what is
# stored here find the ones who need it.
KNOWLEDGE_BASES_COLLECTION = 'knowledge-bases'
async def embed_knowledge_base_metadata(
    request: Request,
    knowledge_base_id: str,
    name: str,
    description: str,
) -> bool:
    """Generate and store embedding for knowledge base."""
    try:
        content = f'{name}\n\n{description}' if description else name
        embedding = await request.app.state.EMBEDDING_FUNCTION(content)

        await ASYNC_VECTOR_DB_CLIENT.upsert(
            collection_name=KNOWLEDGE_BASES_COLLECTION,
            items=[
                {
                    'id': knowledge_base_id,
                    'text': content,
                    'vector': embedding,
                    'metadata': {
                        'knowledge_base_id': knowledge_base_id,
                    },
                }
            ],
        )
        return True
    except Exception as e:
        log.error(f'Failed to embed knowledge base {knowledge_base_id}: {e}')
        return False


async def remove_knowledge_base_metadata_embedding(knowledge_base_id: str) -> bool:
    """Remove knowledge base embedding."""
    try:
        await ASYNC_VECTOR_DB_CLIENT.delete(
            collection_name=KNOWLEDGE_BASES_COLLECTION,
            ids=[knowledge_base_id],
        )
        return True
    except Exception as e:
        log.debug(f'Failed to remove embedding for {knowledge_base_id}: {e}')
        return False
class KnowledgeAccessResponse(KnowledgeUserResponse):
    write_access: Optional[bool] = False


class KnowledgeAccessListResponse(BaseModel):
    items: list[KnowledgeAccessResponse]
    total: int
@router.get('/', response_model=KnowledgeAccessListResponse)
async def get_knowledge_bases(
    page: Optional[int] = 1,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    page = max(page, 1)
    limit = PAGE_ITEM_COUNT
    skip = (page - 1) * limit

    filter = {}
    groups = await Groups.get_groups_by_member_id(user.id, db=db)
    user_group_ids = {group.id for group in groups}

    if user.role != 'admin' or not BYPASS_ADMIN_ACCESS_CONTROL:
        if groups:
            filter['group_ids'] = [group.id for group in groups]

        filter['user_id'] = user.id

    result = await Knowledges.search_knowledge_bases(user.id, filter=filter, skip=skip, limit=limit, db=db)

    # Batch-fetch writable knowledge IDs in a single query instead of N has_access calls
    knowledge_base_ids = [knowledge_base.id for knowledge_base in result.items]
    writable_knowledge_base_ids = await AccessGrants.get_accessible_resource_ids(
        user_id=user.id,
        resource_type='knowledge',
        resource_ids=knowledge_base_ids,
        permission='write',
        user_group_ids=user_group_ids,
        db=db,
    )

    return KnowledgeAccessListResponse(
        items=[
            KnowledgeAccessResponse(
                **knowledge_base.model_dump(),
                write_access=(
                    user.id == knowledge_base.user_id
                    or (user.role == 'admin' and BYPASS_ADMIN_ACCESS_CONTROL)
                    or knowledge_base.id in writable_knowledge_base_ids
                ),
            )
            for knowledge_base in result.items
        ],
        total=result.total,
    )
@router.get('/search', response_model=KnowledgeAccessListResponse)
async def search_knowledge_bases(
    query: Optional[str] = None,
    view_option: Optional[str] = None,
    page: Optional[int] = 1,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    page = max(page, 1)
    limit = PAGE_ITEM_COUNT
    skip = (page - 1) * limit

    filter = {}
    if query:
        filter['query'] = query
    if view_option:
        filter['view_option'] = view_option

    groups = await Groups.get_groups_by_member_id(user.id, db=db)
    user_group_ids = {group.id for group in groups}

    if user.role != 'admin' or not BYPASS_ADMIN_ACCESS_CONTROL:
        if groups:
            filter['group_ids'] = [group.id for group in groups]

        filter['user_id'] = user.id

    result = await Knowledges.search_knowledge_bases(user.id, filter=filter, skip=skip, limit=limit, db=db)

    # Batch-fetch writable knowledge IDs in a single query instead of N has_access calls
    knowledge_base_ids = [knowledge_base.id for knowledge_base in result.items]
    writable_knowledge_base_ids = await AccessGrants.get_accessible_resource_ids(
        user_id=user.id,
        resource_type='knowledge',
        resource_ids=knowledge_base_ids,
        permission='write',
        user_group_ids=user_group_ids,
        db=db,
    )

    return KnowledgeAccessListResponse(
        items=[
            KnowledgeAccessResponse(
                **knowledge_base.model_dump(),
                write_access=(
                    user.id == knowledge_base.user_id
                    or (user.role == 'admin' and BYPASS_ADMIN_ACCESS_CONTROL)
                    or knowledge_base.id in writable_knowledge_base_ids
                ),
            )
            for knowledge_base in result.items
        ],
        total=result.total,
    )
@router.get('/search/files', response_model=KnowledgeFileListResponse)
async def search_knowledge_files(
    query: Optional[str] = None,
    page: Optional[int] = 1,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    page = max(page, 1)
    limit = PAGE_ITEM_COUNT
    skip = (page - 1) * limit

    filter = {}
    if query:
        filter['query'] = query

    groups = await Groups.get_groups_by_member_id(user.id, db=db)
    if groups:
        filter['group_ids'] = [group.id for group in groups]

    filter['user_id'] = user.id

    return await Knowledges.search_knowledge_files(filter=filter, skip=skip, limit=limit, db=db)
############################
# CreateNewKnowledge
############################


@router.post('/create', response_model=Optional[KnowledgeResponse])
async def create_new_knowledge(
    request: Request,
    form_data: KnowledgeForm,
    user=Depends(get_verified_user),
):
    # NOTE: We intentionally do NOT use Depends(get_async_session) here.
    # Database operations (has_permission, filter_allowed_access_grants,
    # insert_new_knowledge) manage their own sessions. This prevents holding a
    # connection during embed_knowledge_base_metadata(), which makes external
    # embedding API calls (1-5+ seconds).
    if user.role != 'admin' and not await has_permission(
        user.id, 'workspace.knowledge', request.app.state.config.USER_PERMISSIONS
    ):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail=ERROR_MESSAGES.UNAUTHORIZED,
        )

    form_data.access_grants = await filter_allowed_access_grants(
        request.app.state.config.USER_PERMISSIONS,
        user.id,
        user.role,
        form_data.access_grants,
        'sharing.public_knowledge',
    )

    knowledge = await Knowledges.insert_new_knowledge(user.id, form_data)

    if knowledge:
        # Embed knowledge base for semantic search
        await embed_knowledge_base_metadata(
            request,
            knowledge.id,
            knowledge.name,
            knowledge.description,
        )
        return knowledge
    else:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.FILE_EXISTS,
        )
############################
# ReindexKnowledgeFiles
############################


@router.post('/reindex', response_model=bool)
async def reindex_knowledge_files(
    request: Request,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    if user.role != 'admin':
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail=ERROR_MESSAGES.UNAUTHORIZED,
        )

    knowledge_bases = await Knowledges.get_knowledge_bases(db=db)

    log.info(f'Starting reindexing for {len(knowledge_bases)} knowledge bases')

    for knowledge_base in knowledge_bases:
        failed_files = []
        try:
            files = await Knowledges.get_files_by_id(knowledge_base.id, db=db)
            try:
                if await ASYNC_VECTOR_DB_CLIENT.has_collection(collection_name=knowledge_base.id):
                    await ASYNC_VECTOR_DB_CLIENT.delete_collection(collection_name=knowledge_base.id)
            except Exception as e:
                log.error(f'Error deleting collection {knowledge_base.id}: {str(e)}')
                continue  # Skip this knowledge base, don't raise

            for file in files:
                try:
                    await process_file(
                        request,
                        ProcessFileForm(file_id=file.id, collection_name=knowledge_base.id),
                        user=user,
                        db=db,
                    )
                except Exception as e:
                    log.error(f'Error processing file {file.filename} (ID: {file.id}): {str(e)}')
                    failed_files.append({'file_id': file.id, 'error': str(e)})
                    continue

        except Exception as e:
            log.error(f'Error processing knowledge base {knowledge_base.id}: {str(e)}')
            # Don't raise, just continue with the next knowledge base
            continue

        if failed_files:
            log.warning(f'Failed to process {len(failed_files)} files in knowledge base {knowledge_base.id}')
            for failed in failed_files:
                log.warning(f'File ID: {failed["file_id"]}, Error: {failed["error"]}')

    log.info('Reindexing completed.')
    return True
############################
# ReindexKnowledgeBases
############################


@router.post('/metadata/reindex', response_model=dict)
async def reindex_knowledge_base_metadata_embeddings(
    request: Request,
    user=Depends(get_admin_user),
):
    """Batch embed all existing knowledge bases. Admin only.

    NOTE: We intentionally do NOT use Depends(get_async_session) here.
    This endpoint loops through ALL knowledge bases and calls embed_knowledge_base_metadata()
    for each one, making N external embedding API calls. Holding a session during
    this entire operation would exhaust the connection pool.
    """
    knowledge_bases = await Knowledges.get_knowledge_bases()
    log.info(f'Reindexing embeddings for {len(knowledge_bases)} knowledge bases')

    success_count = 0
    for kb in knowledge_bases:
        if await embed_knowledge_base_metadata(request, kb.id, kb.name, kb.description):
            success_count += 1

    log.info(f'Embedding reindex complete: {success_count}/{len(knowledge_bases)}')
    return {'total': len(knowledge_bases), 'success': success_count}
############################
# GetKnowledgeById
############################


class KnowledgeFilesResponse(KnowledgeResponse):
    files: Optional[list[FileMetadataResponse]] = None
    write_access: Optional[bool] = False


@router.get('/{id}', response_model=Optional[KnowledgeFilesResponse])
async def get_knowledge_by_id(id: str, user=Depends(get_verified_user), db: AsyncSession = Depends(get_async_session)):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)

    if knowledge:
        if (
            user.role == 'admin'
            or knowledge.user_id == user.id
            or await AccessGrants.has_access(
                user_id=user.id,
                resource_type='knowledge',
                resource_id=knowledge.id,
                permission='read',
                db=db,
            )
        ):
            return KnowledgeFilesResponse(
                **knowledge.model_dump(),
                write_access=(
                    user.id == knowledge.user_id
                    or (user.role == 'admin' and BYPASS_ADMIN_ACCESS_CONTROL)
                    or await AccessGrants.has_access(
                        user_id=user.id,
                        resource_type='knowledge',
                        resource_id=knowledge.id,
                        permission='write',
                        db=db,
                    )
                ),
            )
        else:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
            )
    else:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )
############################
# UpdateKnowledgeById
############################


@router.post('/{id}/update', response_model=Optional[KnowledgeFilesResponse])
async def update_knowledge_by_id(
    request: Request,
    id: str,
    form_data: KnowledgeForm,
    user=Depends(get_verified_user),
):
    # NOTE: We intentionally do NOT use Depends(get_async_session) here.
    # Database operations manage their own short-lived sessions internally.
    # This prevents holding a connection during embed_knowledge_base_metadata(),
    # which makes external embedding API calls (1-5+ seconds).
    knowledge = await Knowledges.get_knowledge_by_id(id=id)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    # Is the user the original creator, in a group with write access, or an admin?
    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    form_data.access_grants = await filter_allowed_access_grants(
        request.app.state.config.USER_PERMISSIONS,
        user.id,
        user.role,
        form_data.access_grants,
        'sharing.public_knowledge',
    )

    knowledge = await Knowledges.update_knowledge_by_id(id=id, form_data=form_data)
    if knowledge:
        # Re-embed knowledge base for semantic search
        await embed_knowledge_base_metadata(
            request,
            knowledge.id,
            knowledge.name,
            knowledge.description,
        )
        return KnowledgeFilesResponse(
            **knowledge.model_dump(),
            files=await Knowledges.get_file_metadatas_by_id(knowledge.id),
        )
    else:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ID_TAKEN,
        )
############################
# UpdateKnowledgeAccessById
############################


class KnowledgeAccessGrantsForm(BaseModel):
    access_grants: list[dict]


@router.post('/{id}/access/update', response_model=Optional[KnowledgeFilesResponse])
async def update_knowledge_access_by_id(
    request: Request,
    id: str,
    form_data: KnowledgeAccessGrantsForm,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
            db=db,
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    form_data.access_grants = await filter_allowed_access_grants(
        request.app.state.config.USER_PERMISSIONS,
        user.id,
        user.role,
        form_data.access_grants,
        'sharing.public_knowledge',
    )

    await AccessGrants.set_access_grants('knowledge', id, form_data.access_grants, db=db)

    return KnowledgeFilesResponse(
        **(await Knowledges.get_knowledge_by_id(id=id, db=db)).model_dump(),
        files=await Knowledges.get_file_metadatas_by_id(id, db=db),
    )
############################
# GetKnowledgeFilesById
############################


@router.get('/{id}/files', response_model=KnowledgeFileListResponse)
async def get_knowledge_files_by_id(
    id: str,
    query: Optional[str] = None,
    view_option: Optional[str] = None,
    order_by: Optional[str] = None,
    direction: Optional[str] = None,
    page: Optional[int] = 1,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if not (
        user.role == 'admin'
        or knowledge.user_id == user.id
        or await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='read',
            db=db,
        )
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    page = max(page, 1)

    limit = PAGE_ITEM_COUNT
    skip = (page - 1) * limit

    filter = {}
    if query:
        filter['query'] = query
    if view_option:
        filter['view_option'] = view_option
    if order_by:
        filter['order_by'] = order_by
    if direction:
        filter['direction'] = direction

    return await Knowledges.search_files_by_id(id, user.id, filter=filter, skip=skip, limit=limit, db=db)
############################
# AddFileToKnowledge
############################


class KnowledgeFileIdForm(BaseModel):
    file_id: str


@router.post('/{id}/file/add', response_model=Optional[KnowledgeFilesResponse])
async def add_file_to_knowledge_by_id(
    request: Request,
    id: str,
    form_data: KnowledgeFileIdForm,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
            db=db,
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    file = await Files.get_file_by_id(form_data.file_id, db=db)
    if not file:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )
    if not file.data:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.FILE_NOT_PROCESSED,
        )

    # Add content to the vector database
    try:
        await process_file(
            request,
            ProcessFileForm(file_id=form_data.file_id, collection_name=id),
            user=user,
            db=db,
        )

        # Add file to knowledge base
        await Knowledges.add_file_to_knowledge_by_id(knowledge_id=id, file_id=form_data.file_id, user_id=user.id, db=db)
    except Exception as e:
        log.debug(e)
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=str(e),
        )

    if knowledge:
        return KnowledgeFilesResponse(
            **knowledge.model_dump(),
            files=await Knowledges.get_file_metadatas_by_id(knowledge.id, db=db),
        )
    else:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )
@router.post('/{id}/file/update', response_model=Optional[KnowledgeFilesResponse])
async def update_file_from_knowledge_by_id(
    request: Request,
    id: str,
    form_data: KnowledgeFileIdForm,
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
            db=db,
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    file = await Files.get_file_by_id(form_data.file_id, db=db)
    if not file:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    # Validate the file actually belongs to this knowledge base
    if not await Knowledges.has_file(knowledge_id=id, file_id=form_data.file_id, db=db):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    # Remove content from the vector database
    await ASYNC_VECTOR_DB_CLIENT.delete(collection_name=knowledge.id, filter={'file_id': form_data.file_id})

    # Add content to the vector database
    try:
        await process_file(
            request,
            ProcessFileForm(file_id=form_data.file_id, collection_name=id),
            user=user,
            db=db,
        )
    except Exception as e:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=str(e),
        )

    if knowledge:
        return KnowledgeFilesResponse(
            **knowledge.model_dump(),
            files=await Knowledges.get_file_metadatas_by_id(knowledge.id, db=db),
        )
    else:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )
############################
# RemoveFileFromKnowledge
############################


@router.post('/{id}/file/remove', response_model=Optional[KnowledgeFilesResponse])
async def remove_file_from_knowledge_by_id(
    id: str,
    form_data: KnowledgeFileIdForm,
    delete_file: bool = Query(True),
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
            db=db,
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    file = await Files.get_file_by_id(form_data.file_id, db=db)
    if not file:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    # Validate the file actually belongs to this knowledge base
    if not await Knowledges.has_file(knowledge_id=id, file_id=form_data.file_id, db=db):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    await Knowledges.remove_file_from_knowledge_by_id(knowledge_id=id, file_id=form_data.file_id, db=db)

    # Remove content from the vector database
    try:
        await ASYNC_VECTOR_DB_CLIENT.delete(
            collection_name=knowledge.id, filter={'file_id': form_data.file_id}
        )  # Remove by file_id first

        if file.hash:
            # Remove by hash as well in case of duplicates; skip when the file
            # has no hash, since null-filter semantics are backend-dependent
            await ASYNC_VECTOR_DB_CLIENT.delete(
                collection_name=knowledge.id, filter={'hash': file.hash}
            )
    except Exception as e:
        log.debug('This was most likely caused by bypassing embedding processing')
        log.debug(e)

    if delete_file:
        try:
            # Remove the file's collection from the vector database
            file_collection = f'file-{form_data.file_id}'
            if await ASYNC_VECTOR_DB_CLIENT.has_collection(collection_name=file_collection):
                await ASYNC_VECTOR_DB_CLIENT.delete_collection(collection_name=file_collection)
        except Exception as e:
            log.debug('This was most likely caused by bypassing embedding processing')
            log.debug(e)

        # Delete file from database
        await Files.delete_file_by_id(form_data.file_id, db=db)

    if knowledge:
        return KnowledgeFilesResponse(
            **knowledge.model_dump(),
            files=await Knowledges.get_file_metadatas_by_id(knowledge.id, db=db),
        )
    else:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )
############################
# DeleteKnowledgeById
############################


@router.delete('/{id}/delete', response_model=bool)
async def delete_knowledge_by_id(
    id: str, user=Depends(get_verified_user), db: AsyncSession = Depends(get_async_session)
):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
            db=db,
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    log.info(f'Deleting knowledge base: {id} (name: {knowledge.name})')

    # Get all models
    models = await Models.get_all_models(db=db)
    log.info(f'Found {len(models)} models to check for knowledge base {id}')

    # Update models that reference this knowledge base
    for model in models:
        if model.meta and hasattr(model.meta, 'knowledge'):
            knowledge_list = model.meta.knowledge or []
            # Filter out the deleted knowledge base
            updated_knowledge = [k for k in knowledge_list if k.get('id') != id]

            # If the knowledge list changed, update the model
            if len(updated_knowledge) != len(knowledge_list):
                log.info(f'Updating model {model.id} to remove knowledge base {id}')
                model.meta.knowledge = updated_knowledge
                # Create a ModelForm for the update
                model_form = ModelForm(
                    id=model.id,
                    name=model.name,
                    base_model_id=model.base_model_id,
                    meta=model.meta,
                    params=model.params,
                    access_grants=model.access_grants,
                    is_active=model.is_active,
                )
                await Models.update_model_by_id(model.id, model_form, db=db)

    # Clean up vector DB
    try:
        await ASYNC_VECTOR_DB_CLIENT.delete_collection(collection_name=id)
    except Exception as e:
        log.debug(e)

    # Remove knowledge base embedding
    await remove_knowledge_base_metadata_embedding(id)

    result = await Knowledges.delete_knowledge_by_id(id=id, db=db)
    return result
############################
# ResetKnowledgeById
############################


@router.post('/{id}/reset', response_model=Optional[KnowledgeResponse])
async def reset_knowledge_by_id(
    id: str, user=Depends(get_verified_user), db: AsyncSession = Depends(get_async_session)
):
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
            db=db,
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    try:
        await ASYNC_VECTOR_DB_CLIENT.delete_collection(collection_name=id)
    except Exception as e:
        log.debug(e)
        pass

    knowledge = await Knowledges.reset_knowledge_by_id(id=id, db=db)
    return knowledge


############################
# AddFilesToKnowledge
############################


@router.post('/{id}/files/batch/add', response_model=Optional[KnowledgeFilesResponse])
async def add_files_to_knowledge_batch(
    request: Request,
    id: str,
    form_data: list[KnowledgeFileIdForm],
    user=Depends(get_verified_user),
    db: AsyncSession = Depends(get_async_session),
):
    """
    Add multiple files to a knowledge base
    """
    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    if (
        knowledge.user_id != user.id
        and not await AccessGrants.has_access(
            user_id=user.id,
            resource_type='knowledge',
            resource_id=knowledge.id,
            permission='write',
            db=db,
        )
        and user.role != 'admin'
    ):
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
        )

    # Batch-fetch all files to avoid N+1 queries
    log.info(f'files/batch/add - {len(form_data)} files')
    file_ids = [form.file_id for form in form_data]
    files = await Files.get_files_by_ids(file_ids, db=db)

    # Verify all requested files were found
    found_ids = {file.id for file in files}
    missing_ids = [fid for fid in file_ids if fid not in found_ids]
    if missing_ids:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f'File {missing_ids[0]} not found',
        )

    # Process files
    try:
        result = await process_files_batch(
            request=request,
            form_data=BatchProcessFilesForm(files=files, collection_name=id),
            user=user,
            db=db,
        )
    except Exception as e:
        log.error(f'add_files_to_knowledge_batch: Exception occurred: {e}', exc_info=True)
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))

    # Only add files that were successfully processed
    successful_file_ids = [r.file_id for r in result.results if r.status == 'completed']
    for file_id in successful_file_ids:
        await Knowledges.add_file_to_knowledge_by_id(knowledge_id=id, file_id=file_id, user_id=user.id, db=db)

    # If there were any errors, include them in the response
    if result.errors:
        error_details = [f'{err.file_id}: {err.error}' for err in result.errors]
        return KnowledgeFilesResponse(
            **knowledge.model_dump(),
            files=await Knowledges.get_file_metadatas_by_id(knowledge.id, db=db),
            warnings={
                'message': 'Some files failed to process',
                'errors': error_details,
            },
        )

    return KnowledgeFilesResponse(
        **knowledge.model_dump(),
        files=await Knowledges.get_file_metadatas_by_id(knowledge.id, db=db),
    )


############################
# ExportKnowledgeById
############################


@router.get('/{id}/export')
async def export_knowledge_by_id(id: str, user=Depends(get_admin_user), db: AsyncSession = Depends(get_async_session)):
    """
    Export a knowledge base as a zip file containing .txt files.
    Admin only.
    """

    knowledge = await Knowledges.get_knowledge_by_id(id=id, db=db)
    if not knowledge:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=ERROR_MESSAGES.NOT_FOUND,
        )

    files = await Knowledges.get_files_by_id(id, db=db)

    # Create zip file in memory
    zip_buffer = io.BytesIO()
    with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zf:
        for file in files:
            content = file.data.get('content', '') if file.data else ''
            if content:
                # Use original filename with .txt extension
                filename = file.filename
                if not filename.endswith('.txt'):
                    filename = f'{filename}.txt'
                zf.writestr(filename, content)

    zip_buffer.seek(0)

    # Sanitize knowledge name for filename
    # ASCII-safe fallback for the basic filename parameter (latin-1 safe)
    safe_name = ''.join(c if c.isascii() and (c.isalnum() or c in ' -_') else '_' for c in knowledge.name)
    zip_filename = f'{safe_name}.zip'

    # Use RFC 5987 filename* for non-ASCII names so the browser gets the real name
    quoted_name = quote(f'{knowledge.name}.zip')
    content_disposition = f'attachment; filename="{zip_filename}"; filename*=UTF-8\'\'{quoted_name}'

    return StreamingResponse(
        zip_buffer,
        media_type='application/zip',
        headers={'Content-Disposition': content_disposition},
    )