ImportError: cannot import name 'Dataset' from 'datasets' (unknown location) #2124

Closed
opened 2025-11-11 15:00:44 -06:00 by GiteaMirror · 0 comments

Originally created by @monkeycc on GitHub (Sep 19, 2024).

pip install open-webui

open-webui serve

Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
Generating a new secret key and saving it to C:\Users\mm\.webui_secret_key
Loading WEBUI_SECRET_KEY from C:\Users\mm\.webui_secret_key
D:\anaconda3\envs\ollama\Lib\site-packages\open_webui
D:\anaconda3\envs\ollama\Lib\site-packages
D:\anaconda3\envs\ollama\Lib
Running migrations
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> 7e5b5dc7342b, init
INFO  [alembic.runtime.migration] Running upgrade 7e5b5dc7342b -> ca81bd47c050, Add config table
INFO  [open_webui.env] 'DEFAULT_LOCALE' loaded from the latest database entry
INFO  [open_webui.env] 'DEFAULT_PROMPT_SUGGESTIONS' loaded from the latest database entry
WARNI [open_webui.env]

WARNING: CORS_ALLOW_ORIGIN IS SET TO '*' - NOT RECOMMENDED FOR PRODUCTION DEPLOYMENTS.

INFO  [open_webui.env] Embedding model set: sentence-transformers/all-MiniLM-L6-v2
INFO  [open_webui.apps.audio.main] whisper_device_type: cpu
WARNI [langchain_community.utils.user_agent] USER_AGENT environment variable not set, consider setting it to identify your requests.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\anaconda3\envs\ollama\Lib\site-packages\open_webui\__init__.py:42 in serve                                        │
│                                                                                                                      │
│   39 │   │   │   │   "/usr/local/lib/python3.11/site-packages/nvidia/cudnn/lib",              ╭───── locals ─────╮   │
│   40 │   │   │   ]                                                                            │ host = '0.0.0.0' │   │
│   41 │   │   )                                                                                │ port = 8080      │   │
│ ❱ 42 │   import open_webui.main  # we need set environment variables before importing main    ╰──────────────────╯   │
│   43 │                                                                                                               │
│   44 │   uvicorn.run(open_webui.main.app, host=host, port=port, forwarded_allow_ips="*")                             │
│   45                                                                                                                 │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\open_webui\main.py:30 in <module>                                         │
│                                                                                                                      │
│     27 │   generate_chat_completion as generate_openai_chat_completion,                                              │
│     28 )                                                                                                             │
│     29 from open_webui.apps.openai.main import get_all_models as get_openai_models                                   │
│ ❱   30 from open_webui.apps.rag.main import app as rag_app                                                           │
│     31 from open_webui.apps.rag.utils import get_rag_context, rag_template                                           │
│     32 from open_webui.apps.socket.main import app as socket_app                                                     │
│     33 from open_webui.apps.socket.main import get_event_call, get_event_emitter                                     │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │                         aiohttp = <module 'aiohttp' from                                                         │ │
│ │                                   'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\aiohttp\\__init__.py'>       │ │
│ │             asynccontextmanager = <function asynccontextmanager at 0x000002C09F6B36A0>                           │ │
│ │                       audio_app = <fastapi.applications.FastAPI object at 0x000002C0CF6F7410>                    │ │
│ │                          base64 = <module 'base64' from 'D:\\anaconda3\\envs\\ollama\\Lib\\base64.py'>           │ │
│ │ generate_ollama_chat_completion = <function generate_openai_chat_completion at 0x000002C0CF871B20>               │ │
│ │ generate_openai_chat_completion = <function generate_chat_completion at 0x000002C0CF8C9E40>                      │ │
│ │               get_ollama_models = <function get_all_models at 0x000002C0CF828F40>                                │ │
│ │               get_openai_models = <function get_all_models at 0x000002C0CF873740>                                │ │
│ │                      images_app = <fastapi.applications.FastAPI object at 0x000002C0CF7DFE90>                    │ │
│ │                         inspect = <module 'inspect' from 'D:\\anaconda3\\envs\\ollama\\Lib\\inspect.py'>         │ │
│ │                            json = <module 'json' from 'D:\\anaconda3\\envs\\ollama\\Lib\\json\\__init__.py'>     │ │
│ │                         logging = <module 'logging' from                                                         │ │
│ │                                   'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'>                      │ │
│ │                       mimetypes = <module 'mimetypes' from 'D:\\anaconda3\\envs\\ollama\\Lib\\mimetypes.py'>     │ │
│ │                      ollama_app = <fastapi.applications.FastAPI object at 0x000002C0CE2359D0>                    │ │
│ │                      openai_app = <fastapi.applications.FastAPI object at 0x000002C0CF87B4D0>                    │ │
│ │                        Optional = typing.Optional                                                                │ │
│ │                              os = <module 'os' (frozen)>                                                         │ │
│ │                        requests = <module 'requests' from                                                        │ │
│ │                                   'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\requests\\__init__.py'>      │ │
│ │                          shutil = <module 'shutil' from 'D:\\anaconda3\\envs\\ollama\\Lib\\shutil.py'>           │ │
│ │                             sys = <module 'sys' (built-in)>                                                      │ │
│ │                            time = <module 'time' (built-in)>                                                     │ │
│ │                            uuid = <module 'uuid' from 'D:\\anaconda3\\envs\\ollama\\Lib\\uuid.py'>               │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\open_webui\apps\rag\main.py:208 in <module>                               │
│                                                                                                                      │
│    205 │   │   app.state.sentence_transformer_rf = None                                                              │
│    206                                                                                                               │
│    207                                                                                                               │
│ ❱  208 update_embedding_model(                                                                                       │
│    209 │   app.state.config.RAG_EMBEDDING_MODEL,                                                                     │
│    210 │   RAG_EMBEDDING_MODEL_AUTO_UPDATE,                                                                          │
│    211 )                                                                                                             │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │                                    app = <fastapi.applications.FastAPI object at 0x000002C0D065BF50>             │ │
│ │                              AppConfig = <class 'open_webui.config.AppConfig'>                                   │ │
│ │                              BaseModel = <class 'pydantic.main.BaseModel'>                                       │ │
│ │                   BRAVE_SEARCH_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707B90>       │ │
│ │                           BSHTMLLoader = <class 'langchain_community.document_loaders.html_bs.BSHTMLLoader'>     │ │
│ │                       calculate_sha256 = <function calculate_sha256 at 0x000002C0CD903740>                       │ │
│ │                calculate_sha256_string = <function calculate_sha256_string at 0x000002C0CD9037E0>                │ │
│ │                          CHROMA_CLIENT = <chromadb.api.client.Client object at 0x000002C0CE029750>               │ │
│ │                          CHUNK_OVERLAP = <open_webui.config.PersistentConfig object at 0x000002C0CF582790>       │ │
│ │                             CHUNK_SIZE = <open_webui.config.PersistentConfig object at 0x000002C0CF583050>       │ │
│ │              CONTENT_EXTRACTION_ENGINE = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0310>       │ │
│ │                      CORS_ALLOW_ORIGIN = ['*']                                                                   │ │
│ │                         CORSMiddleware = <class 'starlette.middleware.cors.CORSMiddleware'>                      │ │
│ │                         create_batches = <function create_batches at 0x000002C0D060AA20>                         │ │
│ │                              CSVLoader = <class 'langchain_community.document_loaders.csv_loader.CSVLoader'>     │ │
│ │                               datetime = <class 'datetime.datetime'>                                             │ │
│ │                                Depends = <function Depends at 0x000002C0CE31E700>                                │ │
│ │                            DEVICE_TYPE = 'cpu'                                                                   │ │
│ │                               DOCS_DIR = 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\open_webui\\data/doc… │ │
│ │                               Document = <class 'langchain_core.documents.base.Document'>                        │ │
│ │                           DocumentForm = <class 'open_webui.apps.webui.models.documents.DocumentForm'>           │ │
│ │                              Documents = <open_webui.apps.webui.models.documents.DocumentsTable object at        │ │
│ │                                          0x000002C0D060CC90>                                                     │ │
│ │                         Docx2txtLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.word_document.Docx2txtLoader'>    │ │
│ │               ENABLE_RAG_HYBRID_SEARCH = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0510>       │ │
│ │             ENABLE_RAG_LOCAL_WEB_FETCH = False                                                                   │ │
│ │ ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0690>       │ │
│ │                  ENABLE_RAG_WEB_SEARCH = <open_webui.config.PersistentConfig object at 0x000002C0CF7078D0>       │ │
│ │                                    ENV = 'dev'                                                                   │ │
│ │                         ERROR_MESSAGES = <enum 'ERROR_MESSAGES'>                                                 │ │
│ │        extract_folders_after_data_docs = <function extract_folders_after_data_docs at 0x000002C0CD9039C0>        │ │
│ │                                FastAPI = <class 'fastapi.applications.FastAPI'>                                  │ │
│ │                                   File = <function File at 0x000002C0CE31E660>                                   │ │
│ │                                  Files = <open_webui.apps.webui.models.files.FilesTable object at                │ │
│ │                                          0x000002C0D061C310>                                                     │ │
│ │                                   Form = <function Form at 0x000002C0CE31E5C0>                                   │ │
│ │                         get_admin_user = <function get_admin_user at 0x000002C0CF434900>                         │ │
│ │                 get_embedding_function = <function get_embedding_function at 0x000002C0D05D1B20>                 │ │
│ │                         get_model_path = <function get_model_path at 0x000002C0D05D1C60>                         │ │
│ │                      get_verified_user = <function get_verified_user at 0x000002C0CF434860>                      │ │
│ │                     GOOGLE_PSE_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707A90>       │ │
│ │                   GOOGLE_PSE_ENGINE_ID = <open_webui.config.PersistentConfig object at 0x000002C0CF707B10>       │ │
│ │                          HTTPException = <class 'fastapi.exceptions.HTTPException'>                              │ │
│ │                               Iterator = typing.Iterator                                                         │ │
│ │                                   json = <module 'json' from                                                     │ │
│ │                                          'D:\\anaconda3\\envs\\ollama\\Lib\\json\\__init__.py'>                  │ │
│ │                                    log = <Logger open_webui.apps.rag.main (INFO)>                                │ │
│ │                                logging = <module 'logging' from                                                  │ │
│ │                                          'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'>               │ │
│ │                              mimetypes = <module 'mimetypes' from                                                │ │
│ │                                          'D:\\anaconda3\\envs\\ollama\\Lib\\mimetypes.py'>                       │ │
│ │                               Optional = typing.Optional                                                         │ │
│ │                                     os = <module 'os' (frozen)>                                                  │ │
│ │                   OutlookMessageLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.email.OutlookMessageLoader'>      │ │
│ │                                   Path = <class 'pathlib.Path'>                                                  │ │
│ │                     PDF_EXTRACT_IMAGES = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0790>       │ │
│ │                            PyPDFLoader = <class 'langchain_community.document_loaders.pdf.PyPDFLoader'>          │ │
│ │                       query_collection = <function query_collection at 0x000002C0D05D1760>                       │ │
│ │    query_collection_with_hybrid_search = <function query_collection_with_hybrid_search at 0x000002C0D05D1800>    │ │
│ │                              query_doc = <function query_doc at 0x000002C0CF9A71A0>                              │ │
│ │           query_doc_with_hybrid_search = <function query_doc_with_hybrid_search at 0x000002C0D05D13A0>           │ │
│ │                   RAG_EMBEDDING_ENGINE = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0710>       │ │
│ │                    RAG_EMBEDDING_MODEL = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0810>       │ │
│ │        RAG_EMBEDDING_MODEL_AUTO_UPDATE = False                                                                   │ │
│ │  RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE = False                                                                   │ │
│ │        RAG_EMBEDDING_OPENAI_BATCH_SIZE = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0890>       │ │
│ │                     RAG_FILE_MAX_COUNT = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0590>       │ │
│ │                      RAG_FILE_MAX_SIZE = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0610>       │ │
│ │                RAG_OPENAI_API_BASE_URL = <open_webui.config.PersistentConfig object at 0x000002C0CF599510>       │ │
│ │                     RAG_OPENAI_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707150>       │ │
│ │                RAG_RELEVANCE_THRESHOLD = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0450>       │ │
│ │                    RAG_RERANKING_MODEL = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0910>       │ │
│ │        RAG_RERANKING_MODEL_AUTO_UPDATE = False                                                                   │ │
│ │  RAG_RERANKING_MODEL_TRUST_REMOTE_CODE = False                                                                   │ │
│ │                           RAG_TEMPLATE = <open_webui.config.PersistentConfig object at 0x000002C0CF707310>       │ │
│ │                              RAG_TOP_K = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0490>       │ │
│ │     RAG_WEB_SEARCH_CONCURRENT_REQUESTS = <open_webui.config.PersistentConfig object at 0x000002C0CF71C050>       │ │
│ │      RAG_WEB_SEARCH_DOMAIN_FILTER_LIST = <open_webui.config.PersistentConfig object at 0x000002C0CF707990>       │ │
│ │                  RAG_WEB_SEARCH_ENGINE = <open_webui.config.PersistentConfig object at 0x000002C0CF707910>       │ │
│ │            RAG_WEB_SEARCH_RESULT_COUNT = <open_webui.config.PersistentConfig object at 0x000002C0CF707F90>       │ │
│ │         RecursiveCharacterTextSplitter = <class                                                                  │ │
│ │                                          'langchain_text_splitters.character.RecursiveCharacterTextSplitter'>    │ │
│ │                               requests = <module 'requests' from                                                 │ │
│ │                                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\requests\\__init__.p… │ │
│ │                      sanitize_filename = <function sanitize_filename at 0x000002C0CD903920>                      │ │
│ │                           search_brave = <function search_brave at 0x000002C0CF94C0E0>                           │ │
│ │                      search_duckduckgo = <function search_duckduckgo at 0x000002C0CF94C9A0>                      │ │
│ │                      search_google_pse = <function search_google_pse at 0x000002C0CF9A5B20>                      │ │
│ │                            search_jina = <function search_jina at 0x000002C0CF9A6520>                            │ │
│ │                       search_searchapi = <function search_searchapi at 0x000002C0CF9A6660>                       │ │
│ │                         search_searxng = <function search_searxng at 0x000002C0CF933F60>                         │ │
│ │                          search_serper = <function search_serper at 0x000002C0CF9A6840>                          │ │
│ │                          search_serply = <function search_serply at 0x000002C0CF9A6AC0>                          │ │
│ │                       search_serpstack = <function search_serpstack at 0x000002C0CF9A6D40>                       │ │
│ │                          search_tavily = <function search_tavily at 0x000002C0CF9A6FC0>                          │ │
│ │                      SEARCHAPI_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707F10>       │ │
│ │                       SEARCHAPI_ENGINE = <open_webui.config.PersistentConfig object at 0x000002C0CF707ED0>       │ │
│ │                           SearchResult = <class 'open_webui.apps.rag.search.main.SearchResult'>                  │ │
│ │                      SEARXNG_QUERY_URL = <open_webui.config.PersistentConfig object at 0x000002C0CF707A50>       │ │
│ │                               Sequence = typing.Sequence                                                         │ │
│ │                         SERPER_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707D50>       │ │
│ │                         SERPLY_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707DD0>       │ │
│ │                      SERPSTACK_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707C10>       │ │
│ │                        SERPSTACK_HTTPS = <open_webui.config.PersistentConfig object at 0x000002C0CF707D10>       │ │
│ │                                 shutil = <module 'shutil' from 'D:\\anaconda3\\envs\\ollama\\Lib\\shutil.py'>    │ │
│ │                                 socket = <module 'socket' from 'D:\\anaconda3\\envs\\ollama\\Lib\\socket.py'>    │ │
│ │                         SRC_LOG_LEVELS = {                                                                       │ │
│ │                                          │   'AUDIO': 'INFO',                                                    │ │
│ │                                          │   'COMFYUI': 'INFO',                                                  │ │
│ │                                          │   'CONFIG': 'INFO',                                                   │ │
│ │                                          │   'DB': 'INFO',                                                       │ │
│ │                                          │   'IMAGES': 'INFO',                                                   │ │
│ │                                          │   'MAIN': 'INFO',                                                     │ │
│ │                                          │   'MODELS': 'INFO',                                                   │ │
│ │                                          │   'OLLAMA': 'INFO',                                                   │ │
│ │                                          │   'OPENAI': 'INFO',                                                   │ │
│ │                                          │   'RAG': 'INFO',                                                      │ │
│ │                                          │   ... +1                                                              │ │
│ │                                          }                                                                       │ │
│ │                                 status = <module 'starlette.status' from                                         │ │
│ │                                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\starlette\\status.py… │ │
│ │                         TAVILY_API_KEY = <open_webui.config.PersistentConfig object at 0x000002C0CF707E50>       │ │
│ │                             TextLoader = <class 'langchain_community.document_loaders.text.TextLoader'>          │ │
│ │                        TIKA_SERVER_URL = <open_webui.config.PersistentConfig object at 0x000002C0CF4C0410>       │ │
│ │                                  Union = typing.Union                                                            │ │
│ │                 UnstructuredEPubLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.epub.UnstructuredEPubLoader'>     │ │
│ │                UnstructuredExcelLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.excel.UnstructuredExcelLoader'>   │ │
│ │             UnstructuredMarkdownLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.markdown.UnstructuredMarkdownLoa… │ │
│ │           UnstructuredPowerPointLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.powerpoint.UnstructuredPowerPoin… │ │
│ │                  UnstructuredRSTLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.rst.UnstructuredRSTLoader'>       │ │
│ │                  UnstructuredXMLLoader = <class                                                                  │ │
│ │                                          'langchain_community.document_loaders.xml.UnstructuredXMLLoader'>       │ │
│ │                 update_embedding_model = <function update_embedding_model at 0x000002C0D0690E00>                 │ │
│ │                 update_reranking_model = <function update_reranking_model at 0x000002C0D0690EA0>                 │ │
│ │                             UPLOAD_DIR = 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\open_webui\\data/upl… │ │
│ │                             UploadFile = <class 'fastapi.datastructures.UploadFile'>                             │ │
│ │                                 urllib = <module 'urllib' from                                                   │ │
│ │                                          'D:\\anaconda3\\envs\\ollama\\Lib\\urllib\\__init__.py'>                │ │
│ │                                   uuid = <module 'uuid' from 'D:\\anaconda3\\envs\\ollama\\Lib\\uuid.py'>        │ │
│ │                             validators = <module 'validators' from                                               │ │
│ │                                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\validators\\__init__… │ │
│ │                          WebBaseLoader = <class 'langchain_community.document_loaders.web_base.WebBaseLoader'>   │ │
│ │                YOUTUBE_LOADER_LANGUAGE = <open_webui.config.PersistentConfig object at 0x000002C0CF707810>       │ │
│ │                          YoutubeLoader = <class 'langchain_community.document_loaders.youtube.YoutubeLoader'>    │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\open_webui\apps\rag\main.py:181 in update_embedding_model                 │
│                                                                                                                      │
│    178 │   update_model: bool = False,                                                                               │
│    179 ):                                                                                                            │
│    180 │   if embedding_model and app.state.config.RAG_EMBEDDING_ENGINE == "":                                       │
│ ❱  181 │   │   import sentence_transformers                                                                          │
│    182 │   │                                                                                                         │
│    183 │   │   app.state.sentence_transformer_ef = sentence_transformers.SentenceTransformer(                        │
│    184 │   │   │   get_model_path(embedding_model, update_model),                                                    │
│                                                                                                                      │
│ ╭────────────────────────── locals ──────────────────────────╮                                                       │
│ │ embedding_model = 'sentence-transformers/all-MiniLM-L6-v2' │                                                       │
│ │    update_model = False                                    │                                                       │
│ ╰────────────────────────────────────────────────────────────╯                                                       │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\__init__.py:7 in <module>                           │
│                                                                                                                      │
│    4 import importlib                                                                                                │
│    5 import os                                                                                                       │
│    6                                                                                                                 │
│ ❱  7 from sentence_transformers.cross_encoder.CrossEncoder import CrossEncoder                                       │
│    8 from sentence_transformers.datasets import ParallelSentencesDataset, SentencesDataset                           │
│    9 from sentence_transformers.LoggingHandler import LoggingHandler                                                 │
│   10 from sentence_transformers.model_card import SentenceTransformerModelCardData                                   │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │           evaluation = <module 'sentence_transformers.evaluation' from                                           │ │
│ │                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\evaluation\\__i… │ │
│ │            importlib = <module 'importlib' from 'D:\\anaconda3\\envs\\ollama\\Lib\\importlib\\__init__.py'>      │ │
│ │               models = <module 'sentence_transformers.models' from                                               │ │
│ │                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\models\\__init_… │ │
│ │                   os = <module 'os' (frozen)>                                                                    │ │
│ │              readers = <module 'sentence_transformers.readers' from                                              │ │
│ │                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\readers\\__init… │ │
│ │ similarity_functions = <module 'sentence_transformers.similarity_functions' from                                 │ │
│ │                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\similarity_func… │ │
│ │        training_args = <module 'sentence_transformers.training_args' from                                        │ │
│ │                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\training_args.p… │ │
│ │                 util = <module 'sentence_transformers.util' from                                                 │ │
│ │                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\util.py'>        │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\cross_encoder\__init__.py:1 in <module>             │
│                                                                                                                      │
│ ❱ 1 from .CrossEncoder import CrossEncoder                                                                           │
│   2                                                                                                                  │
│   3 __all__ = ["CrossEncoder"]                                                                                       │
│   4                                                                                                                  │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\cross_encoder\CrossEncoder.py:18 in <module>        │
│                                                                                                                      │
│    15                                                                                                                │
│    16 from sentence_transformers.evaluation.SentenceEvaluator import SentenceEvaluator                               │
│    17 from sentence_transformers.readers import InputExample                                                         │
│ ❱  18 from sentence_transformers.SentenceTransformer import SentenceTransformer                                      │
│    19 from sentence_transformers.util import fullname, get_device_name, import_from_string                           │
│    20                                                                                                                │
│    21 logger = logging.getLogger(__name__)                                                                           │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │                         AutoConfig = <class 'transformers.models.auto.configuration_auto.AutoConfig'>            │ │
│ │ AutoModelForSequenceClassification = <class                                                                      │ │
│ │                                      'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification… │ │
│ │                      AutoTokenizer = <class 'transformers.models.auto.tokenization_auto.AutoTokenizer'>          │ │
│ │                      BatchEncoding = <class 'transformers.tokenization_utils_base.BatchEncoding'>                │ │
│ │                           Callable = typing.Callable                                                             │ │
│ │                         DataLoader = <class 'torch.utils.data.dataloader.DataLoader'>                            │ │
│ │                               Dict = typing.Dict                                                                 │ │
│ │                       InputExample = <class 'sentence_transformers.readers.InputExample.InputExample'>           │ │
│ │             is_torch_npu_available = <functools._lru_cache_wrapper object at 0x000002C0D7D3EB90>                 │ │
│ │                               List = typing.List                                                                 │ │
│ │                            Literal = typing.Literal                                                              │ │
│ │                            logging = <module 'logging' from                                                      │ │
│ │                                      'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'>                   │ │
│ │                                 nn = <module 'torch.nn' from                                                     │ │
│ │                                      'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\nn\\__init__.py'>  │ │
│ │                                 np = <module 'numpy' from                                                        │ │
│ │                                      'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\numpy\\__init__.py'>      │ │
│ │                          Optimizer = <class 'torch.optim.optimizer.Optimizer'>                                   │ │
│ │                           Optional = typing.Optional                                                             │ │
│ │                                 os = <module 'os' (frozen)>                                                      │ │
│ │                     PushToHubMixin = <class 'transformers.utils.hub.PushToHubMixin'>                             │ │
│ │                  SentenceEvaluator = <class                                                                      │ │
│ │                                      'sentence_transformers.evaluation.SentenceEvaluator.SentenceEvaluator'>     │ │
│ │                             Tensor = <class 'torch.Tensor'>                                                      │ │
│ │                              torch = <module 'torch' from                                                        │ │
│ │                                      'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\__init__.py'>      │ │
│ │                               tqdm = <class 'tqdm.std.tqdm'>                                                     │ │
│ │                             trange = <function trange at 0x000002C0CAD3DDA0>                                     │ │
│ │                              Tuple = typing.Tuple                                                                │ │
│ │                               Type = typing.Type                                                                 │ │
│ │                              Union = typing.Union                                                                │ │
│ │                              wraps = <function wraps at 0x000002C09F6B1A80>                                      │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\SentenceTransformer.py:27 in <module>               │
│                                                                                                                      │
│     24 from tqdm.autonotebook import trange                                                                          │
│     25 from transformers import is_torch_npu_available                                                               │
│     26                                                                                                               │
│ ❱   27 from sentence_transformers.model_card import SentenceTransformerModelCardData, generate_                      │
│     28 from sentence_transformers.similarity_functions import SimilarityFunction                                     │
│     29                                                                                                               │
│     30 from . import __MODEL_HUB_ORGANIZATION__, __version__                                                         │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │                    Any = typing.Any                                                                              │ │
│ │               Callable = typing.Callable                                                                         │ │
│ │         contextmanager = <function contextmanager at 0x000002C09F62DD00>                                         │ │
│ │                   copy = <module 'copy' from 'D:\\anaconda3\\envs\\ollama\\Lib\\copy.py'>                        │ │
│ │                 device = <class 'torch.device'>                                                                  │ │
│ │                   Dict = typing.Dict                                                                             │ │
│ │                  HfApi = <class 'huggingface_hub.hf_api.HfApi'>                                                  │ │
│ │              importlib = <module 'importlib' from 'D:\\anaconda3\\envs\\ollama\\Lib\\importlib\\__init__.py'>    │ │
│ │ is_torch_npu_available = <functools._lru_cache_wrapper object at 0x000002C0D7D3EB90>                             │ │
│ │               Iterable = typing.Iterable                                                                         │ │
│ │               Iterator = typing.Iterator                                                                         │ │
│ │                   json = <module 'json' from 'D:\\anaconda3\\envs\\ollama\\Lib\\json\\__init__.py'>              │ │
│ │                   List = typing.List                                                                             │ │
│ │                Literal = typing.Literal                                                                          │ │
│ │                logging = <module 'logging' from 'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'>        │ │
│ │                   math = <module 'math' (built-in)>                                                              │ │
│ │                     mp = <module 'torch.multiprocessing' from                                                    │ │
│ │                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\multiprocessing\\__init__.py'> │ │
│ │                ndarray = <class 'numpy.ndarray'>                                                                 │ │
│ │                     nn = <module 'torch.nn' from                                                                 │ │
│ │                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\nn\\__init__.py'>              │ │
│ │                     np = <module 'numpy' from                                                                    │ │
│ │                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\numpy\\__init__.py'>                  │ │
│ │               Optional = typing.Optional                                                                         │ │
│ │            OrderedDict = <class 'collections.OrderedDict'>                                                       │ │
│ │                     os = <module 'os' (frozen)>                                                                  │ │
│ │               overload = <function overload at 0x000002C09FC707C0>                                               │ │
│ │                   Path = <class 'pathlib.Path'>                                                                  │ │
│ │                  queue = <module 'queue' from 'D:\\anaconda3\\envs\\ollama\\Lib\\queue.py'>                      │ │
│ │                  Queue = <bound method BaseContext.Queue of <multiprocessing.context.DefaultContext object at    │ │
│ │                          0x000002C0A2AF69D0>>                                                                    │ │
│ │               tempfile = <module 'tempfile' from 'D:\\anaconda3\\envs\\ollama\\Lib\\tempfile.py'>                │ │
│ │                 Tensor = <class 'torch.Tensor'>                                                                  │ │
│ │                  torch = <module 'torch' from                                                                    │ │
│ │                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\__init__.py'>                  │ │
│ │              traceback = <module 'traceback' from 'D:\\anaconda3\\envs\\ollama\\Lib\\traceback.py'>              │ │
│ │                 trange = <function trange at 0x000002C0CAD3DDA0>                                                 │ │
│ │           transformers = <module 'transformers' from                                                             │ │
│ │                          'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\transformers\\__init__.py'>           │ │
│ │                  Tuple = typing.Tuple                                                                            │ │
│ │                  Union = typing.Union                                                                            │ │
│ │               warnings = <module 'warnings' from 'D:\\anaconda3\\envs\\ollama\\Lib\\warnings.py'>                │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                                      │
│ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\model_card.py:33 in <module>                        │
│                                                                                                                      │
│    30 from sentence_transformers.util import fullname, is_accelerate_available, is_datasets_av                       │
│    31                                                                                                                │
│    32 if is_datasets_available():                                                                                    │
│ ❱  33 │   from datasets import Dataset, DatasetDict, Value                                                           │
│    34                                                                                                                │
│    35 logger = logging.getLogger(__name__)                                                                           │
│    36                                                                                                                │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │                                  Any = typing.Any                                                                │ │
│ │                             CardData = <class 'huggingface_hub.repocard_data.CardData'>                          │ │
│ │                   CodeCarbonCallback = <class 'transformers.integrations.integration_utils.CodeCarbonCallback'>  │ │
│ │                                 copy = <function copy at 0x000002C0A1572480>                                     │ │
│ │                              Counter = <class 'collections.Counter'>                                             │ │
│ │                            dataclass = <function dataclass at 0x000002C0A1ACB420>                                │ │
│ │                          defaultdict = <class 'collections.defaultdict'>                                         │ │
│ │                                 Dict = typing.Dict                                                               │ │
│ │          eval_results_to_model_index = <function eval_results_to_model_index at 0x000002C0CFC02C00>              │ │
│ │                           EvalResult = <class 'huggingface_hub.repocard_data.EvalResult'>                        │ │
│ │                                field = <function field at 0x000002C0A1AC9D00>                                    │ │
│ │                               fields = <function fields at 0x000002C0A1ACB4C0>                                   │ │
│ │                             fullname = <function fullname at 0x000002C083D528E0>                                 │ │
│ │                     get_dataset_info = <bound method HfApi.dataset_info of <huggingface_hub.hf_api.HfApi object  │ │
│ │                                        at 0x000002C0CFC7B950>>                                                   │ │
│ │                       get_model_info = <bound method HfApi.model_info of <huggingface_hub.hf_api.HfApi object at │ │
│ │                                        0x000002C0CFC7B950>>                                                      │ │
│ │                               indent = <function indent at 0x000002C0A155C400>                                   │ │
│ │              is_accelerate_available = <function is_accelerate_available at 0x000002C083D531A0>                  │ │
│ │                is_datasets_available = <function is_datasets_available at 0x000002C083D53240>                    │ │
│ │                                 json = <module 'json' from                                                       │ │
│ │                                        'D:\\anaconda3\\envs\\ollama\\Lib\\json\\__init__.py'>                    │ │
│ │                                 List = typing.List                                                               │ │
│ │                              Literal = typing.Literal                                                            │ │
│ │                              logging = <module 'logging' from                                                    │ │
│ │                                        'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'>                 │ │
│ │                  make_markdown_table = <function make_markdown_table at 0x000002C0864740E0>                      │ │
│ │                            ModelCard = <class 'huggingface_hub.repocard.ModelCard'>                              │ │
│ │                                   nn = <module 'torch.nn' from                                                   │ │
│ │                                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\nn\\__init__.py… │ │
│ │                             Optional = typing.Optional                                                           │ │
│ │                                 Path = <class 'pathlib.Path'>                                                    │ │
│ │                       python_version = <function python_version at 0x000002C0A1C66520>                           │ │
│ │                               random = <module 'random' from 'D:\\anaconda3\\envs\\ollama\\Lib\\random.py'>      │ │
│ │                                   re = <module 're' from 'D:\\anaconda3\\envs\\ollama\\Lib\\re\\__init__.py'>    │ │
│ │        sentence_transformers_version = '3.0.1'                                                                   │ │
│ │ SentenceTransformerTrainingArguments = <class                                                                    │ │
│ │                                        'sentence_transformers.training_args.SentenceTransformerTrainingArgument… │ │
│ │                                torch = <module 'torch' from                                                      │ │
│ │                                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\__init__.py'>    │ │
│ │                                 tqdm = <class 'tqdm.std.tqdm'>                                                   │ │
│ │                      TrainerCallback = <class 'transformers.trainer_callback.TrainerCallback'>                   │ │
│ │                       TrainerControl = <class 'transformers.trainer_callback.TrainerControl'>                    │ │
│ │                         TrainerState = <class 'transformers.trainer_callback.TrainerState'>                      │ │
│ │                          Transformer = <class 'sentence_transformers.models.Transformer.Transformer'>            │ │
│ │                         transformers = <module 'transformers' from                                               │ │
│ │                                        'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\transformers\\__init__… │ │
│ │                                Tuple = typing.Tuple                                                              │ │
│ │                        TYPE_CHECKING = False                                                                     │ │
│ │                                Union = typing.Union                                                              │ │
│ │                            yaml_dump = functools.partial(<function dump at 0x000002C0CAE56E80>, stream=None,     │ │
│ │                                        allow_unicode=True)                                                       │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'Dataset' from 'datasets' (unknown location)
YoutubeLoader = <class 'langchain_community.document_loaders.youtube.YoutubeLoader'> │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ D:\anaconda3\envs\ollama\Lib\site-packages\open_webui\apps\rag\main.py:181 in update_embedding_model │ │ │ │ 178 │ update_model: bool = False, │ │ 179 ): │ │ 180 │ if embedding_model and app.state.config.RAG_EMBEDDING_ENGINE == "": │ │ ❱ 181 │ │ import sentence_transformers │ │ 182 │ │ │ │ 183 │ │ app.state.sentence_transformer_ef = sentence_transformers.SentenceTransformer( │ │ 184 │ │ │ get_model_path(embedding_model, update_model), │ │ │ │ ╭────────────────────────── locals ──────────────────────────╮ │ │ │ embedding_model = 'sentence-transformers/all-MiniLM-L6-v2' │ │ │ │ update_model = False │ │ │ ╰────────────────────────────────────────────────────────────╯ │ │ │ │ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\__init__.py:7 in <module> │ │ │ │ 4 import importlib │ │ 5 import os │ │ 6 │ │ ❱ 7 from sentence_transformers.cross_encoder.CrossEncoder import CrossEncoder │ │ 8 from sentence_transformers.datasets import ParallelSentencesDataset, SentencesDataset │ │ 9 from sentence_transformers.LoggingHandler import LoggingHandler │ │ 10 from sentence_transformers.model_card import SentenceTransformerModelCardData │ │ │ │ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │ │ │ evaluation = <module 'sentence_transformers.evaluation' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\evaluation\\__i… │ │ │ │ importlib = <module 'importlib' from 'D:\\anaconda3\\envs\\ollama\\Lib\\importlib\\__init__.py'> │ │ │ │ models = <module 'sentence_transformers.models' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\models\\__init_… │ │ │ │ os = <module 'os' (frozen)> │ │ │ │ readers = <module 
'sentence_transformers.readers' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\readers\\__init… │ │ │ │ similarity_functions = <module 'sentence_transformers.similarity_functions' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\similarity_func… │ │ │ │ training_args = <module 'sentence_transformers.training_args' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\training_args.p… │ │ │ │ util = <module 'sentence_transformers.util' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\sentence_transformers\\util.py'> │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\cross_encoder\__init__.py:1 in <module> │ │ │ │ ❱ 1 from .CrossEncoder import CrossEncoder │ │ 2 │ │ 3 __all__ = ["CrossEncoder"] │ │ 4 │ │ │ │ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\cross_encoder\CrossEncoder.py:18 in <module> │ │ │ │ 15 │ │ 16 from sentence_transformers.evaluation.SentenceEvaluator import SentenceEvaluator │ │ 17 from sentence_transformers.readers import InputExample │ │ ❱ 18 from sentence_transformers.SentenceTransformer import SentenceTransformer │ │ 19 from sentence_transformers.util import fullname, get_device_name, import_from_string │ │ 20 │ │ 21 logger = logging.getLogger(__name__) │ │ │ │ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │ │ │ AutoConfig = <class 'transformers.models.auto.configuration_auto.AutoConfig'> │ │ │ │ AutoModelForSequenceClassification = <class │ │ │ │ 'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification… │ │ │ │ AutoTokenizer = <class 'transformers.models.auto.tokenization_auto.AutoTokenizer'> │ │ │ │ BatchEncoding = <class 
'transformers.tokenization_utils_base.BatchEncoding'> │ │ │ │ Callable = typing.Callable │ │ │ │ DataLoader = <class 'torch.utils.data.dataloader.DataLoader'> │ │ │ │ Dict = typing.Dict │ │ │ │ InputExample = <class 'sentence_transformers.readers.InputExample.InputExample'> │ │ │ │ is_torch_npu_available = <functools._lru_cache_wrapper object at 0x000002C0D7D3EB90> │ │ │ │ List = typing.List │ │ │ │ Literal = typing.Literal │ │ │ │ logging = <module 'logging' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'> │ │ │ │ nn = <module 'torch.nn' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\nn\\__init__.py'> │ │ │ │ np = <module 'numpy' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\numpy\\__init__.py'> │ │ │ │ Optimizer = <class 'torch.optim.optimizer.Optimizer'> │ │ │ │ Optional = typing.Optional │ │ │ │ os = <module 'os' (frozen)> │ │ │ │ PushToHubMixin = <class 'transformers.utils.hub.PushToHubMixin'> │ │ │ │ SentenceEvaluator = <class │ │ │ │ 'sentence_transformers.evaluation.SentenceEvaluator.SentenceEvaluator'> │ │ │ │ Tensor = <class 'torch.Tensor'> │ │ │ │ torch = <module 'torch' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\__init__.py'> │ │ │ │ tqdm = <class 'tqdm.std.tqdm'> │ │ │ │ trange = <function trange at 0x000002C0CAD3DDA0> │ │ │ │ Tuple = typing.Tuple │ │ │ │ Type = typing.Type │ │ │ │ Union = typing.Union │ │ │ │ wraps = <function wraps at 0x000002C09F6B1A80> │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\SentenceTransformer.py:27 in <module> │ │ │ │ 24 from tqdm.autonotebook import trange │ │ 25 from transformers import is_torch_npu_available │ │ 26 │ │ ❱ 27 from sentence_transformers.model_card import SentenceTransformerModelCardData, generate_ │ │ 28 from sentence_transformers.similarity_functions import 
SimilarityFunction │ │ 29 │ │ 30 from . import __MODEL_HUB_ORGANIZATION__, __version__ │ │ │ │ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │ │ │ Any = typing.Any │ │ │ │ Callable = typing.Callable │ │ │ │ contextmanager = <function contextmanager at 0x000002C09F62DD00> │ │ │ │ copy = <module 'copy' from 'D:\\anaconda3\\envs\\ollama\\Lib\\copy.py'> │ │ │ │ device = <class 'torch.device'> │ │ │ │ Dict = typing.Dict │ │ │ │ HfApi = <class 'huggingface_hub.hf_api.HfApi'> │ │ │ │ importlib = <module 'importlib' from 'D:\\anaconda3\\envs\\ollama\\Lib\\importlib\\__init__.py'> │ │ │ │ is_torch_npu_available = <functools._lru_cache_wrapper object at 0x000002C0D7D3EB90> │ │ │ │ Iterable = typing.Iterable │ │ │ │ Iterator = typing.Iterator │ │ │ │ json = <module 'json' from 'D:\\anaconda3\\envs\\ollama\\Lib\\json\\__init__.py'> │ │ │ │ List = typing.List │ │ │ │ Literal = typing.Literal │ │ │ │ logging = <module 'logging' from 'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'> │ │ │ │ math = <module 'math' (built-in)> │ │ │ │ mp = <module 'torch.multiprocessing' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\multiprocessing\\__init__.py'> │ │ │ │ ndarray = <class 'numpy.ndarray'> │ │ │ │ nn = <module 'torch.nn' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\nn\\__init__.py'> │ │ │ │ np = <module 'numpy' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\numpy\\__init__.py'> │ │ │ │ Optional = typing.Optional │ │ │ │ OrderedDict = <class 'collections.OrderedDict'> │ │ │ │ os = <module 'os' (frozen)> │ │ │ │ overload = <function overload at 0x000002C09FC707C0> │ │ │ │ Path = <class 'pathlib.Path'> │ │ │ │ queue = <module 'queue' from 'D:\\anaconda3\\envs\\ollama\\Lib\\queue.py'> │ │ │ │ Queue = <bound method BaseContext.Queue of <multiprocessing.context.DefaultContext object at │ │ │ │ 0x000002C0A2AF69D0>> │ │ │ │ tempfile = <module 
'tempfile' from 'D:\\anaconda3\\envs\\ollama\\Lib\\tempfile.py'> │ │ │ │ Tensor = <class 'torch.Tensor'> │ │ │ │ torch = <module 'torch' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\__init__.py'> │ │ │ │ traceback = <module 'traceback' from 'D:\\anaconda3\\envs\\ollama\\Lib\\traceback.py'> │ │ │ │ trange = <function trange at 0x000002C0CAD3DDA0> │ │ │ │ transformers = <module 'transformers' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\transformers\\__init__.py'> │ │ │ │ Tuple = typing.Tuple │ │ │ │ Union = typing.Union │ │ │ │ warnings = <module 'warnings' from 'D:\\anaconda3\\envs\\ollama\\Lib\\warnings.py'> │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ D:\anaconda3\envs\ollama\Lib\site-packages\sentence_transformers\model_card.py:33 in <module> │ │ │ │ 30 from sentence_transformers.util import fullname, is_accelerate_available, is_datasets_av │ │ 31 │ │ 32 if is_datasets_available(): │ │ ❱ 33 │ from datasets import Dataset, DatasetDict, Value │ │ 34 │ │ 35 logger = logging.getLogger(__name__) │ │ 36 │ │ │ │ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │ │ │ Any = typing.Any │ │ │ │ CardData = <class 'huggingface_hub.repocard_data.CardData'> │ │ │ │ CodeCarbonCallback = <class 'transformers.integrations.integration_utils.CodeCarbonCallback'> │ │ │ │ copy = <function copy at 0x000002C0A1572480> │ │ │ │ Counter = <class 'collections.Counter'> │ │ │ │ dataclass = <function dataclass at 0x000002C0A1ACB420> │ │ │ │ defaultdict = <class 'collections.defaultdict'> │ │ │ │ Dict = typing.Dict │ │ │ │ eval_results_to_model_index = <function eval_results_to_model_index at 0x000002C0CFC02C00> │ │ │ │ EvalResult = <class 'huggingface_hub.repocard_data.EvalResult'> │ │ │ │ field = <function field at 0x000002C0A1AC9D00> │ │ │ │ fields = <function fields at 0x000002C0A1ACB4C0> │ │ │ 
│ fullname = <function fullname at 0x000002C083D528E0> │ │ │ │ get_dataset_info = <bound method HfApi.dataset_info of <huggingface_hub.hf_api.HfApi object │ │ │ │ at 0x000002C0CFC7B950>> │ │ │ │ get_model_info = <bound method HfApi.model_info of <huggingface_hub.hf_api.HfApi object at │ │ │ │ 0x000002C0CFC7B950>> │ │ │ │ indent = <function indent at 0x000002C0A155C400> │ │ │ │ is_accelerate_available = <function is_accelerate_available at 0x000002C083D531A0> │ │ │ │ is_datasets_available = <function is_datasets_available at 0x000002C083D53240> │ │ │ │ json = <module 'json' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\json\\__init__.py'> │ │ │ │ List = typing.List │ │ │ │ Literal = typing.Literal │ │ │ │ logging = <module 'logging' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\logging\\__init__.py'> │ │ │ │ make_markdown_table = <function make_markdown_table at 0x000002C0864740E0> │ │ │ │ ModelCard = <class 'huggingface_hub.repocard.ModelCard'> │ │ │ │ nn = <module 'torch.nn' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\nn\\__init__.py… │ │ │ │ Optional = typing.Optional │ │ │ │ Path = <class 'pathlib.Path'> │ │ │ │ python_version = <function python_version at 0x000002C0A1C66520> │ │ │ │ random = <module 'random' from 'D:\\anaconda3\\envs\\ollama\\Lib\\random.py'> │ │ │ │ re = <module 're' from 'D:\\anaconda3\\envs\\ollama\\Lib\\re\\__init__.py'> │ │ │ │ sentence_transformers_version = '3.0.1' │ │ │ │ SentenceTransformerTrainingArguments = <class │ │ │ │ 'sentence_transformers.training_args.SentenceTransformerTrainingArgument… │ │ │ │ torch = <module 'torch' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\torch\\__init__.py'> │ │ │ │ tqdm = <class 'tqdm.std.tqdm'> │ │ │ │ TrainerCallback = <class 'transformers.trainer_callback.TrainerCallback'> │ │ │ │ TrainerControl = <class 'transformers.trainer_callback.TrainerControl'> │ │ │ │ TrainerState = <class 'transformers.trainer_callback.TrainerState'> │ │ │ │ Transformer = 
<class 'sentence_transformers.models.Transformer.Transformer'> │ │ │ │ transformers = <module 'transformers' from │ │ │ │ 'D:\\anaconda3\\envs\\ollama\\Lib\\site-packages\\transformers\\__init__… │ │ │ │ Tuple = typing.Tuple │ │ │ │ TYPE_CHECKING = False │ │ │ │ Union = typing.Union │ │ │ │ yaml_dump = functools.partial(<function dump at 0x000002C0CAE56E80>, stream=None, │ │ │ │ allow_unicode=True) │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ImportError: cannot import name 'Dataset' from 'datasets' (unknown location) ```
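For anyone who lands on this traceback: the "(unknown location)" suffix usually means Python resolved `datasets` to something that is not a real installed package — most often a broken/partial install, or a stray `datasets` directory on `sys.path` shadowing the library. Reinstalling typically fixes it (`pip install --force-reinstall datasets`). A minimal sketch of the shadowing failure mode, using a made-up package name (`shadowpkg` is hypothetical, purely for illustration):

```python
import importlib.util
import os
import sys
import tempfile

# An empty directory on sys.path with no __init__.py is treated as a
# namespace package. Its spec has origin=None, which Python reports as
# "(unknown location)" when an import from it fails -- the same symptom
# as the 'datasets' error above.
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, "shadowpkg"))
sys.path.insert(0, tmp)
importlib.invalidate_caches()

spec = importlib.util.find_spec("shadowpkg")
print(spec.origin)                      # None -> "(unknown location)"
print(spec.submodule_search_locations)  # points at the shadowing directory
```

Checking `importlib.util.find_spec("datasets").origin` in the affected environment should show whether the package resolves to a real `__init__.py` or to a shadowing directory.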
Reference: github-starred/open-webui#2124