[GH-ISSUE #19512] issue: Regression: 'NoneType' object has no attribute 'encode' with SentenceTransformers embedding Qwen/Qwen3-Embedding-0.6B in v0.6.40 (works in v0.6.38) #34436

Closed
opened 2026-04-25 08:25:56 -05:00 by GiteaMirror · 14 comments
Owner

Originally created by @borntoknow on GitHub (Nov 26, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/19512

Title: Regression: 'NoneType' object has no attribute 'encode' with SentenceTransformers embedding Qwen/Qwen3-Embedding-0.6B in v0.6.40 (works in v0.6.38)

Summary

After upgrading Open WebUI from v0.6.38 to v0.6.40, Knowledge Base ingestion / RAG embedding fails with:

AttributeError: 'NoneType' object has no attribute 'encode'

Same environment and configuration works on v0.6.38.

Environment

  • Host OS: Windows (Docker Desktop)
  • Open WebUI: v0.6.40 (broken), v0.6.38 (works)
  • Deployment: docker-compose
  • Vector DB: pgvector (pgvector/pgvector:pg17)
  • Document extraction: Docling (quay.io/docling-project/docling-serve:latest)
  • Embedding model: Qwen/Qwen3-Embedding-0.6B (SentenceTransformers / local)
  • Data volume (persistent): D:\Dev_Tools\OpenWebUI -> /app/backend/data

Steps to Reproduce

  1. Run stack with persistent volume for /app/backend/data.
  2. Configure RAG to use SentenceTransformers embedding model:
    • Embedding model: Qwen/Qwen3-Embedding-0.6B
    • Embedding engine: Default (SentenceTransformers)
  3. Upload a document to Knowledge / Knowledge Base and trigger embedding.
  4. Observe Open WebUI backend logs.

Expected Behavior

Embeddings are generated and stored in pgvector, document ingestion completes (as in v0.6.38).

Actual Behavior (v0.6.40)

Document embedding fails with:
AttributeError: 'NoneType' object has no attribute 'encode'

Traceback points to retrieval embedding path calling embedding_function.encode(...) where embedding_function is None.

Evidence: model works inside the container

Inside the running openwebui container:

  • Python 3.11
  • SentenceTransformers successfully loads the model and encode() works:

from sentence_transformers import SentenceTransformer
m = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
print("dim:", m.get_sentence_embedding_dimension()) # dim: 1024
print(m.encode(["ping"]).shape) # (1, 1024)

So the model + cache are valid; the failure is likely Open WebUI embedding initialization / config handling.

Notes / Observations

  • v0.6.38: works with the same setup
  • v0.6.40: fails consistently
  • OpenWebUI PersistentConfig is stored in SQLite (webui.db) and includes JSON under config.data.rag.
  • In my webui.db, rag.embedding_model is set to "Qwen/Qwen3-Embedding-0.6B".
  • Re-saving embedding settings in UI + reindex did not resolve the issue in v0.6.40.

Stack trace (excerpt)

File "/app/backend/open_webui/retrieval/utils.py", line 792, in <lambda>
lambda query, prefix=None: embedding_function.encode(
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'encode'

Regression

  • v0.6.38: OK
  • v0.6.40: broken

Suspected Cause

Regression in get_embedding_function() / RAG config loading: embedding_function becomes None even though the SentenceTransformer model is loadable. Possibly incorrect handling of Default/SentenceTransformers embedding engine when loaded from persisted config.
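The suspected failure mode can be sketched as follows. All names below are invented for illustration; this is not Open WebUI's actual code, only a minimal model of how a swallowed load error could resurface later as the reported AttributeError.

```python
# Hypothetical sketch (invented names, not Open WebUI's real code) of how
# a swallowed model-load error can resurface at query time as
# "'NoneType' object has no attribute 'encode'".

class StubModel:
    """Stands in for a SentenceTransformer; returns fixed vectors."""
    def encode(self, texts):
        return [[0.0, 0.0, 0.0] for _ in texts]

def get_embedding_function(loader):
    try:
        model = loader()
    except Exception:
        model = None  # error swallowed here; None is captured below
    # No guard: the lambda is built even when loading failed, so the
    # failure only surfaces when the first embedding is requested.
    return lambda query, prefix=None: model.encode(query)

def failing_loader():
    # mimics the load error seen in the DEBUG logs
    raise AttributeError("'dict' object has no attribute 'model_type'")

ok = get_embedding_function(StubModel)
broken = get_embedding_function(failing_loader)
print(ok(["ping"]))  # [[0.0, 0.0, 0.0]]
try:
    broken(["ping"])
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'encode'
```

The point of the sketch: the root cause happens at load time, but the symptom appears far away, at the first embedding call during ingestion.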

Request

Please advise what changed between v0.6.38 and v0.6.40 in embedding initialization, and/or provide a fix/workaround for SentenceTransformers embedding models (specifically Qwen/Qwen3-Embedding-0.6B) so embedding_function is always initialized (not None).

GiteaMirror added the bug label 2026-04-25 08:25:56 -05:00

@pap-prm commented on GitHub (Nov 26, 2025):

+

@mahenning commented on GitHub (Nov 26, 2025):

I have the same problem, and when changing/saving my embedding model, I get in my docker logs:

open-webui | 2025-11-26 11:56:17.857 | DEBUG | open_webui.routers.retrieval:get_ef:141 - Error loading SentenceTransformer: 'dict' object has no attribute 'model_type'

This message is only displayed when the log level is debug, so it's hidden normally. But the behavior afterwards is the same, I can't process documents anymore and get the AttributeError: 'NoneType' object has no attribute 'encode' message.

Fix:
https://github.com/huggingface/transformers/issues/42374
The transformers version 4.57.2 used in open-webui 0.6.40 has a bug that is already fixed in transformers 4.57.3. open-webui should update its transformers dependency to fix this.

I also built 0.6.40 from a cloned git project, where the dependency resolves to transformers 4.57.0 (probably from an older build), and there I don't get the errors above.

Edit: I tested the cloned project with open-webui v0.6.40 and transformers versions 4.57.0, 4.57.2, and 4.57.3. With 4.57.2 I get the errors mentioned above; with the other two versions I get no errors and document uploading works again.
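To confirm which transformers release a given deployment actually ships, a quick check (assuming shell access to the container, e.g. via docker exec) is:

```python
# Hedged check: report the installed transformers version without importing
# the (heavy) package itself; run this inside the open-webui container.
from importlib.metadata import version, PackageNotFoundError

def pkg_version(name):
    """Return the installed version string, or None if not installed."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

print("transformers:", pkg_version("transformers"))
# 4.57.2 is the buggy release; 4.57.0 and 4.57.3 are reported to work.
```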


@Classic298 commented on GitHub (Nov 26, 2025):

Thanks, I will open a PR to update the dependency. @mahenning


@borntoknow commented on GitHub (Nov 26, 2025):

Thanks — confirmed.

I had the same regression after upgrading from 0.6.38 to 0.6.40:

  • In DEBUG logs I also see: Error loading SentenceTransformer: 'dict' object has no attribute 'model_type'
  • Then ingestion fails with: AttributeError: 'NoneType' object has no attribute 'encode'

Temporary workaround on my side:

  • Built a custom Open WebUI image and pinned transformers==4.57.3 (instead of 4.57.2)
  • After that, embeddings / document processing work again with Qwen/Qwen3-Embedding-0.6B.

So yes, updating Open WebUI deps to transformers 4.57.3 should fix it permanently. Thanks for the pointer and the upstream link.
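The pinning workaround described above can be sketched as a small overlay image. The base image tag below is an assumption; substitute whatever tag you actually deploy.

```dockerfile
# Hypothetical overlay image: keep the published 0.6.40 image but pin
# the fixed transformers release (base tag is an assumption).
FROM ghcr.io/open-webui/open-webui:v0.6.40
RUN pip install --no-cache-dir transformers==4.57.3
```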


@Classic298 commented on GitHub (Nov 26, 2025):

https://github.com/open-webui/open-webui/pull/19513


@Classic298 commented on GitHub (Nov 26, 2025):

Thanks for testing and confirming, guys.

This is how we can solve issues quickly.


@Classic298 commented on GitHub (Nov 26, 2025):

The version is pinned in dev.

Anyone who still has this issue should reinstall their version. As of .40 the version is still unpinned, so a fresh install fetches the latest transformers release, which is a working version.


@mahenning commented on GitHub (Nov 27, 2025):

Is it possible to yank the Docker image, or at least release 0.6.41 quickly with the fix? I still remember v0.6.33, where focused retrieval mode was broken: the fix landed a day later but took about a week to reach a release. I suppose most people use Docker, and changing the transformers version there is not trivial.
0.6.40 was itself a same-day hotfix, so a fast release is possible in your cycle.


@Classic298 commented on GitHub (Nov 27, 2025):

@mahenning the version itself works. If you install it now you will not run into issues. On .40 the version is still unpinned, meaning you would install the latest version of the transformers dependency, and the latest version works.


@mahenning commented on GitHub (Nov 27, 2025):

Your 0.6.40 Docker image ships with all its dependencies already installed, and one of them is transformers 4.57.2. That's the whole point of Docker images: everything is already packaged. So everyone who pulls your 0.6.40 image gets the buggy transformers version. Yes, you can build a new image on top of it that updates this dependency, but most people won't do that.
Your statement is only true for people who build open-webui locally.


@Classic298 commented on GitHub (Nov 27, 2025):

@mahenning yes, I meant pip or local git installations. For Docker: I asked Tim to perhaps rebuild the :main Docker image; then .41 would not be urgently needed.


@Classic298 commented on GitHub (Nov 27, 2025):

@mahenning
https://github.com/open-webui/open-webui/actions/runs/19667243130


@mahenning commented on GitHub (Nov 27, 2025):

I re-pulled the image, and it's still the same transformers version. If I read the build logs correctly, the build used a cached version of the layer that installs the pip/uv dependencies, which means it never pulls or installs the newest transformers version:

2025-11-27T09:03:04.2563636Z #27 [base 9/15] RUN pip3 install --no-cache-dir uv && if [ "false" = "true" ]; then pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128 --no-cache-dir && uv pip install --system -r requirements.txt --no-cache-dir && python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; python -c "import os; import tiktoken; tiktoken.get_encoding(os.environ['TIKTOKEN_ENCODING_NAME'])"; else pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --no-cache-dir && uv pip install --system -r requirements.txt --no-cache-dir && if [ "false" != "true" ]; then python -c "import os; from sentence_transformers import SentenceTransformer; SentenceTransformer(os.environ['RAG_EMBEDDING_MODEL'], device='cpu')" && python -c "import os; from faster_whisper import WhisperModel; WhisperModel(os.environ['WHISPER_MODEL'], device='cpu', compute_type='int8', download_root=os.environ['WHISPER_MODEL_DIR'])"; python -c "import os; import tiktoken; tiktoken.get_encoding(os.environ['TIKTOKEN_ENCODING_NAME'])"; fi; fi; mkdir -p /app/backend/data && chown -R 0:0 /app/backend/data/ && rm -rf /var/lib/apt/lists/*;
2025-11-27T09:03:04.2569977Z #27 CACHED
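A rebuild that bypasses the cached dependency layer would pick up the fixed release. As a hedged sketch (the exact invocation depends on the project's Dockerfile and CI setup):

```shell
# Force the dependency layer to be rebuilt instead of reused from cache,
# so the requirements install runs again and resolves the current
# transformers release (commands are illustrative, not the project's CI).
docker build --no-cache -t open-webui:local .
# or, in a compose-managed setup:
docker compose build --no-cache open-webui
```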


@Classic298 commented on GitHub (Nov 27, 2025):

Thanks, forwarded.


Reference: github-starred/open-webui#34436