Mirror of https://github.com/open-webui/open-webui.git (synced 2026-05-06 19:08:59 -05:00)
Issue #5729: The search tool encountered an error: Torch not compiled with CUDA enabled
Originally created by @awesomez on GitHub (Jul 9, 2025).
Check Existing Issues
Installation Method
Other
Open WebUI Version
v0.6.15
Ollama Version (if applicable)
No response
Operating System
Windows 10
Browser (if applicable)
Firefox
Confirmation
Expected Behavior
Using the LLM Web Search tool (from the OWUI repository) with the "jan-nano-128k" model (with full context) via LM Studio (as the v1 model provider) and SearXNG (via the official Docker image), performing a web search for a topic should run the search and then analyze the retrieved web content, as per the script/tool execution.
Actual Behavior
"The search tool encountered an error: Torch not compiled with CUDA enabled"
Then the model proceeded to think, outputting its response, but no web search had been performed.
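For context, this error message comes from PyTorch itself: a CPU-only torch wheel raises it whenever code tries to move a model or tensor to CUDA. A minimal sketch of a safe device check (not the tool's actual code; it degrades gracefully if torch is absent) looks like:

```python
# Hedged sketch: "Torch not compiled with CUDA enabled" means the installed
# torch wheel was built without CUDA, so any .to("cuda") / .cuda() call
# raises AssertionError. Checking availability first avoids the crash.
def resolve_device() -> str:
    """Return 'cuda' only when a CUDA-enabled torch build can actually use it."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        # torch not installed at all; fall through to CPU.
        pass
    return "cpu"

print(resolve_device())
```

A tool that loads embedding models could pass the result as the device argument instead of assuming "cuda", which appears to be what fails here.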
Steps to Reproduce
SearXNG works fine with the standard Web Search tool, and it also worked the FIRST time I ran a search with the LLM Web Search tool. (If it helps, the Qwen3 embeddings 0.6 model was already in the embeddings folder that the LLM Web Search tool requested in its Valves, as I did not know that the LLM Web Search tool downloaded its own embeddings.)
Every subsequent search (including after closing/restarting Open WebUI, adding or deleting tools, etc.) produced the error above.
Logs & Screenshots
See this post:
https://github.com/open-webui/open-webui/discussions/8170
This describes the exact scenario I was experiencing.
Additional Information
https://github.com/open-webui/open-webui/discussions/8170
I've provided further details about my setup in that post, but everything is bog-standard; the only difference was that, having used SearXNG successfully with the standard Web Search tool in OWUI, I decided to try a faster tool that also uses SearXNG (LLM Web Search). Both used SearXNG via the standard Docker image running in Docker Desktop on Windows 10.
Also, even the standard Web Search tool sometimes hangs for a while. Lastly, I also see lots of uvicorn 303 responses in the terminal (assuming that is not a red herring); these seem to be related to tools.
@awesomez commented on GitHub (Jul 9, 2025):
UPDATE:
It appears that when I turn on "Temporary Chat" and explicitly enable CPU-only mode, the embedding models load and the web search runs via the "LLM Web Search" tool. See below:
This is just very slow - I'd prefer NOT to use the CPU-only mode. :(
Otherwise, I get this:
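The CPU-only workaround described above can be sketched as a simple override: the CPU-only setting (a hypothetical `force_cpu` valve here; the tool's real internals may differ) takes precedence over CUDA detection, so embeddings always load on CPU and the AssertionError never fires.

```python
# Minimal sketch of the CPU-only behavior, under assumed names: `force_cpu`
# stands in for the tool's CPU-only valve, `cuda_available` for the result
# of torch.cuda.is_available() on the running build.
def pick_device(cuda_available: bool, force_cpu: bool) -> str:
    """Return the device an embedding model should load on."""
    if force_cpu or not cuda_available:
        return "cpu"
    return "cuda"

print(pick_device(cuda_available=False, force_cpu=True))   # → cpu
print(pick_device(cuda_available=True, force_cpu=False))   # → cuda
```

The trade-off matches what is reported here: with a CPU-only torch wheel, `cuda_available` is always False, so forcing CPU is the only path that works until a CUDA-enabled build is installed.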
@tjbck commented on GitHub (Jul 9, 2025):
Custom tools are outside of the scope.