Mirror of https://github.com/open-webui/open-webui.git, synced 2026-05-07 11:28:35 -05:00
[GH-ISSUE #23851] Tool server OAuth token forwarding works in chat but not in scheduled tasks #35619
Originally created by @R00T99 on GitHub (Apr 17, 2026).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/23851
Check Existing Issues
Installation Method
Docker
Open WebUI Version
v0.9.0
Ollama Version (if applicable)
No response
Operating System
Ubuntu 22.04
Browser (if applicable)
No response
Confirmation
Expected Behavior
When "Forwards system user OAuth access token to authenticate" is enabled on an OpenAPI tool server, the logged-in user's OAuth access token should be forwarded to the tool on every invocation, including when the tool is called from a scheduled/automated task. This way the tool can authenticate against downstream APIs (in my case Microsoft Graph) as the user who owns the task.
Actual Behavior
The token is forwarded correctly when the tool is invoked from a normal chat message.
My tool receives the Authorization header and Graph API calls succeed.
When the exact same tool is invoked from a scheduled task, no auth context is forwarded at all. No Authorization header, no user identity. The tool still runs but has no way to know who it's running for, so all downstream user-scoped API calls fail.
Unclear whether this is by design (tasks run as system / no user) or a bug. Either way there doesn't seem to be a supported way to have a task execute in a specific user's context.
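What I would expect a supported path to look like, conceptually: a scheduled task carries its owner's identity, and the tool-call builder resolves that owner's OAuth token before dispatching. This is a hypothetical sketch; `Task` and the token lookup are my assumptions for illustration, not existing Open WebUI APIs:

```python
# Hypothetical: run a scheduled task in its owner's auth context.
# None of these names exist in Open WebUI today; they only illustrate
# the behavior I expected.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    name: str
    owner_id: Optional[str]  # user who created/owns the scheduled task

def headers_for_tool_call(
    task: Task, token_lookup: Callable[[str], Optional[str]]
) -> dict:
    """Build tool-server headers, forwarding the owner's token when available."""
    headers = {"Content-Type": "application/json"}
    token = token_lookup(task.owner_id) if task.owner_id else None
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers
```

Today the task path effectively behaves like `owner_id=None`: the call goes out with no Authorization header at all.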
Steps to Reproduce
Expected at step 9: same Authorization header as step 6.
Actual at step 9: no auth forwarded.
Logs & Screenshots
Tool server log:
time=2026-04-17T19:47:27Z method=POST path=/openapi/tools/list_calendar_events remote=10.90.20.20:56392
Accept: */*
Accept-Encoding: gzip, deflate, br
Content-Length: 2
Content-Type: application/json
User-Agent: Python/3.11 aiohttp/3.13.5
Open WebUI logs:
2026-04-17 19:47:26.723 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.96.15.149:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-04-17 19:47:26.739 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.96.15.149:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200
2026-04-17 19:47:26.883 | INFO | uvicorn.protocols.http.httptools_impl:send:483 - 10.96.15.149:0 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200
2026-04-17 19:47:27.427 | ERROR | open_webui.utils.tools:execute_tool_server:1391 - API Request Error: HTTP error 401: {"error":"missing oauth token"}
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/lib/python3.11/site-packages/uvicorn/__main__.py", line 4, in <module>
uvicorn.main()
│ └
└ <module 'uvicorn' from '/usr/local/lib/python3.11/site-packages/uvicorn/__init__.py'>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1485, in __call__
return self.main(*args, **kwargs)
│ │ │ └ {}
│ │ └ ()
│ └ <function Command.main at 0x73ec79aa31a0>
└
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1406, in main
rv = self.invoke(ctx)
│ │ └ <click.core.Context object at 0x73ec7a912850>
│ └ <function Command.invoke at 0x73ec79aa2e80>
└
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1269, in invoke
return ctx.invoke(self.callback, **ctx.params)
│ │ │ │ │ └ {'host': '0.0.0.0', 'port': 8080, 'forwarded_allow_ips': '', 'workers': 1, 'app': 'open_webui.main:app', 'uds': None, 'fd': ...
│ │ │ │ └ <click.core.Context object at 0x73ec7a912850>
│ │ │ └ <function main at 0x73ec798a0220>
│ │ └
│ └ <function Context.invoke at 0x73ec79aa20c0>
└ <click.core.Context object at 0x73ec7a912850>
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 824, in invoke
return callback(*args, **kwargs)
│ │ └ {'host': '0.0.0.0', 'port': 8080, 'forwarded_allow_ips': '', 'workers': 1, 'app': 'open_webui.main:app', 'uds': None, 'fd': ...
│ └ ()
└ <function main at 0x73ec798a0220>
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 433, in main
run(
└ <function run at 0x73ec79955e40>
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 606, in run
server.run()
│ └ <function Server.run at 0x73ec799556c0>
└ <uvicorn.server.Server object at 0x73ec79acf910>
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 75, in run
return asyncio_run(self.serve(sockets=sockets), loop_factory=self.config.get_loop_factory())
│ │ │ │ │ │ └ <function Config.get_loop_factory at 0x73ec79a6c400>
│ │ │ │ │ └ <uvicorn.config.Config object at 0x73ec79e18210>
│ │ │ │ └ <uvicorn.server.Server object at 0x73ec79acf910>
│ │ │ └ None
│ │ └ <function Server.serve at 0x73ec79955760>
│ └ <uvicorn.server.Server object at 0x73ec79acf910>
└ <function asyncio_run at 0x73ec79aa6160>
File "/usr/local/lib/python3.11/site-packages/uvicorn/_compat.py", line 30, in asyncio_run
return runner.run(main)
│ │ └ <coroutine object Server.serve at 0x73ec7980b3d0>
│ └ <function Runner.run at 0x73ec79cf0fe0>
└ <asyncio.runners.Runner object at 0x73ec798b52d0>
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
│ │ │ └ <Task pending name='Task-1' coro=<Server.serve() running at /usr/local/lib/python3.11/site-packages/uvicorn/server.py:79> wai...
│ │ └ <cyfunction Loop.run_until_complete at 0x73ec798ba740>
│ └ <uvloop.Loop running=True closed=False debug=False>
└ <asyncio.runners.Runner object at 0x73ec798b52d0>
File "/app/backend/open_webui/main.py", line 1858, in process_chat
return await process_chat_response(response, ctx)
│ │ └ {'request': <starlette.requests.Request object at 0x73ebd815dcd0>, 'form_data': {'model': 'gpt-test', 'messages': [{'role': ...
│ └ <starlette.responses.StreamingResponse object at 0x73ebd8abb410>
└ <function process_chat_response at 0x73ec00f6f380>
File "/app/backend/open_webui/utils/middleware.py", line 5021, in process_chat_response
return await streaming_chat_response_handler(response, ctx)
│ │ └ {'request': <starlette.requests.Request object at 0x73ebd815dcd0>, 'form_data': {'model': 'gpt-test', 'messages': [{'role': ...
│ └ <starlette.responses.StreamingResponse object at 0x73ebd8abb410>
└ <function streaming_chat_response_handler at 0x73ec00f6f2e0>
File "/app/backend/open_webui/utils/middleware.py", line 4969, in streaming_chat_response_handler
return await response_handler(response, events)
│ │ └ []
│ └ <starlette.responses.StreamingResponse object at 0x73ebd8abb410>
└ <function streaming_chat_response_handler.<locals>.response_handler at 0x73ec092fc0e0>
File "/app/backend/open_webui/utils/middleware.py", line 4441, in response_handler
tool_result = await tool_function(**tool_function_params)
│ └ {}
└ <function get_tools.<locals>.make_tool_function.<locals>.tool_function at 0x73ec0938db20>
File "/app/backend/open_webui/utils/tools.py", line 129, in new_function
return await partial_func(*args, **kwargs)
│ │ └ {}
│ └ ()
└ functools.partial(<function get_tools.<locals>.make_tool_function.<locals>.tool_function at 0x73ec09365580>)
File "/app/backend/open_webui/utils/tools.py", line 348, in tool_function
return await execute_tool_server(
└ <function execute_tool_server at 0x73ec02616340>
Additional Information
No response
@tjbck commented on GitHub (Apr 21, 2026):
Likely addressed in dev.