feat: Add option in admin panel to set new API parameter for reasoning in GPT-5 #5990

Closed
opened 2025-11-11 16:41:43 -06:00 by GiteaMirror · 9 comments

Originally created by @karolkt1 on GitHub (Aug 8, 2025).

Check Existing Issues

  • I have searched the existing issues and discussions.

Problem Description

According to OpenAI docs, GPT-5 supports a reasoning parameter to control reasoning effort.
Currently, there is no way in OpenWebUI to pass this parameter to the API, which means we can't set it to "minimal" to improve response speed. By default, without this control, GPT-5 can be extremely slow.

Image

Desired Solution you'd like

New parameter in the admin menu, where the red arrow is

Image

Alternatives Considered

No response

Additional Context

No response


@rgaricano commented on GitHub (Aug 8, 2025):

Right now, you can use a custom parameter (at the end of advanced params)


@karolkt1 commented on GitHub (Aug 8, 2025):

> Right now, you can use a custom parameter (at the end of advanced params)

Could you help me with how exactly? I tried some variations of reasoning/effort and always got errors. The other parameters visible in the screenshot work just fine.

Image
Image

@17jmumford commented on GitHub (Aug 8, 2025):

Reasoning effort is not available on the /chat/completions endpoints. This is true for both OpenAI and LiteLLM.
https://docs.litellm.ai/docs/completion/input

It's only available on the new Responses API. You would have to use Open WebUI's fancy pipeline/valve stuff to get it to work
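The pipeline/valve route mentioned above could look roughly like the sketch below: an Open WebUI filter whose `inlet` hook injects `reasoning_effort` into the outgoing request body before it reaches the backend. The class/valve shape follows Open WebUI's filter convention, but treat the exact interface and the model-name check as assumptions, not the official API.

```python
class Filter:
    """Sketch of an Open WebUI inlet filter (interface assumed, not official)."""

    class Valves:
        # Hypothetical valve: which effort level to inject.
        def __init__(self, reasoning_effort: str = "minimal"):
            self.reasoning_effort = reasoning_effort

    def __init__(self):
        self.valves = self.Valves()

    def inlet(self, body: dict) -> dict:
        # Only touch models that accept the parameter (GPT-5 here);
        # other models would reject an unknown field.
        if body.get("model", "").startswith("gpt-5"):
            body["reasoning_effort"] = self.valves.reasoning_effort
        return body
```

With this installed, a chat request routed through the filter would carry `"reasoning_effort": "minimal"` without any per-request configuration.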


@jsweetzer-ea commented on GitHub (Aug 8, 2025):

Reasoning effort is available in both. See here:

https://platform.openai.com/docs/guides/latest-model#migrating-from-chat-completions-to-responses-api

curl --request POST \
  --url https://api.openai.com/v1/chat/completions \
  --header "Authorization: Bearer $OPENAI_API_KEY" \
  --header 'Content-type: application/json' \
  --data '{
    "model": "gpt-5",
    "messages": [
      {
        "role": "user",
        "content": "How much gold would it take to coat the Statue of Liberty in a 1mm layer?"
      }
    ],
    "reasoning_effort": "minimal"
  }'

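The same request expressed in Python, to make the payload shape explicit: `reasoning_effort` sits at the top level of the JSON body, alongside `model` and `messages`. The accepted values shown in the comment are from the OpenAI docs; the snippet only builds the body, you would POST it to `/v1/chat/completions` with your `Authorization` header exactly as in the curl above.

```python
import json

payload = {
    "model": "gpt-5",
    "messages": [
        {
            "role": "user",
            "content": "How much gold would it take to coat the "
                       "Statue of Liberty in a 1mm layer?",
        }
    ],
    # Per OpenAI docs: "minimal" | "low" | "medium" | "high"
    "reasoning_effort": "minimal",
}

body = json.dumps(payload)  # JSON string to send as the request body
```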

@karolkt1 commented on GitHub (Aug 9, 2025):

> Reasoning effort is available in both. See here:
>
> https://platform.openai.com/docs/guides/latest-model#migrating-from-chat-completions-to-responses-api
>
> curl --request POST --url https://api.openai.com/v1/chat/completions --header "Authorization: Bearer $OPENAI_API_KEY" --header 'Content-type: application/json' --data '{ "model": "gpt-5", "messages": [ { "role": "user", "content": "How much gold would it take to coat the Statue of Liberty in a 1mm layer?" } ], "reasoning_effort": "minimal" }'

I confirm that it works in /chat/completions. I tried two curls, one with minimal and one with high; with high reasoning the response time was almost five times longer.
So I hope the developers can add "reasoning" in the right place in the call.

Below is a successful call with minimal reasoning and a fast answer
Image


@decent-engineer-decent-datascientist commented on GitHub (Aug 11, 2025):

Any updates on this? I too was not able to leverage reasoning_effort in the params.


@gaby commented on GitHub (Aug 11, 2025):

Reasoning effort is already there below top_p in the model parameters.

It defaults to "medium" as a string.


@karolkt1 commented on GitHub (Aug 11, 2025):

> Reasoning effort is already there below top_p in the model parameters.
>
> It defaults to "medium" as a string.

It doesn't work, and it was the first thing I tested. Try enabling it and then sending a request to GPT-5. I receive the following response:

Image

@karolkt1 commented on GitHub (Aug 12, 2025):

> Any updates on this? I too was not able to leverage reasoning_effort in the params.

If you are also using LiteLLM it was fixed in https://github.com/BerriAI/litellm/releases/tag/litellm_v1.75.5-dev_memory_fix


Reference: github-starred/open-webui#5990