[GH-ISSUE #12606] GPT-OSS:20b reasoning loop when reasoning==high #8368

Open
opened 2026-04-12 20:58:45 -05:00 by GiteaMirror · 36 comments

Originally created by @jbcallaghan on GitHub (Oct 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12606

What is the issue?

Setting GPT-OSS:20b reasoning to high randomly results in reasoning loops where the model keeps repeating itself. I have tested this with LangGraph and a tool node, so `bind_tools=True`.

The solution has around 8 available tools that are all quite distinct in their functions.
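For context, the setup above boils down to the raw `/api/chat` request it ultimately produces. The sketch below is a minimal illustration, not the reporter's code: the tool name, prompt, and schema are invented, and the `think` field is assumed to accept a reasoning-effort string as in recent Ollama API versions.

```python
import json

def build_chat_request(model: str, prompt: str, effort: str) -> dict:
    """Build an Ollama /api/chat payload with one example tool schema attached."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "think": effort,  # "low" / "medium" / "high" reasoning effort
        "stream": False,
        "tools": [
            {
                "type": "function",
                "function": {
                    # "search_documents" is an invented tool name for illustration.
                    "name": "search_documents",
                    "description": "Search the document store.",
                    "parameters": {
                        "type": "object",
                        "properties": {"query": {"type": "string"}},
                        "required": ["query"],
                    },
                },
            }
        ],
    }

payload = build_chat_request("gpt-oss:20b", "Summarise open tickets.", "high")
print(json.dumps(payload, indent=2))
```

With around 8 such tool schemas attached and `think` set to `"high"`, this is the shape of request that reportedly triggers the loop.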

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.12.5

GiteaMirror added the bug label 2026-04-12 20:58:45 -05:00

@rick-github commented on GitHub (Oct 14, 2025):

[Server log](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.


@jbcallaghan commented on GitHub (Oct 15, 2025):

I enabled debug logging with debug=1 and monitored it using `journalctl -u ollama --no-pager --follow --pager-end`.

However, nothing out of the ordinary shows up when the model enters one of these reasoning loops.

Setting reasoning effort to low or medium never produces the same error. The model is running on 2× RTX 4090s with a context window of 30,000.

There is also no chat history; it will loop randomly on the first query. I even tried the most basic system prompt to see if that was the issue.


@Ithrial commented on GitHub (Oct 16, 2025):

I'm seeing the same thing with GPT-OSS:20b as well, with both medium and high reasoning, running on my 2× RTX 4090s, both in a vanilla download directly in the Ollama Docker container with no advanced parameters AND in OpenWebUI with advanced parameters configured.

Same model downloaded and running on my LM Studio rig: no craziness.

Update: just configured gpt-oss-20b on vLLM as well (same 3090 rig) and it answers no problem.

Update #2: I just downgraded from v0.12.5 to v0.12.3 and GPT-OSS:20B is back to normal, able to call tools etc.


@rick-github commented on GitHub (Oct 16, 2025):

https://github.com/ollama/ollama/issues/12606#issuecomment-3401080560


@jbcallaghan commented on GitHub (Oct 17, 2025):

> I'm seeing the same things with GPT-OSS:20b as well with both medium and high reasonings running on my 2x RTX 4090s in both a vanilla download directly in the Ollama docker with no advanced parameters AND in OpenWebUI with advanced parameters configured.
>
> Same model downloaded and running on my LM studio rig - no craziness
>
> Update: just configured gpt-oss-20b as well on vllm (same 3090 rig) and answers no problem
>
> Update #2: I just downgraded to v0.12.5 and GPT-OSS:20B is back to normal, able to call tools etc.

@Ithrial What version were you running before? I am seeing the issue with 0.12.5.

Interesting you are running the same setup and had no issues with LM Studio.

I have tried running the model on just one card and split across both cards; the issue is always there.


@Ithrial commented on GitHub (Oct 17, 2025):

Sorry, that was a typo. I was on 0.12.5 and downgraded to 0.12.3.


@jbcallaghan commented on GitHub (Oct 17, 2025):

@Ithrial I thought you might have been on a pre-release version. Interesting that 12.3 works


@Ithrial commented on GitHub (Oct 18, 2025):

> @Ithrial I thought you might have been on a pre-release version. Interesting that 12.3 works

I did just try 0.12.6, as it came out today. Same issue: a logic/self-argument loop.


@jbcallaghan commented on GitHub (Oct 21, 2025):

@Ithrial Like you, I can confirm the same issue persists with 0.12.6.

Can someone please look at what changed from 0.12.3 onwards?


@rick-github commented on GitHub (Oct 21, 2025):

![Image](https://github.com/user-attachments/assets/8b046dff-9589-426d-97e8-0aec67f24117)

@jbcallaghan commented on GitHub (Oct 21, 2025):

I started the query at 8:37:25 and it almost immediately went into a reasoning loop. I previously managed about 3 runs of the same query without issue, each starting without any previous chat history. I manually stopped the task as it was stuck repeating the same step over and over again.

[log.txt](https://github.com/user-attachments/files/23016864/log.txt)


@rick-github commented on GitHub (Oct 21, 2025):

Full log. Also set `OLLAMA_DEBUG=2`.
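For anyone else gathering logs: on a systemd-based Linux install (an assumption; Docker and Windows setups differ), the debug level can be set via a service override, which is the standard systemd mechanism also used in Ollama's own troubleshooting docs:

```
# /etc/systemd/system/ollama.service.d/override.conf
# (created with: sudo systemctl edit ollama)
[Service]
Environment="OLLAMA_DEBUG=2"
```

Then `sudo systemctl daemon-reload && sudo systemctl restart ollama`, reproduce the loop, and capture the journal.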


@rick-github commented on GitHub (Oct 21, 2025):

```
journalctl -u ollama --no-pager --since "$(systemctl show ollama --property=ActiveEnterTimestamp --value)"
```

@jbcallaghan commented on GitHub (Oct 21, 2025):

[log.zip](https://github.com/user-attachments/files/23019034/log.zip)


@nfsecurity commented on GitHub (Oct 21, 2025):

I am having the same issue with "loops" in responses, but only when I set `"think": "high"` in the API call.


@rick-github commented on GitHub (Oct 21, 2025):

https://github.com/ollama/ollama/issues/12606#issuecomment-3424985933


@jbcallaghan commented on GitHub (Oct 23, 2025):

@rick-github Did you manage to find anything in the logs I provided?


@rick-github commented on GitHub (Oct 23, 2025):

Thanks for the logs. So far I've been unable to reproduce the problem; I will have some free time to dig into the logs further tomorrow.


@jbcallaghan commented on GitHub (Oct 24, 2025):

@rick-github A bit more information for you: I found the reasoning most likely loops when using a model with `.bind_tools`. The reasoning seems to get stuck on the next step to take, where it keeps repeating the tool or tools it should call next.


@jbcallaghan commented on GitHub (Oct 30, 2025):

I tried 0.12.7 and the issue still persists.


@jbcallaghan commented on GitHub (Oct 31, 2025):

![Image](https://github.com/user-attachments/assets/fcf8bd31-aa0f-4a42-87fc-6eede0e74121)

I see it's very similar to what was reported here:
https://www.reddit.com/r/ollama/comments/1o7u30c/reported_bug_gptoss20b_reasoning_loop_in_0125/


@Ithrial commented on GitHub (Oct 31, 2025):

Lol, that's my Reddit post. I cross-posted for awareness, given how popular gpt-oss:20b is.


@jbcallaghan commented on GitHub (Oct 31, 2025):

I think I got to the bottom of my issue: if I use top_p or top_k with reasoning effort set to high, it goes into a loop; if I remove top_p and top_k, it works.


@jbcallaghan commented on GitHub (Oct 31, 2025):

Fiddling around with a combination of temperature, top_k and top_p can make it work:

- `temperature=0.4` or higher
- `top_k=200` or higher

Or simply don't set top_k or top_p at all.
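The two workarounds described above can be sketched as the `options` field of an Ollama `/api/chat` request body. This is a hypothetical illustration of the reported workaround, not official guidance; the model name and prompt are placeholders.

```python
def sampling_options(workaround: str) -> dict:
    """Return sampler options for one of the two reported workaround strategies."""
    if workaround == "retune":
        # Reported to avoid the loop: temperature >= 0.4 combined with top_k >= 200.
        return {"temperature": 0.4, "top_k": 200}
    if workaround == "defaults":
        # Set no sampler options at all and let the model's defaults apply.
        return {}
    raise ValueError(f"unknown workaround: {workaround}")

request_body = {
    "model": "gpt-oss:20b",
    "messages": [{"role": "user", "content": "What should I do next?"}],
    "think": "high",
    "options": sampling_options("retune"),
}
```

Passing an empty `options` dict (the `"defaults"` branch) is equivalent to omitting the parameters entirely.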


@Ithrial commented on GitHub (Oct 31, 2025):

My top_p is 0.95, temp 0.8 and top_k 40. I might play around as well and see. I'm interested in Qwen 3 VL, which requires an update.


@thzzlan commented on GitHub (Nov 1, 2025):

I'm having the same issue. I noticed it while trying to set up an n8n AI agent flow: GPT-OSS:20b kept calling the same tool again and again, even though the tool had already returned a successful response. For some reason I'm not seeing the issue in Open WebUI though, not sure why.


@Ithrial commented on GitHub (Nov 2, 2025):

So I tried removing top_p and top_k (both set to default) with the temperature set to 0.8 - still looped gibberish.


@jbcallaghan commented on GitHub (Nov 2, 2025):

@Ithrial what have you got `num_predict` and `num_ctx` set to?


@Ithrial commented on GitHub (Nov 2, 2025):

Context is at 64k, but I usually run at the max 132k; predict (max tokens in OWUI) is at 8k.


@Ithrial commented on GitHub (Nov 3, 2025):

![Image](https://github.com/user-attachments/assets/15a609b1-0803-4f34-9eaf-559cf8935d80)

I'm really starting to wonder if the issue is not in Ollama but in gpt-oss:20b itself: this is with llama-swap and llama.cpp, with temp at 0.8, max tokens at 8k, top_k at 40, top_p at 0.8 and a context of 64k.


@jbcallaghan commented on GitHub (Nov 3, 2025):

@Ithrial Interesting that it does it outside of Ollama. Have you tried setting top_k to, say, 500 and top_p to 0.95?


@nfsecurity commented on GitHub (Nov 7, 2025):

> So I tried removing Top_P and Top_k (both set to default) with a temp set to 0.8 - still looped gibberish.

Same for me.


@ParthSareen commented on GitHub (Nov 8, 2025):

Hey folks. This model is pretty sensitive and meant to be run with default parameters. Unless there's something very particular you want to do, I'd recommend running with those. Are you guys finding issues with the default params too?


@Ithrial commented on GitHub (Nov 8, 2025):

> Hey folks. This model is pretty sensitive and meant to be run with default parameters. Unless there's something very particular you want to do I'd recommend running with those. Are you guys finding issues with the default params too?

Yes. One of the tests I did was downloading and interacting with Ollama and the model directly through the container, with just `ollama run gpt-oss:latest`. Can't get much more default than that, and it hallucinated right in the terminal.


@ParthSareen commented on GitHub (Nov 8, 2025):

> > Hey folks. This model is pretty sensitive and meant to be run with default parameters. Unless there's something very particular you want to do I'd recommend running with those. Are you guys finding issues with the default params too?
>
> Yes one of the tests I did was downloading and interacting with Ollama/model direct thru the container - just Ollama Run gpt-oss:latest - can't get much more default than that - hallucinated right in the terminal.

Can you send me a script to repro with? Thanks!


@nfsecurity commented on GitHub (Nov 11, 2025):

> Hey folks. This model is pretty sensitive and meant to be run with default parameters. Unless there's something very particular you want to do I'd recommend running with those. Are you guys finding issues with the default params too?

It's been two days without reasoning loops at "high". The only change I made was:

```
#PARAMETER temperature 1.0
#PARAMETER top_p 1.0
#PARAMETER top_k 0
#PARAMETER num_ctx 16384
#PARAMETER num_batch 2048
```

I disabled all the parameters in the Modelfile. If I re-enable them, the reasoning loop returns, again only at the "high" level.

Hope this helps!
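The workaround above amounts to re-creating the model from a Modelfile with every `PARAMETER` line commented out, so the model runs on its built-in defaults. A hypothetical sketch (the tag `gpt-oss-default` is invented, and `ollama` must be installed for the final, commented-out steps):

```shell
# Write a Modelfile with all sampler/context PARAMETER lines disabled,
# mirroring the commented-out settings reported above.
cat > Modelfile.default <<'EOF'
FROM gpt-oss:20b
#PARAMETER temperature 1.0
#PARAMETER top_p 1.0
#PARAMETER top_k 0
#PARAMETER num_ctx 16384
#PARAMETER num_batch 2048
EOF

# Count the disabled PARAMETER lines as a quick sanity check.
grep -c '^#PARAMETER' Modelfile.default

# On the Ollama host, build and run the defaults-only variant:
#   ollama create gpt-oss-default -f Modelfile.default
#   ollama run gpt-oss-default
```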

Reference: github-starred/ollama#8368