[GH-ISSUE #14752] ollama launch codex - ⚠ Model metadata for gpt-oss:20b not found. Defaulting to fallback metadata; this can degrade performance and cause issues. #71598

Closed
opened 2026-05-05 02:14:04 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @vt-alt on GitHub (Mar 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14752

What is the issue?

Tested on 0.17.7 and codex-cli 0.104.0.

  1. Frequent warning: ⚠ Model metadata for gpt-oss:20b not found. Defaulting to fallback metadata; this can degrade performance and cause issues.
  2. The requested task frequently just stops.
  3. Patches fail to apply, with errors like these:
• - Update File: path ..."]}. So we need to escape newlines. In the JSON, we need to encode newline as \n. The example from system used raw string with
    newlines? Actually the assistant earlier wrote apply_patch with plain text. Maybe the tool supports multiline string as is? In the earlier conversation,
    the assistant used apply_patch like:

  {"command":["apply_patch","*** Begin Patch\\n*** Update File: path/...\\n@@ ..."]}

  But not with escapes. The error "unsupported call: apply_patch" may be due to quoting issues. Let's try using triple quotes but escape newline? Let's attempt
  minimal patch.

  We'll patch the section in a simpler way: replace the lines. Provide patch as a single string with newline escapes.

  We'll craft patch string manually: "*** Begin Patch\n*** Update File: bzlib.c\n@@\n-   bzf->strm.bzalloc  = NULL;\n-   bzf->strm.bzfree   = NULL;\n-
  bzf->strm.opaque   = NULL;\n+   bzf->strm.bzalloc  = NULL;\n+   bzf->strm.bzfree   = NULL;\n+   /* Preserve small flag /\n+   bzf->strm.opaque   = (void)
  (intptr_t)small;\n*** End Patch"

  Let's try.
• Ran apply_patch
  └ /bin/bash: line 1: apply_patch: command not found

Then it went down a rabbit hole trying to figure out why the `apply_patch` tool does not work.

So basically `ollama launch codex` does not work with `gpt-oss:20b`. This might be a `codex-cli` issue, but `ollama launch codex` is expected to configure it to work properly, isn't it?

Repro

I request a simple change to modify bzip2:

$ git clone https://sourceware.org/git/bzip2.git
$ cd bzip2
$ ollama launch codex --model gpt-oss:20b
> Implement multiple concatenated streams support in `BZ2_bzread` (zlib-compatibility API). 

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.17.7

GiteaMirror added the bug label 2026-05-05 02:14:04 -05:00

@coderenewables commented on GitHub (Mar 13, 2026):

trying to run same with qwen3:4b model. same error!


@Bombe commented on GitHub (Mar 15, 2026):

The initial issue with gpt-oss:20b also happens on macOS 15.7.4 with ollama 0.18.0 and codex-cli 0.114.0.


@loonery commented on GitHub (Mar 15, 2026):

Same as other users, I am experiencing this problem with Ollama 0.17.7 and codex-cli 0.114.0 on NixOS. Changing the model does not help, even if the model name is unhyphenated, which I thought might have been the problem given the behavior described in https://github.com/openai/codex/issues/14276

Everything was working before the upgrade from codex 0.92, so this may be a codex issue.

I opened an issue with codex cli: https://github.com/openai/codex/issues/14757


@ronaldpetty commented on GitHub (Mar 24, 2026):

I have the same issue in reverse: Codex using Ollama. I am guessing the tool launch order doesn't matter here. On M5 / Tahoe. Just sharing on the off chance it helps.


@loonery commented on GitHub (Mar 24, 2026):

@ronaldpetty If you look at the issue I opened on Codex's repo, you can see that I found a workaround. I think this is a regression that was introduced for custom models in Codex, and there does not seem to be a huge appetite to fix it.

The key is to make sure that in your config you set a `model_catalog_json` path pointing to a local `models.json` file. That file should look like https://github.com/openai/codex/blob/main/codex-rs/core/models.json, but you should modify it to include the model names you're looking for. The `slug` and `display_name` keys should match your specified model name. For me, that meant replacing all `gpt-oss-20b` instances with `gpt-oss:20b`.

This is, at least, what I did for gpt-oss:20b. I don't know whether this will work for other custom models.
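Based on the comment above, a minimal entry in the local `models.json` might look like the following sketch. Only the `slug` and `display_name` keys come from the workaround described here; the surrounding structure and the other field names are assumptions — copy the real schema from the upstream `codex-rs/core/models.json` file linked above and edit it in place:

```json
{
  "models": [
    {
      "slug": "gpt-oss:20b",
      "display_name": "gpt-oss:20b",
      "context_window": 131072,
      "max_output_tokens": 32768
    }
  ]
}
```

Then point Codex at the file via the `model_catalog_json` setting in its config.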


@ronaldpetty commented on GitHub (Mar 24, 2026):

Thanks @loonery , I'll give it a whirl. My initial checks were leading that direction, thanks for clearing a path!


@ParthSareen commented on GitHub (Apr 15, 2026):

I believe this should be gone with some of our recent releases.


@savanthongvanh commented on GitHub (Apr 16, 2026):

I was still getting this with v121, btw. I'm expecting it to figure out how to use curl or something and go get the weather.

Image

@kazaff commented on GitHub (Apr 21, 2026):

Image

v122 is still having this issue.


@namitha393 commented on GitHub (Apr 26, 2026):

v125 has this issue

Image

@vt-alt commented on GitHub (Apr 27, 2026):

> I believe this should be gone with some of our recent releases.

@ParthSareen This is not resolved. Latest ollama 0.21.2 + latest OpenAI Codex (v0.125.0):

$ ollama launch codex
╭──────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.125.0)                       │
│                                                  │
│ model:     gpt-oss:20b medium   /model to change │
│ directory: ~/src/ollama                          │
╰──────────────────────────────────────────────────╯

  Tip: New Build faster with Codex.

› hi

⚠ Model metadata for `gpt-oss:20b` not found. Defaulting to fallback metadata; this can degrade performance and cause issues.

• Hello! How can I help you today?
Token usage: total=7,744 input=7,710 output=34

Can you reopen the issue, please?

Reference: github-starred/ollama#71598