[GH-ISSUE #11816] llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gpt-oss' #69902

Closed
opened 2026-05-04 19:44:38 -05:00 by GiteaMirror · 12 comments

Originally created by @yichuan1118 on GitHub (Aug 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11816

What is the issue?

Trying to load a quantized gpt-oss model, e.g.
https://huggingface.co/huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated

causes errors like:

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gpt-oss'

Relevant log output


time=2025-08-08T11:15:02.337-07:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/yichuan/.ollama/models/blobs/sha256-e01aba477beff0c8c43bf4c0faa8b1b14ceaa1adba8d7849a30cb8ba79a8eeda gpu=0 parallel=1 available=22906503168 required="15.9 GiB"
time=2025-08-08T11:15:02.337-07:00 level=INFO source=server.go:135 msg="system memory" total="32.0 GiB" free="14.9 GiB" free_swap="0 B"
time=2025-08-08T11:15:02.337-07:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[21.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="15.9 GiB" memory.required.partial="15.9 GiB" memory.required.kv="192.0 MiB" memory.required.allocations="[15.9 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="13.8 GiB" memory.weights.nonrepeating="586.8 MiB" memory.graph.full="256.0 MiB" memory.graph.partial="256.0 MiB"
llama_model_load_from_file_impl: using device Metal (Apple M2 Pro) - 21845 MiB free
llama_model_loader: loaded meta data with 40 key-value pairs and 459 tensors from /Users/yichuan/.ollama/models/blobs/sha256-e01aba477beff0c8c43bf4c0faa8b1b14ceaa1adba8d7849a30cb8ba79a8eeda (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gpt-oss
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Huihui Gpt Oss 20b BF16 Abliterated
llama_model_loader: - kv   3:                           general.finetune str              = abliterated
llama_model_loader: - kv   4:                           general.basename str              = Huihui-gpt-oss
llama_model_loader: - kv   5:                         general.size_label str              = 20B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Gpt Oss 20b BF16
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Unsloth
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/unsloth/gpt-os...
llama_model_loader: - kv  11:                               general.tags arr[str,5]       = ["vllm", "unsloth", "abliterated", "u...
llama_model_loader: - kv  12:                        gpt-oss.block_count u32              = 24
llama_model_loader: - kv  13:                     gpt-oss.context_length u32              = 131072
llama_model_loader: - kv  14:                   gpt-oss.embedding_length u32              = 2880
llama_model_loader: - kv  15:                gpt-oss.feed_forward_length u32              = 2880
llama_model_loader: - kv  16:               gpt-oss.attention.head_count u32              = 64
llama_model_loader: - kv  17:            gpt-oss.attention.head_count_kv u32              = 8
llama_model_loader: - kv  18:                     gpt-oss.rope.freq_base f32              = 150000.000000
llama_model_loader: - kv  19:   gpt-oss.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  20:                       gpt-oss.expert_count u32              = 32
llama_model_loader: - kv  21:                  gpt-oss.expert_used_count u32              = 4
llama_model_loader: - kv  22:               gpt-oss.attention.key_length u32              = 64
llama_model_loader: - kv  23:             gpt-oss.attention.value_length u32              = 64
llama_model_loader: - kv  24:           gpt-oss.attention.sliding_window u32              = 128
llama_model_loader: - kv  25:         gpt-oss.expert_feed_forward_length u32              = 2880
llama_model_loader: - kv  26:                  gpt-oss.rope.scaling.type str              = yarn
llama_model_loader: - kv  27:                gpt-oss.rope.scaling.factor f32              = 32.000000
llama_model_loader: - kv  28: gpt-oss.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = gpt-4o
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,201088]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,201088]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,446189]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 199998
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 200002
llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 199999
llama_model_loader: - kv  37:                    tokenizer.chat_template str              = {# Copyright 2025-present Unsloth. Ap...
llama_model_loader: - kv  38:               general.quantization_version u32              = 2
llama_model_loader: - kv  39:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  289 tensors
llama_model_loader: - type q5_0:  121 tensors
llama_model_loader: - type q8_0:   25 tensors
llama_model_loader: - type q4_K:   24 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 14.71 GiB (6.04 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gpt-oss'
llama_model_load_from_file_impl: failed to load model
time=2025-08-08T11:15:02.463-07:00 level=INFO source=sched.go:453 msg="NewLlamaServer failed" model=/Users/yichuan/.ollama/models/blobs/sha256-e01aba477beff0c8c43bf4c0faa8b1b14ceaa1adba8d7849a30cb8ba79a8eeda error="unable to load model: /Users/yichuan/.ollama/models/blobs/sha256-e01aba477beff0c8c43bf4c0faa8b1b14ceaa1adba8d7849a30cb8ba79a8eeda"

OS

MacOS 15.3.2 (24D81)

GPU

Apple M2 Pro

CPU

Apple M2 Pro

Ollama version

ollama version is 0.11.4

GiteaMirror added the bug label 2026-05-04 19:44:39 -05:00

@rick-github commented on GitHub (Aug 8, 2025):

This model is not quantized, so it needs to be quantized before importing. The model architecture is set to `gpt-oss`, whereas ollama is looking for an architecture of `gptoss`.
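
For anyone who wants to confirm which architecture string a downloaded blob actually carries, the GGUF key/value header can be read with nothing but the Python standard library. The sketch below is illustrative and not part of ollama or llama.cpp: it assumes a little-endian GGUF v2/v3 file (which covers files produced by mainstream converters), takes the blob path as its only argument, and prints the `general.architecture` value plus the remaining metadata key names while skipping all other values.

```python
#!/usr/bin/env python3
# Minimal GGUF metadata peek (standard library only; illustrative sketch).
# Prints the general.architecture value and the other metadata key names so
# you can see whether a blob says 'gpt-oss' or 'gptoss'.
import struct
import sys

# GGUF scalar value types and their sizes in bytes (per the GGUF spec).
SCALAR_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}
STRING, ARRAY = 8, 9

def read_string(f):
    (length,) = struct.unpack("<Q", f.read(8))
    return f.read(length).decode("utf-8", errors="replace")

def skip_value(f, vtype):
    if vtype in SCALAR_SIZES:
        f.read(SCALAR_SIZES[vtype])
    elif vtype == STRING:
        read_string(f)
    elif vtype == ARRAY:
        (elem_type,) = struct.unpack("<I", f.read(4))
        (count,) = struct.unpack("<Q", f.read(8))
        for _ in range(count):
            skip_value(f, elem_type)
    else:
        raise ValueError(f"unknown GGUF value type {vtype}")

with open(sys.argv[1], "rb") as f:
    assert f.read(4) == b"GGUF", "not a GGUF file"
    (version,) = struct.unpack("<I", f.read(4))
    n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
    print(f"GGUF v{version}: {n_tensors} tensors, {n_kv} metadata pairs")
    for _ in range(n_kv):
        key = read_string(f)
        (vtype,) = struct.unpack("<I", f.read(4))
        if key == "general.architecture" and vtype == STRING:
            print("general.architecture =", repr(read_string(f)))
        else:
            print(key)
            skip_value(f, vtype)
```

Run against the blob path shown in the log above, this should print `general.architecture = 'gpt-oss'` for these Hugging Face quants, which is exactly the string the loader rejects.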


@gwehmeyer commented on GitHub (Aug 8, 2025):

https://huggingface.co/mradermacher/Huihui-gpt-oss-20b-BF16-abliterated-GGUF
The quants have 'gpt-oss' but ollama is looking for 'gptoss'.

Who is correct?


@chaserhkj commented on GitHub (Aug 8, 2025):

@gwehmeyer Apparently ollama is using an older version of llama.cpp, which does not have gpt-oss support; ollama implemented its own gpt-oss support instead, see here:
https://github.com/ollama/ollama/blob/114c3f22657750cfb57f70c4a0d6e7389fb7a9fe/fs/ggml/ggml.go#L175-L185

The ollama implementation evidently uses `gptoss`, while llama.cpp uses `gpt-oss`.

Theoretically you could work around this by changing the field in the GGUF file using [this script](https://github.com/ggml-org/llama.cpp/blob/master/gguf-py/gguf/scripts/gguf_set_metadata.py), but I haven't tried that yet.


@chaserhkj commented on GitHub (Aug 8, 2025):

OK, that script didn't work. It seems it cannot change string fields.


@AncientMystic commented on GitHub (Aug 8, 2025):

I'm having the same issue: only the gpt-oss model from ollama works, and all the models on Hugging Face fail. I've downloaded 5 different models from different people just to try, and all of them give exactly the same error as the one above.


@chaserhkj commented on GitHub (Aug 9, 2025):

I tried to modify the architecture fields of these quantized GGUF files using a modified version of [this script](https://github.com/ggml-org/llama.cpp/blob/master/gguf-py/gguf/scripts/gguf_new_metadata.py).

However, I am met with another Go panic:

panic: runtime error: index out of range [0] with length 0

goroutine 52 [running]:
github.com/ollama/ollama/ml/backend/ggml.New({0x7ffc55139d52, 0x62}, {0x10, 0x0, 0x1, {0x0, 0x0, 0x0}, 0x0})
        github.com/ollama/ollama/ml/backend/ggml/ggml.go:324 +0x34b3
github.com/ollama/ollama/ml.NewBackend({0x7ffc55139d52, 0x62}, {0x10, 0x0, 0x1, {0x0, 0x0, 0x0}, 0x0})
        github.com/ollama/ollama/ml/backend.go:209 +0xb1
github.com/ollama/ollama/model.New({0x7ffc55139d52?, 0x0?}, {0x10, 0x0, 0x1, {0x0, 0x0, 0x0}, 0x0})
        github.com/ollama/ollama/model/model.go:102 +0x8f
github.com/ollama/ollama/runner/ollamarunner.(*Server).initModel(0xc000555d40, {0x7ffc55139d52?, 0x0?}, {0x10, 0x0, 0x1, {0x0, 0x0, 0x0}, 0x0}, ...)
        github.com/ollama/ollama/runner/ollamarunner/runner.go:841 +0x8d
github.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc000555d40, {0x55a08617c450, 0xc0003a3090}, {0x7ffc55139d52?, 0x0?}, {0x10, 0x0, 0x1, {0x0, 0x0, ...}, ...}, ...)
        github.com/ollama/ollama/runner/ollamarunner/runner.go:878 +0xb8
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        github.com/ollama/ollama/runner/ollamarunner/runner.go:959 +0xa11

At this point I believe the ollama and llama.cpp implementations of the gpt-oss architecture differ in something fundamental, and there are probably no shortcuts until ollama fully adopts the llama.cpp implementation or changes its own implementation to be compatible. I am sticking with llama.cpp to run gpt-oss for the moment.


@Teravus commented on GitHub (Aug 10, 2025):

Just want to add that this happens for all of the unsloth GGUF versions with gpt-oss as the architecture on ollama 0.11.4:
llama_model_loader: - type f32: 289 tensors
llama_model_loader: - type q5_1: 1 tensors
llama_model_loader: - type q8_0: 169 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 20.55 GiB (8.44 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gpt-oss'
llama_model_load_from_file_impl: failed to load model.
There are several open bugs about this. Another one: #11714


@AncientMystic commented on GitHub (Aug 10, 2025):

> Just want to add that this happens for all of the unsloth GGUF versions with gpt-oss as the architecture on ollama 0.11.4:
> llama_model_loader: - type f32: 289 tensors
> llama_model_loader: - type q5_1: 1 tensors
> llama_model_loader: - type q8_0: 169 tensors
> print_info: file format = GGUF V3 (latest)
> print_info: file type = Q8_0
> print_info: file size = 20.55 GiB (8.44 BPW)
> llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gpt-oss'
> llama_model_load_from_file_impl: failed to load model.
> There are several open bugs about this. Another one: #11714

Yeah, I personally can't find one on Hugging Face that ollama doesn't error out with. I have tried 6 or 7 of them now; just hoping they fix it at this point.


@jeepshop commented on GitHub (Aug 14, 2025):

I was wondering the same thing and noticed the repository's copy of the llama.cpp source is over 3 months old. Maybe that's the issue?

Everything in here has a last commit date of "3 months ago":

https://github.com/ollama/ollama/tree/main/llama/llama.cpp


@rick-github commented on GitHub (Aug 14, 2025):

https://github.com/ollama/ollama/pull/11823 will merge the upstream implementation of MXFP4 and allow ollama to load gpt-oss models from external repos.


@AncientMystic commented on GitHub (Aug 19, 2025):

> https://github.com/ollama/ollama/pull/11823 will merge the upstream implementation of MXFP4 and allow ollama to load gpt-oss models from external repos.

As of the v0.11.5 pre-release it works now, but it's still buggy.

I get errors that reasoning effort (kind of important on gpt-oss) and logit_bias are unknown for gpt-oss, and for some reason the models just don't respond (although they do load now without error); they seem to just hang for a while and stop. (I could only manage to get a few pruned 6B versions to even respond, but they were terrible.)


@rick-github commented on GitHub (Aug 19, 2025):

> I get errors that reasoning effort (kind of important on gpt-oss)

$ for r in low medium high ; do printf "%-6s:" $r ; curl -s localhost:11434/v1/chat/completions -d '{"model":"gpt-oss:20b","messages":[{"role":"user","content":"hello"}],"reasoning_effort":"'$r'"}' | jq '.choices[0].message.reasoning' ; done
low   :"Need a friendly greeting."
medium:"We need to provide a response to a greeting. Just say hello, friendly."
high  :"The user just wrote \"hello\". The assistant should respond with a friendly greeting. Possibly ask what they need help with. This is basic. The conversation is very short. There's no other context. The assistant should likely say something like \"Hello! How can I help you today?\" Or a short greeting: \"Hi! How can I assist you?\" They might also ask if there's something specific they need. So respond accordingly."

> and logit_bias

#2415

> for some reason the models just don't respond (although they do load now without error),

[Logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues)?
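
For reference, here is the same reasoning-effort check as the curl loop above, expressed in Python with the requests package (assumed to be installed). The endpoint, payload, and model name are taken verbatim from that command, so adjust them if your server address or model tag differs.

```python
# Same check as the curl loop above: ask the OpenAI-compatible endpoint for a
# reply at each reasoning effort and print the returned reasoning text.
import requests

for effort in ("low", "medium", "high"):
    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        json={
            "model": "gpt-oss:20b",
            "messages": [{"role": "user", "content": "hello"}],
            "reasoning_effort": effort,
        },
        timeout=300,
    )
    resp.raise_for_status()
    message = resp.json()["choices"][0]["message"]
    print(f"{effort:<6}: {message.get('reasoning')}")
```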

Reference: github-starred/ollama#69902