[GH-ISSUE #14086] qwen3 next Unsloth Dynamic Quants missing tensor #34957

Open
opened 2026-04-22 19:02:25 -05:00 by GiteaMirror · 21 comments
Owner

Originally created by @GitUsers1234 on GitHub (Feb 5, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14086

What is the issue?

I get the error below when running qwen3 next or qwen3 next coder (in Unsloth Dynamic 2.0 Quants). Please help.

Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'

Relevant log output


OS

Win11

GPU

Nvidia RTX PRO 1000 Blackwell 8G

CPU

Ultra 9 285HX

Ollama version

ollama version is 0.15.5-rc3
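For context on the error: the model blob is a GGUF v3 file, and the loader aborts because the tensor named `blk.0.ssm_in.weight` is not among the tensors declared in the file. As a minimal sketch of the GGUF header layout involved (stdlib only; the byte values here are synthetic, mirroring the 843 tensors / 52 metadata pairs reported in the server logs, not read from a real blob):

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    # GGUF header: 4-byte magic "GGUF", uint32 version,
    # uint64 tensor count, uint64 metadata KV count (little-endian).
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header matching the counts shown in the log below:
# GGUF V3, 843 tensors, 52 metadata key-value pairs.
header = struct.pack("<4sIQQ", b"GGUF", 3, 843, 52)
info = parse_gguf_header(header)
print(info)  # {'version': 3, 'tensors': 843, 'metadata_kv': 52}
```

The tensor name table follows this header, which is where a loader checks for entries like `blk.0.ssm_in.weight`; a quant whose name table lacks a tensor the architecture expects will fail exactly as reported.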

GiteaMirror added the bug label 2026-04-22 19:02:25 -05:00
Author
Owner

@snapo commented on GitHub (Feb 5, 2026):

If it helps the Ollama team: I downloaded 0.15.5 RC3 (and just now also tested RC4) and hit the exact same error.

Command used to launch it on my 2 x RTX 22 GB GPU system:
ollama run hf.co/unsloth/Qwen3-Coder-Next-GGUF:UD-Q3_K_XL

Here are the Ollama server logs:

snapo@snabox:~$ sudo journalctl -u ollama --no-pager --follow
ก.พ. 05 12:20:06 snabox ollama[2087803]: panic: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e
ก.พ. 05 12:20:06 snabox ollama[2087803]: goroutine 24 [running]:
ก.พ. 05 12:20:06 snabox ollama[2087803]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc00001a640, {{0xc0001e0de0, 0x2, 0x2}, 0x28, 0x0, 0x1, {0xc0001e0dc8, 0x2, 0x2}, ...}, ...)
ก.พ. 05 12:20:06 snabox ollama[2087803]:         github.com/ollama/ollama/runner/llamarunner/runner.go:843 +0x33f
ก.พ. 05 12:20:06 snabox ollama[2087803]: created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 7
ก.พ. 05 12:20:06 snabox ollama[2087803]:         github.com/ollama/ollama/runner/llamarunner/runner.go:934 +0x889
ก.พ. 05 12:20:06 snabox ollama[2087803]: time=2026-02-05T12:20:06.995+07:00 level=ERROR source=server.go:303 msg="llama runner terminated" error="exit status 2"
ก.พ. 05 12:20:06 snabox ollama[2087803]: time=2026-02-05T12:20:06.995+07:00 level=INFO source=sched.go:490 msg="Load failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e error="llama runner process no longer running: 2 error loading model: missing tensor 'blk.0.ssm_in.weight'\nllama_model_load_from_file_impl: failed to load model"
ก.พ. 05 12:20:07 snabox ollama[2087803]: [GIN] 2026/02/05 - 12:20:07 | 500 |  1.670102026s |       127.0.0.1 | POST     "/api/generate"
ก.พ. 05 12:25:01 snabox ollama[2087803]: [GIN] 2026/02/05 - 12:25:01 | 200 |      39.961µs |       127.0.0.1 | GET      "/api/version"
ก.พ. 05 12:40:11 snabox ollama[2087803]: [GIN] 2026/02/05 - 12:40:11 | 200 |       20.29µs |       127.0.0.1 | HEAD     "/"
ก.พ. 05 12:40:12 snabox ollama[2087803]: [GIN] 2026/02/05 - 12:40:12 | 200 |   97.311509ms |       127.0.0.1 | POST     "/api/show"
ก.พ. 05 12:40:12 snabox ollama[2087803]: [GIN] 2026/02/05 - 12:40:12 | 200 |  148.305331ms |       127.0.0.1 | POST     "/api/show"
ก.พ. 05 12:40:12 snabox ollama[2087803]: time=2026-02-05T12:40:12.365+07:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34713"
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: loaded meta data with 52 key-value pairs and 843 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e (version GGUF V3 (latest))
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3next
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   1:                               general.type str              = model
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 40
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   5:                               general.name str              = Qwen3-Coder-Next
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   6:                           general.basename str              = Qwen3-Coder-Next
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   7:                       general.quantized_by str              = Unsloth
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   8:                         general.size_label str              = 512x2.5B
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv   9:                            general.license str              = apache-2.0
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  10:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3-Cod...
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  11:                           general.repo_url str              = https://huggingface.co/unsloth
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  12:                   general.base_model.count u32              = 1
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  13:                  general.base_model.0.name str              = Qwen3 Coder Next
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  14:          general.base_model.0.organization str              = Qwen
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  15:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen3-Cod...
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  16:                               general.tags arr[str,2]       = ["unsloth", "text-generation"]
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  17:                      qwen3next.block_count u32              = 48
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  18:                   qwen3next.context_length u32              = 262144
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  19:                 qwen3next.embedding_length u32              = 2048
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  20:              qwen3next.feed_forward_length u32              = 5120
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  21:             qwen3next.attention.head_count u32              = 16
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  22:          qwen3next.attention.head_count_kv u32              = 2
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  23:                   qwen3next.rope.freq_base f32              = 5000000.000000
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  24: qwen3next.attention.layer_norm_rms_epsilon f32              = 0.000001
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  25:                qwen3next.expert_used_count u32              = 10
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  26:             qwen3next.attention.key_length u32              = 256
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  27:           qwen3next.attention.value_length u32              = 256
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  28:                     qwen3next.expert_count u32              = 512
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  29:       qwen3next.expert_feed_forward_length u32              = 512
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  30: qwen3next.expert_shared_feed_forward_length u32              = 512
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  31:                  qwen3next.ssm.conv_kernel u32              = 4
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  32:                   qwen3next.ssm.state_size u32              = 128
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  33:                  qwen3next.ssm.group_count u32              = 16
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  34:               qwen3next.ssm.time_step_rank u32              = 32
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  35:                   qwen3next.ssm.inner_size u32              = 4096
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  36:             qwen3next.rope.dimension_count u32              = 64
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  37:                       tokenizer.ggml.model str              = gpt2
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  38:                         tokenizer.ggml.pre str              = qwen2
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  39:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  40:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  41:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  42:                tokenizer.ggml.eos_token_id u32              = 151645
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  43:            tokenizer.ggml.padding_token_id u32              = 151654
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  44:               tokenizer.ggml.add_bos_token bool             = false
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  45:                    tokenizer.chat_template str              = {% macro render_extra_keys(json_dict,...
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  46:               general.quantization_version u32              = 2
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  47:                          general.file_type u32              = 12
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  48:                      quantize.imatrix.file str              = Qwen3-Coder-Next-GGUF/imatrix_unsloth...
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  49:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen3-Coder-Next.txt
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  50:             quantize.imatrix.entries_count u32              = 576
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - kv  51:              quantize.imatrix.chunks_count u32              = 154
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - type  f32:  313 tensors
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - type q3_K:  161 tensors
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - type q4_K:  289 tensors
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - type q5_K:   24 tensors
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - type q6_K:    8 tensors
ก.พ. 05 12:40:12 snabox ollama[2087803]: llama_model_loader: - type bf16:   48 tensors
ก.พ. 05 12:40:12 snabox ollama[2087803]: print_info: file format = GGUF V3 (latest)
ก.พ. 05 12:40:12 snabox ollama[2087803]: print_info: file type   = Q3_K - Medium
ก.พ. 05 12:40:12 snabox ollama[2087803]: print_info: file size   = 35.78 GiB (3.86 BPW)
ก.พ. 05 12:40:13 snabox ollama[2087803]: load: printing all EOG tokens:
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151643 ('<|endoftext|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151645 ('<|im_end|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151662 ('<|fim_pad|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151663 ('<|repo_name|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151664 ('<|file_sep|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load: special tokens cache size = 26
ก.พ. 05 12:40:13 snabox ollama[2087803]: load: token to piece cache size = 0.9311 MB
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: arch             = qwen3next
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: vocab_only       = 1
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: no_alloc         = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_d_conv       = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_d_inner      = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_d_state      = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_dt_rank      = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_n_group      = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_dt_b_c_rms   = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: model type       = ?B
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: model params     = 79.67 B
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: general.name     = Qwen3-Coder-Next
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: vocab type       = BPE
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_vocab          = 151936
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_merges         = 151387
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: BOS token        = 11 ','
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOS token        = 151645 '<|im_end|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOT token        = 151645 '<|im_end|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: PAD token        = 151654 '<|vision_pad|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: LF token         = 198 'Ċ'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM REP token    = 151663 '<|repo_name|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151643 '<|endoftext|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151645 '<|im_end|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151662 '<|fim_pad|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151663 '<|repo_name|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151664 '<|file_sep|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: max token length = 256
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_load: vocab only - skipping tensors
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.101+07:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e --port 46527"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.101+07:00 level=INFO source=sched.go:463 msg="system memory" total="62.7 GiB" free="59.3 GiB" free_swap="2.0 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.101+07:00 level=INFO source=sched.go:470 msg="gpu memory" id=GPU-82ffbccc-dc59-6737-e152-b793f91df52c library=CUDA available="21.0 GiB" free="21.5 GiB" minimum="457.0 MiB" overhead="0 B"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.101+07:00 level=INFO source=sched.go:470 msg="gpu memory" id=GPU-c6fd9d07-a143-73ab-6f4e-e89c5ae43279 library=CUDA available="20.9 GiB" free="21.3 GiB" minimum="457.0 MiB" overhead="0 B"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.101+07:00 level=INFO source=server.go:497 msg="loading model" "model layers"=49 requested=-1
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="14.7 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="14.7 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="6.3 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="1.2 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="512.0 MiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="4.0 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="4.0 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.102+07:00 level=INFO source=device.go:272 msg="total memory" size="46.6 GiB"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.114+07:00 level=INFO source=runner.go:965 msg="starting go runner"
ก.พ. 05 12:40:13 snabox ollama[2087803]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
ก.พ. 05 12:40:13 snabox ollama[2087803]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ก.พ. 05 12:40:13 snabox ollama[2087803]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ก.พ. 05 12:40:13 snabox ollama[2087803]: ggml_cuda_init: found 2 CUDA devices:
ก.พ. 05 12:40:13 snabox ollama[2087803]:   Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes, ID: GPU-82ffbccc-dc59-6737-e152-b793f91df52c
ก.พ. 05 12:40:13 snabox ollama[2087803]:   Device 1: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes, ID: GPU-c6fd9d07-a143-73ab-6f4e-e89c5ae43279
ก.พ. 05 12:40:13 snabox ollama[2087803]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.323+07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.324+07:00 level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:46527"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.329+07:00 level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:32768 KvCacheType: NumThreads:6 GPULayers:40[ID:GPU-82ffbccc-dc59-6737-e152-b793f91df52c Layers:20(8..27) ID:GPU-c6fd9d07-a143-73ab-6f4e-e89c5ae43279 Layers:20(28..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.330+07:00 level=INFO source=server.go:1349 msg="waiting for llama runner to start responding"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.330+07:00 level=INFO source=server.go:1383 msg="waiting for server to become available" status="llm server loading model"
ก.พ. 05 12:40:13 snabox ollama[2087803]: ggml_backend_cuda_device_get_memory device GPU-82ffbccc-dc59-6737-e152-b793f91df52c utilizing NVML memory reporting free: 23067033600 total: 23622320128
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2080 Ti) (0000:23:00.0) - 21998 MiB free
ก.พ. 05 12:40:13 snabox ollama[2087803]: ggml_backend_cuda_device_get_memory device GPU-c6fd9d07-a143-73ab-6f4e-e89c5ae43279 utilizing NVML memory reporting free: 22881107968 total: 23622320128
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 2080 Ti) (0000:2d:00.0) - 21821 MiB free
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: loaded meta data with 52 key-value pairs and 843 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e (version GGUF V3 (latest))
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3next
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   1:                               general.type str              = model
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 40
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   5:                               general.name str              = Qwen3-Coder-Next
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   6:                           general.basename str              = Qwen3-Coder-Next
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   7:                       general.quantized_by str              = Unsloth
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   8:                         general.size_label str              = 512x2.5B
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv   9:                            general.license str              = apache-2.0
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  10:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3-Cod...
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  11:                           general.repo_url str              = https://huggingface.co/unsloth
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  12:                   general.base_model.count u32              = 1
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  13:                  general.base_model.0.name str              = Qwen3 Coder Next
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  14:          general.base_model.0.organization str              = Qwen
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  15:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen3-Cod...
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  16:                               general.tags arr[str,2]       = ["unsloth", "text-generation"]
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  17:                      qwen3next.block_count u32              = 48
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  18:                   qwen3next.context_length u32              = 262144
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  19:                 qwen3next.embedding_length u32              = 2048
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  20:              qwen3next.feed_forward_length u32              = 5120
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  21:             qwen3next.attention.head_count u32              = 16
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  22:          qwen3next.attention.head_count_kv u32              = 2
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  23:                   qwen3next.rope.freq_base f32              = 5000000.000000
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  24: qwen3next.attention.layer_norm_rms_epsilon f32              = 0.000001
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  25:                qwen3next.expert_used_count u32              = 10
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  26:             qwen3next.attention.key_length u32              = 256
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  27:           qwen3next.attention.value_length u32              = 256
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  28:                     qwen3next.expert_count u32              = 512
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  29:       qwen3next.expert_feed_forward_length u32              = 512
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  30: qwen3next.expert_shared_feed_forward_length u32              = 512
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  31:                  qwen3next.ssm.conv_kernel u32              = 4
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  32:                   qwen3next.ssm.state_size u32              = 128
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  33:                  qwen3next.ssm.group_count u32              = 16
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  34:               qwen3next.ssm.time_step_rank u32              = 32
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  35:                   qwen3next.ssm.inner_size u32              = 4096
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  36:             qwen3next.rope.dimension_count u32              = 64
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  37:                       tokenizer.ggml.model str              = gpt2
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  38:                         tokenizer.ggml.pre str              = qwen2
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  39:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  40:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  41:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  42:                tokenizer.ggml.eos_token_id u32              = 151645
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  43:            tokenizer.ggml.padding_token_id u32              = 151654
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  44:               tokenizer.ggml.add_bos_token bool             = false
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  45:                    tokenizer.chat_template str              = {% macro render_extra_keys(json_dict,...
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  46:               general.quantization_version u32              = 2
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  47:                          general.file_type u32              = 12
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  48:                      quantize.imatrix.file str              = Qwen3-Coder-Next-GGUF/imatrix_unsloth...
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  49:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen3-Coder-Next.txt
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  50:             quantize.imatrix.entries_count u32              = 576
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - kv  51:              quantize.imatrix.chunks_count u32              = 154
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - type  f32:  313 tensors
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - type q3_K:  161 tensors
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - type q4_K:  289 tensors
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - type q5_K:   24 tensors
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - type q6_K:    8 tensors
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_loader: - type bf16:   48 tensors
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: file format = GGUF V3 (latest)
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: file type   = Q3_K - Medium
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: file size   = 35.78 GiB (3.86 BPW)
ก.พ. 05 12:40:13 snabox ollama[2087803]: load: printing all EOG tokens:
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151643 ('<|endoftext|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151645 ('<|im_end|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151662 ('<|fim_pad|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151663 ('<|repo_name|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load:   - 151664 ('<|file_sep|>')
ก.พ. 05 12:40:13 snabox ollama[2087803]: load: special tokens cache size = 26
ก.พ. 05 12:40:13 snabox ollama[2087803]: load: token to piece cache size = 0.9311 MB
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: arch             = qwen3next
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: vocab_only       = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: no_alloc         = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_ctx_train      = 262144
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_embd           = 2048
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_embd_inp       = 2048
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_layer          = 48
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_head           = 16
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_head_kv        = 2
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_rot            = 64
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_swa            = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: is_swa_any       = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_embd_head_k    = 256
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_embd_head_v    = 256
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_gqa            = 8
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_embd_k_gqa     = 512
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_embd_v_gqa     = 512
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: f_norm_eps       = 0.0e+00
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: f_norm_rms_eps   = 1.0e-06
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: f_clamp_kqv      = 0.0e+00
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: f_max_alibi_bias = 0.0e+00
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: f_logit_scale    = 0.0e+00
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: f_attn_scale     = 0.0e+00
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_ff             = 5120
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_expert         = 512
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_expert_used    = 10
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_expert_groups  = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_group_used     = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: causal attn      = 1
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: pooling type     = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: rope type        = 2
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: rope scaling     = linear
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: freq_base_train  = 5000000.0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: freq_scale_train = 1
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_ctx_orig_yarn  = 262144
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: rope_yarn_log_mul= 0.0000
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: rope_finetuned   = unknown
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_d_conv       = 4
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_d_inner      = 4096
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_d_state      = 128
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_dt_rank      = 32
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_n_group      = 16
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: ssm_dt_b_c_rms   = 0
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: model type       = 80B.A3B
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: model params     = 79.67 B
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: general.name     = Qwen3-Coder-Next
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: vocab type       = BPE
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_vocab          = 151936
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: n_merges         = 151387
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: BOS token        = 11 ','
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOS token        = 151645 '<|im_end|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOT token        = 151645 '<|im_end|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: PAD token        = 151654 '<|vision_pad|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: LF token         = 198 'Ċ'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM REP token    = 151663 '<|repo_name|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151643 '<|endoftext|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151645 '<|im_end|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151662 '<|fim_pad|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151663 '<|repo_name|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: EOG token        = 151664 '<|file_sep|>'
ก.พ. 05 12:40:13 snabox ollama[2087803]: print_info: max token length = 256
ก.พ. 05 12:40:13 snabox ollama[2087803]: load_tensors: loading model tensors, this can take a while... (mmap = true)
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_load: error loading model: missing tensor 'blk.0.ssm_in.weight'
ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_load_from_file_impl: failed to load model
ก.พ. 05 12:40:13 snabox ollama[2087803]: panic: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e
ก.พ. 05 12:40:13 snabox ollama[2087803]: goroutine 14 [running]:
ก.พ. 05 12:40:13 snabox ollama[2087803]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004f1360, {{0xc0003c4d50, 0x2, 0x2}, 0x28, 0x0, 0x1, {0xc0003c4d38, 0x2, 0x2}, ...}, ...)
ก.พ. 05 12:40:13 snabox ollama[2087803]:         github.com/ollama/ollama/runner/llamarunner/runner.go:843 +0x33f
ก.พ. 05 12:40:13 snabox ollama[2087803]: created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 11
ก.พ. 05 12:40:13 snabox ollama[2087803]:         github.com/ollama/ollama/runner/llamarunner/runner.go:934 +0x889
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.832+07:00 level=INFO source=server.go:1383 msg="waiting for server to become available" status="llm server not responding"
ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.862+07:00 level=ERROR source=server.go:303 msg="llama runner terminated" error="exit status 2"
ก.พ. 05 12:40:14 snabox ollama[2087803]: time=2026-02-05T12:40:14.082+07:00 level=INFO source=sched.go:490 msg="Load failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e error="llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'\nllama_model_load_from_file_impl: failed to load model"
ก.พ. 05 12:40:14 snabox ollama[2087803]: [GIN] 2026/02/05 - 12:40:14 | 500 |   1.91733215s |       127.0.0.1 | POST     "/api/generate"

<!-- gh-comment-id:3851210284 -->
05 12:40:13 snabox ollama[2087803]: print_info: max token length = 256 ก.พ. 05 12:40:13 snabox ollama[2087803]: load_tensors: loading model tensors, this can take a while... (mmap = true) ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_load: error loading model: missing tensor 'blk.0.ssm_in.weight' ก.พ. 05 12:40:13 snabox ollama[2087803]: llama_model_load_from_file_impl: failed to load model ก.พ. 05 12:40:13 snabox ollama[2087803]: panic: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e ก.พ. 05 12:40:13 snabox ollama[2087803]: goroutine 14 [running]: ก.พ. 05 12:40:13 snabox ollama[2087803]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0004f1360, {{0xc0003c4d50, 0x2, 0x2}, 0x28, 0x0, 0x1, {0xc0003c4d38, 0x2, 0x2}, ...}, ...) ก.พ. 05 12:40:13 snabox ollama[2087803]: github.com/ollama/ollama/runner/llamarunner/runner.go:843 +0x33f ก.พ. 05 12:40:13 snabox ollama[2087803]: created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 11 ก.พ. 05 12:40:13 snabox ollama[2087803]: github.com/ollama/ollama/runner/llamarunner/runner.go:934 +0x889 ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.832+07:00 level=INFO source=server.go:1383 msg="waiting for server to become available" status="llm server not responding" ก.พ. 05 12:40:13 snabox ollama[2087803]: time=2026-02-05T12:40:13.862+07:00 level=ERROR source=server.go:303 msg="llama runner terminated" error="exit status 2" ก.พ. 05 12:40:14 snabox ollama[2087803]: time=2026-02-05T12:40:14.082+07:00 level=INFO source=sched.go:490 msg="Load failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-1da12f525eebb390b73b595d54875b60c0dbdeb0046be1ede51e85e85e8f368e error="llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'\nllama_model_load_from_file_impl: failed to load model" ก.พ. 
05 12:40:14 snabox ollama[2087803]: [GIN] 2026/02/05 - 12:40:14 | 500 | 1.91733215s | 127.0.0.1 | POST "/api/generate" ```
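When skimming a journalctl dump like the one above, it can help to pull out just the failing tensor names. A minimal sketch using standard tools (the pattern matches the `missing tensor '...'` error format as it appears in these logs; in practice you would pipe `journalctl -u ollama` in instead of the inlined sample line):

```shell
# Extract failing tensor names from an ollama/llama.cpp log.
# grep -o prints only the matching fragment; sort -u deduplicates.
printf '%s\n' "llama_model_load: error loading model: missing tensor 'blk.0.ssm_in.weight'" |
  grep -o "missing tensor '[^']*'" | sort -u
```

Running it against the full log above prints a single line, `missing tensor 'blk.0.ssm_in.weight'`, confirming only one tensor is reported missing.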
Author
Owner

@znowfox commented on GitHub (Feb 5, 2026):

same, worked in rc1, not after


@rick-github commented on GitHub (Feb 6, 2026):

https://github.com/ollama/ollama/issues/14049#issuecomment-3856418375


@znowfox commented on GitHub (Feb 6, 2026):

for us peasants dependent on Q1 quants (24 GB), we need those unsloth/REAPs \o/


@mymel2001-holder commented on GitHub (Feb 8, 2026):

TL;DR solution (at least temporarily, until the "vendor sync" happens): downgrade to version 0.15.5-rc1.

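On Linux, the downgrade suggested above can be scripted with the install script's `OLLAMA_VERSION` override (a sketch; the override is documented in the ollama FAQ, but whether the rc1 artifact is still published for your platform is an assumption worth checking first):

```shell
# Pin a specific ollama release via the official install script.
# OLLAMA_VERSION is the documented override; the exact tag comes from
# this thread and may not be available for every platform.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.15.5-rc1 sh

# Confirm the downgrade took effect:
ollama --version
```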

@GitUsers1234 commented on GitHub (Feb 9, 2026):

I can't find 0.15.5-rc1, can anyone share it? Much appreciated.


@Johnreidsilver commented on GitHub (Feb 9, 2026):

I think rc1 had other problems; at least I hit a VRAM memory leak:
https://github.com/ollama/ollama/issues/14044

Haven't tried the latest 0.15.6, but on the previous release I'm getting this error for the unsloth quant:

ollama --version
ollama version is 0.15.5

ollama run sparksammy/qwen3-coder-next-unsloth:tiny-hotfixed
Error: 500 Internal Server Error: llama runner process no longer running: 2 error loading model: missing tensor 'blk.0.ssm_in.weight'
llama_model_load_from_file_impl: failed to load model


@znowfox commented on GitHub (Feb 9, 2026):

yep same 15.6


@Wladastic commented on GitHub (Feb 11, 2026):

I have the same issue with 0.15.4:

```
ollama run hf.co/lovedheart/Qwen3-Coder-Next-REAP-40B-A3B-GGUF:Q4_K_XL
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
llama_model_load_from_file_impl: failed to load model
```

@lapo-luchini commented on GitHub (Feb 12, 2026):

Won't change until [we see new commits in the llama folder](https://github.com/ollama/ollama/commits/main/llama).


@rick-github commented on GitHub (Feb 12, 2026):

Possibly in #14134.


@forresthopkinsa commented on GitHub (Feb 13, 2026):

same issue 0.16.1


@mmontes11 commented on GitHub (Feb 14, 2026):

Same issue in 0.16.1. I can confirm that 0.15.5-rc1 works


@m1ali1373 commented on GitHub (Feb 15, 2026):

Does anyone have version 0.15.5-rc1? How can I download it?


@lapo-luchini commented on GitHub (Feb 15, 2026):

[v0.15.5-rc1 sources](https://github.com/ollama/ollama/releases/tag/v0.15.5-rc1) can be downloaded from GitHub, but then you have to build it yourself.


@ttait1 commented on GitHub (Feb 17, 2026):

It's not great that ollama lags so far behind the upstream llama in capabilities. It can't run a lot of cutting edge models that llama supports. People are going to switch away. Is this patch harder than I think or just not a priority because they are looking towards cloud model hosting and agents now?


@snapo commented on GitHub (Feb 17, 2026):

> It's not great that ollama lags so far behind the upstream llama in capabilities. It can't run a lot of cutting edge models that llama supports. People are going to switch away. Is this patch harder than I think or just not a priority because they are looking towards cloud model hosting and agents now?

Looks like since the VC takeover the only thing they care about is getting as many people as possible onto ollama's cloud LLMs to fill the VC pockets...
You can also see it in how heavily cloud models are promoted on the models page: even when you sort by "new" you still see cloud models...

Maybe it's time to vibecode an ollama replacement with a web GUI.


@mymel2001-holder commented on GitHub (Feb 21, 2026):

Vibe code an ollama replacement? *laughs in sllama.* Some of us already did that.

No, really, good luck with that. You'll need it. I tried, and it didn't end up very good.

Also, couldn't help but notice someone here was using one of the models in my collection (a sparksammy model). This sure wasn't on my metaphorical bingo card.

Anyways, the easiest way to get 0.15.5-rc1 is to use Docker. No, really, that's the easiest way to do so. It's precompiled, and all you have to do is pull the "0.15.5-rc1" tag instead of "latest".

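The Docker route above can be sketched as follows (`ollama/ollama` is the official Docker Hub image; the `0.15.5-rc1` tag is taken from this comment and worth verifying on Docker Hub before relying on it):

```shell
# Pull the pinned rc tag instead of :latest.
docker pull ollama/ollama:0.15.5-rc1

# Run it the usual way; add --gpus=all for NVIDIA GPU access
# (requires the NVIDIA Container Toolkit on the host).
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama:0.15.5-rc1
```

Pinning the tag also prevents a later `docker pull` from silently upgrading you back to a release with the regression.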

@ttait1 commented on GitHub (Feb 22, 2026):

I pulled down 0.16.3, applied the patch they reverted, and built it; it works fine. Two hunks were rejected because the changes were already there. I didn't test GLM-4.7-flash, which they claimed had performance issues, but I did notice that after I committed the changes locally it spat this out:

`delete mode 100644 llama/patches/0032-ggml-enable-MLA-flash-attention-for-GLM-4.7-flash.patch`

But it runs all the quants of qwen3-coder-next, including the REAP versions, which is what I wanted.

Here is what I did (on Linux). You may need to install dependencies first.

```
git clone --recurse-submodules https://github.com/ollama/ollama.git
cd ollama
git switch -d v0.16.3
nano version/version.go  # set version string here?
git diff 8f4a0081398d89a88a34d7c553b74c6578d212be ef00199fb4e6d045e11e76baaab9049f3234939d > patch_13832.diff
git apply --reject patch_13832.diff
cmake --build build
go build .
```
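The "install dependencies first" caveat above roughly means a Go toolchain, CMake, and a C/C++ compiler. The package names below are assumptions for Debian/Ubuntu, so adjust for your distro; CUDA or ROCm toolkits are extra if you want GPU backends:

```shell
# Assumed prerequisites for the source build above (Debian/Ubuntu names).
sudo apt-get update
sudo apt-get install -y golang-go cmake build-essential git

# After `cmake --build build` and `go build .`, sanity-check the binary:
./ollama --version
```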

@ttait1 commented on GitHub (Feb 22, 2026):

> I pulled down 0.16.3 and applied the patch they reverted, built and it works fine. 2 hunks were rejected because the changes were already there. Didn't test GLM-4.7-flash which they claimed had performance issues, but I did notice that after I committed the

I'm getting 34 tk/s initially, slowing to 16 tk/s as the context grows to ~12000, on an RTX 3090 with unsloth glm4.7-flash-REAP:IQ4_XS, so the issue is still there. For example, with hf.co/mradermacher/Qwen3-Coder-Next-REAP-40B-A3B-i1-GGUF:Q3_K_M it starts at 45 tk/s and only drops to 43.


@lapo-luchini commented on GitHub (Feb 26, 2026):

> Possibly in [#14134](https://github.com/ollama/ollama/pull/14134).

I just tried this PR with success on `UD-Q3_K_XL`. 🎉
(on a MacMini M4 with 48GB RAM, where the Q4 model probably wouldn't fit)

Reference: github-starred/ollama#34957