[GH-ISSUE #13321] Error: 500 Internal Server Error: unable to load model: Ministral-3 #70857

Closed
opened 2026-05-04 23:13:53 -05:00 by GiteaMirror · 23 comments

Originally created by @SvenMeyer on GitHub (Dec 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13321

What is the issue?

Error: 500 Internal Server Error: unable to load model

Linux Manjaro
NVIDIA RTX4070 8GB
Intel 13700H CPU
RAM 128 GB

Relevant log output

$ ollama run hf.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF:Q4_K_XL --verbose
pulling manifest 
pulling 381f6e188ec2: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 8.4 GB                         
pulling 554f52849238: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  142 B                         
pulling bede7910d691: 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ ▏ 877 MB/878 MB  6.7 MB/s      0s
pulling 9ad09f3bb5fa: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   25 B                         
pulling 73b88741ba35: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  635 B                         
verifying sha256 digest 
writing manifest 
success 
Error: 500 Internal Server Error: unable to load model: /opt/AI-MODELS/ollama-models/blobs/sha256-381f6e188ec2689a433fda79b82986542b599662aee8fed51157d4bab74c8f72
$ ollama --version                                                              
ollama version is 0.13.1

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.13.1

GiteaMirror added the bug and needs more info labels 2026-05-04 23:13:54 -05:00

@SvenMeyer commented on GitHub (Dec 4, 2025):

Same result with the smaller model

$ ollama --version                                                              
ollama version is 0.13.1
$ ollama run hf.co/unsloth/Ministral-3-8B-Reasoning-2512-GGUF:Q4_K_XL --verbose
pulling manifest 
pulling 00300cf53173: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 5.3 GB                         
pulling 554f52849238: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  142 B                         
pulling 624657f83080: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 857 MB                         
pulling 9ad09f3bb5fa: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   25 B                         
pulling 0addd1b589f1: 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  635 B                         
verifying sha256 digest 
writing manifest 
success 
Error: 500 Internal Server Error: unable to load model: /opt/AI-MODELS/ollama-models/blobs/sha256-00300cf53173be004b77de572ffd31e58289abe59d0ae6c23f46e397652aff11

@rick-github commented on GitHub (Dec 4, 2025):

Server log (https://docs.ollama.com/troubleshooting) will help in debugging.
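
On a systemd install like this one, the full server log can usually be captured with journalctl (a minimal sketch, assuming the default service name `ollama`):

$ journalctl -u ollama --no-pager > ollama.log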

@SvenMeyer commented on GitHub (Dec 4, 2025):

Sure 👍

Dec 04 14:17:56 xps15 ollama[996]: [GIN] 2025/12/04 - 14:17:56 | 200 |       24.05µs |       127.0.0.1 | HEAD     "/"
Dec 04 14:17:56 xps15 ollama[996]: [GIN] 2025/12/04 - 14:17:56 | 200 |   44.707741ms |       127.0.0.1 | POST     "/api/show"
Dec 04 14:17:56 xps15 ollama[996]: [GIN] 2025/12/04 - 14:17:56 | 200 |   42.470383ms |       127.0.0.1 | POST     "/api/show"
Dec 04 14:17:57 xps15 ollama[996]: time=2025-12-04T14:17:57.020+11:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34371"
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: loaded meta data with 48 key-value pairs and 309 tensors from /opt/AI-MODELS/ollama-models/blobs/sha256-00300cf53173be004b77de572ffd31e58289abe59d0ae6c23f46e397652aff11 (version GGUF V3 (latest))
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   0:                       general.architecture str              = mistral3
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   1:                               general.type str              = model
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   2:                               general.name str              = Ministral-3-8B-Reasoning-2512
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   3:                            general.version str              = 2512
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   4:                           general.finetune str              = Reasoning
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   5:                           general.basename str              = Ministral-3-8B-Reasoning-2512
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   6:                       general.quantized_by str              = Unsloth
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   7:                         general.size_label str              = 8B
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   8:                           general.repo_url str              = https://huggingface.co/unsloth
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv   9:                       mistral3.block_count u32              = 34
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  10:                    mistral3.context_length u32              = 262144
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  11:                  mistral3.embedding_length u32              = 4096
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  12:               mistral3.feed_forward_length u32              = 14336
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  13:              mistral3.attention.head_count u32              = 32
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  14:           mistral3.attention.head_count_kv u32              = 8
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  15:  mistral3.attention.layer_norm_rms_epsilon f32              = 0.000010
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  16:              mistral3.attention.key_length u32              = 128
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  17:            mistral3.attention.value_length u32              = 128
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  18:                        mistral3.vocab_size u32              = 131072
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  19:              mistral3.rope.dimension_count u32              = 128
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  20:                 mistral3.rope.scaling.type str              = yarn
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  21:               mistral3.rope.scaling.factor f32              = 16.000000
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  22:       mistral3.rope.scaling.yarn_beta_fast f32              = 32.000000
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  23:       mistral3.rope.scaling.yarn_beta_slow f32              = 1.000000
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  24:  mistral3.rope.scaling.yarn_log_multiplier f32              = 1.000000
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  25: mistral3.rope.scaling.original_context_length u32              = 16384
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  26:                    mistral3.rope.freq_base f32              = 1000000.000000
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  27:       mistral3.attention.temperature_scale f32              = 0.100000
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  28:                       tokenizer.ggml.model str              = gpt2
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  29:                         tokenizer.ggml.pre str              = tekken
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  30:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "[INST]", "[...
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  31:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
Dec 04 14:17:59 xps15 ollama[996]: [132B blob data]
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  33:                tokenizer.ggml.bos_token_id u32              = 1
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  34:                tokenizer.ggml.eos_token_id u32              = 2
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  35:            tokenizer.ggml.unknown_token_id u32              = 0
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 11
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  38:               tokenizer.ggml.add_sep_token bool             = false
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  39:               tokenizer.ggml.add_eos_token bool             = false
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  40:                    tokenizer.chat_template str              = {#- Unsloth template fixes #}\n{#- Def...
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  41:            tokenizer.ggml.add_space_prefix bool             = false
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  42:               general.quantization_version u32              = 2
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  43:                          general.file_type u32              = 15
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  44:                      quantize.imatrix.file str              = Ministral-3-8B-Reasoning-2512-GGUF/im...
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  45:                   quantize.imatrix.dataset str              = unsloth_calibration_Ministral-3-8B-Re...
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  46:             quantize.imatrix.entries_count u32              = 238
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - kv  47:              quantize.imatrix.chunks_count u32              = 139
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - type  f32:   69 tensors
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - type q4_K:  145 tensors
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - type q5_K:   25 tensors
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - type q6_K:   50 tensors
Dec 04 14:17:59 xps15 ollama[996]: llama_model_loader: - type iq4_xs:   20 tensors
Dec 04 14:17:59 xps15 ollama[996]: print_info: file format = GGUF V3 (latest)
Dec 04 14:17:59 xps15 ollama[996]: print_info: file type   = Q4_K - Medium
Dec 04 14:17:59 xps15 ollama[996]: print_info: file size   = 4.92 GiB (4.98 BPW)
Dec 04 14:17:59 xps15 ollama[996]: llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'mistral3'
Dec 04 14:17:59 xps15 ollama[996]: llama_model_load_from_file_impl: failed to load model
Dec 04 14:17:59 xps15 ollama[996]: time=2025-12-04T14:17:59.061+11:00 level=INFO source=sched.go:425 msg="NewLlamaServer failed" model=/opt/AI-MODELS/ollama-models/blobs/sha256-00300cf53173be004b77de572ffd31e58289abe59d0ae6c23f46e397652aff11 error="unable to load model: /opt/AI-MODELS/ollama-models/blobs/sha256-00300cf53173be004b77de572ffd31e58289abe59d0ae6c23f46e397652aff11"
Dec 04 14:17:59 xps15 ollama[996]: [GIN] 2025/12/04 - 14:17:59 | 500 |  2.124164696s |       127.0.0.1 | POST     "/api/generate"

@SvenMeyer commented on GitHub (Dec 4, 2025):

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'mistral3'

@SvenMeyer commented on GitHub (Dec 4, 2025):

The older version had "model_type": "mistral":
https://huggingface.co/mistralai/Ministral-8B-Instruct-2410/blob/main/config.json

Ministral 3 has "model_type": "mistral3":
https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512/blob/main/config.json
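
A quick way to confirm the declared architecture, assuming the Hugging Face repo is publicly readable (gated repos will reject the request):

$ curl -s https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512/raw/main/config.json | grep model_type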

@rick-github commented on GitHub (Dec 4, 2025):

Models downloaded from HF are typically in split format, with separate text and vision weights. The ollama engine only supports fused multi-modal models, so the server tries to use the llama.cpp engine to load the model, which doesn't support the mistral3 architecture yet.

@SvenMeyer commented on GitHub (Dec 4, 2025):

@rick-github thanks for the update, so this is again a "split (file) format" issue?
... or would adding mistral3 support "somehow" be possible "easily" anyway?

@jeepshop commented on GitHub (Dec 4, 2025):

Has nothing to do with split models!!! The unsloth model he is trying is a single-file GGUF. I have the exact same issue.

Here is a link to the fused model I have been trying, and I get the exact same "Error: 500 Internal Server Error: unable to load model:"

https://huggingface.co/unsloth/Ministral-3-14B-Instruct-2512-GGUF/blob/main/Ministral-3-14B-Instruct-2512-UD-Q6_K_XL.gguf

@jessegross commented on GitHub (Dec 4, 2025):

time=2025-12-04T11:07:07.496-08:00 level=DEBUG source=sched.go:211 msg="loading first model" model=/Users/jesse/.ollama/models/blobs/sha256-6da70d68df738a4727c5ace74690ecf4f9c3c6facc291d4ce1d547c6bd94b5cc
time=2025-12-04T11:07:07.513-08:00 level=DEBUG source=server.go:154 msg="model not yet supported by Ollama engine, switching to compatibility mode" model=/Users/jesse/.ollama/models/blobs/sha256-6da70d68df738a4727c5ace74690ecf4f9c3c6facc291d4ce1d547c6bd94b5cc error="split vision models aren't supported"

Note the last part:
error="split vision models aren't supported"

@rick-github commented on GitHub (Dec 4, 2025):

> Has nothing to do with split models!!! The unsloth model he is trying is a single-file GGUF. I have the exact same issue.

$ ollama show --modelfile hf.co/unsloth/Ministral-3-14B-Instruct-2512-GGUF:Q6_K_XL | grep ^FROM
FROM /root/.ollama/models/blobs/sha256-6da70d68df738a4727c5ace74690ecf4f9c3c6facc291d4ce1d547c6bd94b5cc
FROM /root/.ollama/models/blobs/sha256-b0d2d17d39fa1fff5c1177b672c351cac55fb15b9794f03a8a551df185bffc10

@jeepshop commented on GitHub (Dec 4, 2025):

Ok, then why does hf.co/unsloth/Devstral-Small-2507-GGUF:Q6_K_XL work in Ollama - it also shows two blobs!!!

$ ollama run hf.co/unsloth/Devstral-Small-2507-GGUF:Q6_K_XL
>>> What kind of model are you?
I am a Large Language Model trained by Mistral AI.
$ ollama show --modelfile hf.co/unsloth/Devstral-Small-2507-GGUF:Q6_K_XL
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM hf.co/unsloth/Devstral-Small-2507-GGUF:Q6_K_XL

FROM /usr/share/ollama/.ollama/models/blobs/sha256-05d141bcea979202ad6bc2628849d211798ab03b3226b8e68fdc58bdb6d3fb93
FROM /usr/share/ollama/.ollama/models/blobs/sha256-402640c0a0e4e00cdb1e94349adf7c2289acab05fee2b20ee635725ef588f994
TEMPLATE {{ if .System }}<s>[SYSTEM_PROMPT]{{ .System }}[/SYSTEM_PROMPT]{{ end }}{{ if .Prompt }}[INST]{{ .Prompt }}[/INST]{{ end }}{{ .Response }}</s>
PARAMETER stop <s>
PARAMETER stop [INST]

@rick-github commented on GitHub (Dec 4, 2025):

The llama.cpp engine supports the devstral architecture but does not yet support mistral3. When a split model is loaded, the ollama server invokes the llama.cpp engine to run it; if that engine doesn't support the model's architecture, the load fails. Devstral is a split model and llama.cpp supports its architecture, so the load succeeds. Ministral-3 is also a split model, but llama.cpp does not yet support mistral3, so the load fails.
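
The architecture recorded in a blob can also be checked directly with llama.cpp's GGUF tooling (a sketch, assuming the `gguf` Python package and its `gguf-dump` script):

$ pip install gguf
$ gguf-dump /root/.ollama/models/blobs/sha256-6da70d68df738a4727c5ace74690ecf4f9c3c6facc291d4ce1d547c6bd94b5cc | grep general.architecture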

@jeepshop commented on GitHub (Dec 4, 2025):

Huh, I was under the impression that multi-part models weren't supported, period, as the currently open ticket suggests:

https://github.com/ollama/ollama/issues/5245

@rick-github commented on GitHub (Dec 4, 2025):

#5245 is a different issue. In 5245 the issue is about large models that have been sliced into smaller pieces for better recovery when a chunk download fails. The issue here is about multi-modal models that have two (or more, depending on the number of modes) separate collections of layers. The confusion is about "split", which is why I personally use the term "sharded" for #5245-style models and "fused" when talking about multi-modal models.

@jeepshop commented on GitHub (Dec 4, 2025):

Gotcha - always learning.

TL;DR: I need to wait until someone merges in the newer llama.cpp (b7271, https://github.com/ggml-org/llama.cpp/releases/tag/b7271) in order to get Ministral-3 support?

@rick-github commented on GitHub (Dec 4, 2025):

llama.cpp got mistral3 support in b7216 (https://github.com/ggml-org/llama.cpp/releases/tag/b7216), so the next ollama vendor sync at or after that point will allow split mistral3 models (separate text and vision weights) to be supported in ollama. #12992 just synced to b7209 (https://github.com/ggml-org/llama.cpp/releases/tag/b7209).

@SvenMeyer commented on GitHub (Dec 11, 2025):

Looks like the small model works now - maybe because it fits completely into my 8GB VRAM?

$ ollama run hf.co/unsloth/Ministral-3-8B-Reasoning-2512-GGUF:Q4_K_XL --verbose
>>> write a typescript program to upload a file to ipfs and return the cid
# TypeScript Program to Upload a File to IPFS and Return its CID

Here's a complete TypeScript program that uses the `ipfs-http-client` library to upload a file to IPFS and get its Content Identifier (CID) in return:
...

But the next size up gives me an error:

$ ollama run hf.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF:Q4_K_XL --verbose
pulling manifest 
pulling 381f6e188ec2: 100% ▕██████████████████████████████████████████████████████▏ 8.4 GB                         
pulling 554f52849238: 100% ▕██████████████████████████████████████████████████████▏  142 B                         
pulling bede7910d691: 100% ▕██████████████████████████████████████████████████████▏ 878 MB                         
pulling 9ad09f3bb5fa: 100% ▕██████████████████████████████████████████████████████▏   25 B                         
pulling 73b88741ba35: 100% ▕██████████████████████████████████████████████████████▏  635 B                         
verifying sha256 digest 
writing manifest 
success 
Error: 500 Internal Server Error: llama runner process has terminated: GGML_ASSERT(buffer) failed
$ 

$ journalctl -u ollama -f
Dec 11 23:26:22 xps15 ollama[36953]: r15    0x7f355e1fc990
Dec 11 23:26:22 xps15 ollama[36953]: rip    0x7f35a5c9890c
Dec 11 23:26:22 xps15 ollama[36953]: rflags 0x246
Dec 11 23:26:22 xps15 ollama[36953]: cs     0x33
Dec 11 23:26:22 xps15 ollama[36953]: fs     0x0
Dec 11 23:26:22 xps15 ollama[36953]: gs     0x0
Dec 11 23:26:23 xps15 ollama[36953]: time=2025-12-11T23:26:23.008+11:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server not responding"
Dec 11 23:26:23 xps15 ollama[36953]: time=2025-12-11T23:26:23.130+11:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="exit status 2"
Dec 11 23:26:23 xps15 ollama[36953]: time=2025-12-11T23:26:23.259+11:00 level=INFO source=sched.go:470 msg="Load failed" model=/opt/AI-MODELS/ollama-models/blobs/sha256-381f6e188ec2689a433fda79b82986542b599662aee8fed51157d4bab74c8f72 error="llama runner process has terminated: GGML_ASSERT(buffer) failed"
Dec 11 23:26:23 xps15 ollama[36953]: [GIN] 2025/12/11 - 23:26:23 | 500 |  4.276469881s |       127.0.0.1 | POST     "/api/generate"

@rick-github commented on GitHub (Jan 1, 2026):

Include the full log.

@rick-github commented on GitHub (Jan 14, 2026):

$ ollama -v
ollama version is 0.13.3
$ ollama run hf.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF:Q4_K_XL hello
Hello! How can I assist you today?

@SvenMeyer commented on GitHub (Jan 14, 2026):

@rick-github Does not work for me. Maybe because I have only 8GB VRAM?

$ ollama --version                                                              
ollama version is 0.14.0
$ ollama run hf.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF:Q4_K_XL --verbose
Error: 500 Internal Server Error: llama runner process has terminated: GGML_ASSERT(buffer) failed

ollama-mistral.log (https://github.com/user-attachments/files/24612882/ollama-mistral.log)

@rick-github commented on GitHub (Jan 14, 2026):

Jan 14 22:54:32 xps15 ollama[168083]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 837.36 MiB on device 0: cudaMalloc failed: out of memory

The model is being run with the llama.cpp backend, which is sometimes inaccurate in estimating how much memory it needs. This can result in OOMs as seen here. See https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288 for ways to mitigate.
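
Two quick checks/mitigations, sketched here (the context length is just an example value to tune, and assumes a recent ollama that honors OLLAMA_CONTEXT_LENGTH):

$ nvidia-smi --query-gpu=memory.used,memory.total --format=csv   # confirm actual VRAM headroom
$ OLLAMA_CONTEXT_LENGTH=8192 ollama serve                        # try a smaller default context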

@SvenMeyer commented on GitHub (Jan 14, 2026):

@rick-github How can I specify via the CLI how many layers should be offloaded to the GPU, before ollama tries to apply the automatic estimate and then crashes?

Or would I need to try various configs in a Modelfile?

@rick-github commented on GitHub (Jan 14, 2026):

Create a new model as shown in https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650.
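
The linked approach, roughly (a sketch: the model name is made up, and `num_gpu 20` is an arbitrary starting point to adjust down until the model loads):

$ cat > Modelfile <<'EOF'
FROM hf.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF:Q4_K_XL
PARAMETER num_gpu 20
EOF
$ ollama create ministral-14b-lowvram -f Modelfile
$ ollama run ministral-14b-lowvram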

Reference: github-starred/ollama#70857