[GH-ISSUE #14232] Model request too large for system #55779

Closed
opened 2026-04-29 09:43:31 -05:00 by GiteaMirror · 5 comments

Originally created by @ka-admin on GitHub (Feb 13, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14232

What is the issue?

Hi

Some models downloaded from Hugging Face stopped working around version 0.15.5 (or possibly a later release). The Ollama engine reports that I don't have enough memory to run them (GLM 4.7 Q6_K, MiniMax 2.1 Q8), while models downloaded from the Ollama site work fine (Qwen3 235B Thinking Q8). I didn't change anything on my server; I only updated Ollama to the latest release (0.16.1).

RAM: 256 GB
GPUs: 3 (80 GB VRAM total)

I suspect it started when the automatic context-size detection mechanism was changed.
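
For context, here is a rough back-of-envelope estimate of the KV-cache cost at this model's full training context, using the GGUF metadata from the log below (93 blocks, 8 KV heads, 128-dim keys and values) and assuming an unquantized f16 cache — illustrative only, not Ollama's exact accounting:

```shell
# bytes per token = blocks * kv_heads * (key_len + value_len) * 2 bytes (f16)
echo "$(( 93 * 8 * (128 + 128) * 2 )) bytes per token"          # 380928
# cache at the full n_ctx_train of 202752 tokens, in GiB
echo "$(( 380928 * 202752 / 1024 / 1024 / 1024 )) GiB"          # ~71
```

Combined with the 274.15 GiB of weights, a large automatically chosen context can easily push the total past what this system can hold.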

Relevant log output

Feb 13 13:06:49 ollama[18421]: [GIN] 2026/02/13 - 13:06:49 | 500 |  4.279961357s |  192.168.127.20 | POST     "/api/chat"
Feb 13 13:06:59 ollama[18421]: [GIN] 2026/02/13 - 13:06:59 | 200 |     366.193µs |  192.168.127.20 | GET      "/api/tags"
Feb 13 13:06:59 ollama[18421]: [GIN] 2026/02/13 - 13:06:59 | 200 |      10.721µs |  192.168.127.20 | GET      "/api/ps"
Feb 13 13:07:08 ollama[18421]: [GIN] 2026/02/13 - 13:07:08 | 200 |     359.132µs |  192.168.127.20 | GET      "/api/tags"
Feb 13 13:07:08 ollama[18421]: [GIN] 2026/02/13 - 13:07:08 | 200 |      17.001µs |  192.168.127.20 | GET      "/api/ps"
Feb 13 13:11:05 ollama[18421]: [GIN] 2026/02/13 - 13:11:05 | 200 |      21.911µs |       127.0.0.1 | HEAD     "/"
Feb 13 13:11:05 ollama[18421]: [GIN] 2026/02/13 - 13:11:05 | 200 |   24.847743ms |       127.0.0.1 | POST     "/api/create"
Feb 13 13:11:35 ollama[18421]: [GIN] 2026/02/13 - 13:11:35 | 200 |     431.483µs |  192.168.127.20 | GET      "/api/tags"
Feb 13 13:11:35 ollama[18421]: [GIN] 2026/02/13 - 13:11:35 | 200 |         9.5µs |  192.168.127.20 | GET      "/api/ps"
Feb 13 13:11:36 ollama[18421]: [GIN] 2026/02/13 - 13:11:36 | 200 |       37.92µs |  192.168.127.20 | GET      "/api/version"
Feb 13 13:11:43 ollama[18421]: time=2026-02-13T13:11:43.925+03:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39963"
Feb 13 13:11:44 ollama[18421]: llama_model_loader: loaded meta data with 60 key-value pairs and 1761 tensors from /ai/llm/models/blobs/sha256-1379d14cce58314d62f3a7fde521ac7e45a6be1d3114abddfc240e6cbd8e4ae4 (version GGUF V3 (latest))
Feb 13 13:11:44 ollama[18421]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   0:                       general.architecture str              = glm4moe
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   2:                      general.sampling.temp f32              = 1.000000
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   3:                               general.name str              = Glm-4.7
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   4:                            general.version str              = 4.7
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   5:                           general.basename str              = Glm-4.7
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   6:                       general.quantized_by str              = Unsloth
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   7:                         general.size_label str              = 160x21B
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   8:                            general.license str              = mit
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv   9:                           general.repo_url str              = https://huggingface.co/unsloth
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  10:                   general.base_model.count u32              = 1
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  11:                  general.base_model.0.name str              = GLM 4.7
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  12:               general.base_model.0.version str              = 4.7
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  13:          general.base_model.0.organization str              = Zai Org
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  14:              general.base_model.0.repo_url str              = https://huggingface.co/zai-org/GLM-4.7
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  15:                               general.tags arr[str,2]       = ["unsloth", "text-generation"]
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  16:                          general.languages arr[str,2]       = ["en", "zh"]
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  17:                        glm4moe.block_count u32              = 93
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  18:                     glm4moe.context_length u32              = 202752
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  19:                   glm4moe.embedding_length u32              = 5120
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  20:                glm4moe.feed_forward_length u32              = 12288
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  21:               glm4moe.attention.head_count u32              = 96
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  22:            glm4moe.attention.head_count_kv u32              = 8
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  23:                     glm4moe.rope.freq_base f32              = 1000000.000000
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  24:   glm4moe.attention.layer_norm_rms_epsilon f32              = 0.000010
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  25:                  glm4moe.expert_used_count u32              = 8
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  26:                 glm4moe.expert_group_count u32              = 1
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  27:            glm4moe.expert_group_used_count u32              = 1
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  28:               glm4moe.attention.key_length u32              = 128
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  29:             glm4moe.attention.value_length u32              = 128
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  30:               glm4moe.rope.dimension_count u32              = 64
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  31:                       glm4moe.expert_count u32              = 160
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  32:         glm4moe.expert_feed_forward_length u32              = 1536
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  33:                glm4moe.expert_shared_count u32              = 1
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  34:          glm4moe.leading_dense_block_count u32              = 3
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  35:                 glm4moe.expert_gating_func u32              = 2
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  36:               glm4moe.expert_weights_scale f32              = 2.500000
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  37:                glm4moe.expert_weights_norm bool             = true
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  38:               glm4moe.nextn_predict_layers u32              = 1
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  39:                       tokenizer.ggml.model str              = gpt2
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  40:                         tokenizer.ggml.pre str              = glm4
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  41:                      tokenizer.ggml.tokens arr[str,151552]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  42:                  tokenizer.ggml.token_type arr[i32,151552]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  43:                      tokenizer.ggml.merges arr[str,318088]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  44:                tokenizer.ggml.eos_token_id u32              = 151329
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  45:            tokenizer.ggml.padding_token_id u32              = 151330
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  46:                tokenizer.ggml.bos_token_id u32              = 151331
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  47:                tokenizer.ggml.eot_token_id u32              = 151336
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  48:            tokenizer.ggml.unknown_token_id u32              = 151329
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  49:                tokenizer.ggml.eom_token_id u32              = 151338
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  50:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n[gMASK]<...
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  51:               general.quantization_version u32              = 2
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  52:                          general.file_type u32              = 18
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  53:                      quantize.imatrix.file str              = GLM-4.7-GGUF/imatrix_unsloth.gguf
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  54:                   quantize.imatrix.dataset str              = unsloth_calibration_GLM-4.7.txt
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  55:             quantize.imatrix.entries_count u32              = 1000
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  56:              quantize.imatrix.chunks_count u32              = 86
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  57:                                   split.no u16              = 0
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  58:                        split.tensors.count i32              = 1761
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - kv  59:                                split.count u16              = 0
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - type  f32:  835 tensors
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - type q8_0:   90 tensors
Feb 13 13:11:44 ollama[18421]: llama_model_loader: - type q6_K:  836 tensors
Feb 13 13:11:44 ollama[18421]: print_info: file format = GGUF V3 (latest)
Feb 13 13:11:44 ollama[18421]: print_info: file type   = Q6_K
Feb 13 13:11:44 ollama[18421]: print_info: file size   = 274.15 GiB (6.57 BPW)
Feb 13 13:11:44 ollama[18421]: load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 13 13:11:44 ollama[18421]: load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 13 13:11:44 ollama[18421]: load: printing all EOG tokens:
Feb 13 13:11:44 ollama[18421]: load:   - 151329 ('<|endoftext|>')
Feb 13 13:11:44 ollama[18421]: load:   - 151336 ('<|user|>')
Feb 13 13:11:44 ollama[18421]: load:   - 151338 ('<|observation|>')
Feb 13 13:11:44 ollama[18421]: load: special tokens cache size = 36
Feb 13 13:11:44 ollama[18421]: load: token to piece cache size = 0.9713 MB
Feb 13 13:11:44 ollama[18421]: print_info: arch             = glm4moe
Feb 13 13:11:44 ollama[18421]: print_info: vocab_only       = 1
Feb 13 13:11:44 ollama[18421]: print_info: no_alloc         = 0
Feb 13 13:11:44 ollama[18421]: print_info: model type       = ?B
Feb 13 13:11:44 ollama[18421]: print_info: model params     = 358.34 B
Feb 13 13:11:44 ollama[18421]: print_info: general.name     = Glm-4.7
Feb 13 13:11:44 ollama[18421]: print_info: vocab type       = BPE
Feb 13 13:11:44 ollama[18421]: print_info: n_vocab          = 151552
Feb 13 13:11:44 ollama[18421]: print_info: n_merges         = 318088
Feb 13 13:11:44 ollama[18421]: print_info: BOS token        = 151331 '[gMASK]'
Feb 13 13:11:44 ollama[18421]: print_info: EOS token        = 151329 '<|endoftext|>'
Feb 13 13:11:44 ollama[18421]: print_info: EOT token        = 151336 '<|user|>'
Feb 13 13:11:44 ollama[18421]: print_info: EOM token        = 151338 '<|observation|>'
Feb 13 13:11:44 ollama[18421]: print_info: UNK token        = 151329 '<|endoftext|>'
Feb 13 13:11:44 ollama[18421]: print_info: PAD token        = 151330 '[MASK]'
Feb 13 13:11:44 ollama[18421]: print_info: LF token         = 198 'Ċ'
Feb 13 13:11:44 ollama[18421]: print_info: FIM PRE token    = 151347 '<|code_prefix|>'
Feb 13 13:11:44 ollama[18421]: print_info: FIM SUF token    = 151349 '<|code_suffix|>'
Feb 13 13:11:44 ollama[18421]: print_info: FIM MID token    = 151348 '<|code_middle|>'
Feb 13 13:11:44 ollama[18421]: print_info: EOG token        = 151329 '<|endoftext|>'
Feb 13 13:11:44 ollama[18421]: print_info: EOG token        = 151336 '<|user|>'
Feb 13 13:11:44 ollama[18421]: print_info: EOG token        = 151338 '<|observation|>'
Feb 13 13:11:44 ollama[18421]: print_info: max token length = 1024
Feb 13 13:11:44 ollama[18421]: llama_model_load: vocab only - skipping tensors
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.472+03:00 level=WARN source=server.go:168 msg="requested context size too large for model" num_ctx=262144 n_ctx_train=202752
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.472+03:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /ai/llm/models/blobs/sha256-1379d14cce58314d62f3a7fde521ac7e45a6be1d3114abddfc240e6cbd8e4ae4 --port 36375"
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.472+03:00 level=INFO source=sched.go:463 msg="system memory" total="245.1 GiB" free="238.3 GiB" free_swap="8.0 GiB"
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.472+03:00 level=INFO source=sched.go:470 msg="gpu memory" id=GPU-7a420261-3e37-b15a-2f9b-2fd7da322dbe library=CUDA available="23.1 GiB" free="23.5 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.472+03:00 level=INFO source=sched.go:470 msg="gpu memory" id=GPU-0dcf0ac3-bef6-f019-91c8-276cd8e2c0c0 library=CUDA available="22.7 GiB" free="23.1 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.472+03:00 level=INFO source=sched.go:470 msg="gpu memory" id=GPU-dfc3d6a8-e942-e903-be73-9b14ce01db39 library=CUDA available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.472+03:00 level=INFO source=server.go:497 msg="loading model" "model layers"=94 requested=-1
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.473+03:00 level=WARN source=server.go:1043 msg="model request too large for system" requested="279.9 GiB" available="246.3 GiB" total="245.1 GiB" free="238.3 GiB" swap="8.0 GiB"
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.473+03:00 level=INFO source=sched.go:490 msg="Load failed" model=/ai/llm/models/blobs/sha256-1379d14cce58314d62f3a7fde521ac7e45a6be1d3114abddfc240e6cbd8e4ae4 error="model requires more system memory (279.9 GiB) than is available (246.3 GiB)"
Feb 13 13:11:44 ollama[18421]: time=2026-02-13T13:11:44.477+03:00 level=INFO source=runner.go:965 msg="starting go runner"
Feb 13 13:11:44 ollama[18421]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 13 13:11:44 ollama[18421]: [GIN] 2026/02/13 - 13:11:44 | 500 |  644.613696ms |  192.168.127.20 | POST     "/api/chat"

Fri Feb 13 13:23:50 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.126.09             Driver Version: 580.126.09     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off |   00000000:01:00.0 Off |                  Off |
|  0%   21C    P8              2W /  450W |       1MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 4090        Off |   00000000:03:00.0 Off |                  Off |
|  0%   22C    P8             10W /  450W |       1MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  Tesla V100-SXM2-32GB           Off |   00000000:09:00.0 Off |                    0 |
| N/A   21C    P0             20W /  300W |       0MiB /  32768MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.15.5-0.16.1

GiteaMirror added the bug label 2026-04-29 09:43:31 -05:00

@rick-github commented on GitHub (Feb 13, 2026):

#14116


@ka-admin commented on GitHub (Feb 13, 2026):

My settings are:

Environment="OLLAMA_MODELS=/ai/llm/models"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_KEEP_ALIVE=1h"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_LOAD_TIMEOUT=1h"
Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"
Environment="OLLAMA_NEW_ESTIMATES=1"
Environment="OLLAMA_NEW_ENGINE=1"
Environment="CUDA_VISIBLE_DEVICES=0,1,2"

I tried to use

  • OLLAMA_CONTEXT_LENGTH=xxxx
  • OLLAMA_NUM_PARALLEL=1

but it didn't help.

I'm using Open WebUI to set the context length and the number of layers to offload to the GPU.


@rick-github commented on GitHub (Feb 13, 2026):

Setting OLLAMA_CONTEXT_LENGTH should work. A full server log may aid in debugging.
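
A minimal sketch of doing that once at the service level, assuming the systemd unit shown above (the value 32768 is just an example):

```shell
# Create a drop-in override for the ollama service:
sudo systemctl edit ollama
# add to the override file:
#   [Service]
#   Environment="OLLAMA_CONTEXT_LENGTH=32768"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```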


@ka-admin commented on GitHub (Feb 13, 2026):

Yes, that helps, but it isn't right that I have to change the Ollama service settings every time I select a different model. Is there an option that can be set in the model card instead?
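
One way to attach the setting to the model itself is a Modelfile — a sketch with hypothetical model names; `num_ctx` is a standard Modelfile parameter:

```shell
# Bake a fixed context length into a local model variant (names illustrative):
cat > Modelfile <<'EOF'
FROM glm-4.7
PARAMETER num_ctx 32768
EOF
ollama create glm-4.7-ctx32k -f Modelfile
```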


@rick-github commented on GitHub (Feb 13, 2026):

You don't need to change ollama settings each time. Setting OLLAMA_CONTEXT_LENGTH once in the server settings will set the default for all models. The reason for the change in behaviour is shown in the link above.
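
The value can also be supplied per request through the API, which is how front ends like Open WebUI typically pass it — a minimal sketch with an illustrative model name:

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "glm-4.7",
  "messages": [{"role": "user", "content": "hello"}],
  "options": {"num_ctx": 32768}
}'
```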

Reference: github-starred/ollama#55779