[GH-ISSUE #13375] GPU #8832

Closed
opened 2026-04-12 21:36:59 -05:00 by GiteaMirror · 7 comments

Originally created by @Eb7CAPJi on GitHub (Dec 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13375

What is the issue?

The latest update to version 0.13.1 has caused problems with GPU usage – in fact, it simply refuses to use the GPU. There is no way to disable updates, and offering one would be good practice. Even better would be thorough testing of the product before release, instead of running experiments on users, especially now that AI is involved. Oh, and this software isn’t actually for AI? Exactly… Please, developers, use the software for what it was intended for and test it thoroughly.

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.13.1

GiteaMirror added the nvidia, bug labels 2026-04-12 21:36:59 -05:00

@rick-github commented on GitHub (Dec 8, 2025):

[Server log](https://docs.ollama.com/troubleshooting) will aid in debugging. Disable updates by setting `OLLAMA_UPDATE_URL=:` in the environment.

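A minimal sketch of applying that suggestion (the variable name comes from the comment above; the Windows and systemd specifics below are assumptions about typical installs):

```shell
# Sketch: disable Ollama's self-update check by pointing the updater at an
# unusable URL, per the comment above.

# Windows (the reporter's platform) -- persists for new processes;
# restart Ollama afterwards:
setx OLLAMA_UPDATE_URL ":"

# Linux with the systemd service -- run `sudo systemctl edit ollama` and add:
#   [Service]
#   Environment="OLLAMA_UPDATE_URL=:"
# then `sudo systemctl restart ollama`.
```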

@ianmock commented on GitHub (Dec 8, 2025):

Seeing the same thing after the latest update:
System has 3x 32 GB V100 GPUs.

```
Dec 08 20:10:23 ai2 systemd[1]: Stopping ollama.service - Ollama Service...
Dec 08 20:10:23 ai2 systemd[1]: ollama.service: Deactivated successfully.
Dec 08 20:10:23 ai2 systemd[1]: Stopped ollama.service - Ollama Service.
Dec 08 20:10:23 ai2 systemd[1]: ollama.service: Consumed 6.400s CPU time, 745.1M memory peak, 0B memory swap peak.
Dec 08 20:10:23 ai2 systemd[1]: Starting ollama.service - Ollama Service...
Dec 08 20:10:38 ai2 systemd[1]: Started ollama.service - Ollama Service.
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.865Z level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:true OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:0 http_proxy: https_proxy: no_proxy:]"
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.868Z level=INFO source=images.go:522 msg="total blobs: 34"
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.870Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.870Z level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.2)"
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.871Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.871Z level=WARN source=runner.go:485 msg="user overrode visible devices" ROCR_VISIBLE_DEVICES=0
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.871Z level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
Dec 08 20:10:38 ai2 ollama[1621513]: time=2025-12-08T20:10:38.872Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 40329"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.298Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 35845"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.706Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.706Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42477"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.706Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41461"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.707Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 39291"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.707Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34987"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.709Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42587"
Dec 08 20:10:39 ai2 ollama[1621513]: time=2025-12-08T20:10:39.709Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 36247"
Dec 08 20:10:40 ai2 ollama[1621513]: time=2025-12-08T20:10:40.079Z level=INFO source=types.go:42 msg="inference compute" id=GPU-01ded5e7-905c-046d-ae5c-dceb274e436a filter_id="" library=CUDA compute=7.0 name=CUDA0 description="Tesla V100-SXM2-32GB" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:04:00.0 type=discrete total="32.0 GiB" available="31.7 GiB"
Dec 08 20:10:40 ai2 ollama[1621513]: time=2025-12-08T20:10:40.079Z level=INFO source=types.go:42 msg="inference compute" id=GPU-9ca09234-43f4-9e08-6d7a-017a9374bb68 filter_id="" library=CUDA compute=7.0 name=CUDA1 description="Tesla V100-SXM2-32GB" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:0b:00.0 type=discrete total="32.0 GiB" available="31.4 GiB"
Dec 08 20:10:40 ai2 ollama[1621513]: time=2025-12-08T20:10:40.079Z level=INFO source=types.go:42 msg="inference compute" id=GPU-af4d4570-507f-098d-7d8c-0882bd928a93 filter_id="" library=CUDA compute=7.0 name=CUDA2 description="Tesla V100-SXM2-32GB" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:13:00.0 type=discrete total="32.0 GiB" available="31.4 GiB"
Dec 08 20:10:44 ai2 ollama[1621513]: [GIN] 2025/12/08 - 20:10:44 | 200 |    2.673309ms |      172.17.0.3 | GET      "/api/tags"
Dec 08 20:10:44 ai2 ollama[1621513]: [GIN] 2025/12/08 - 20:10:44 | 200 |     136.019µs |      172.17.0.3 | GET      "/api/ps"
Dec 08 20:10:48 ai2 ollama[1621513]: time=2025-12-08T20:10:48.395Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38503"
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: loaded meta data with 49 key-value pairs and 1736 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-275caf7d0bc263f595016079d1b73639b7b3f7ef4a2524f344e152c49427a2f6 (version GGUF V3 (latest))
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   0:                       general.architecture str              = glm4moe
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   1:                               general.type str              = model
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   2:                               general.name str              = GLM-4.6-REAP-218B-A32B-FP8
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   3:                           general.finetune str              = FP8
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   4:                           general.basename str              = GLM-4.6-REAP
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   5:                        general.description str              = This model was obtained by uniformly ...
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   6:                         general.size_label str              = 218B-A32B
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   7:                            general.license str              = mit
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   8:                       general.license.link str              = https://huggingface.co/zai-org/GLM-4....
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv   9:                   general.base_model.count u32              = 1
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  10:                  general.base_model.0.name str              = GLM 4.6 FP8
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  11:          general.base_model.0.organization str              = Zai Org
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  12:              general.base_model.0.repo_url str              = https://huggingface.co/zai-org/GLM-4....
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  13:                               general.tags arr[str,5]       = ["glm", "MOE", "pruning", "compressio...
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  14:                          general.languages arr[str,1]       = ["en"]
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  15:                        glm4moe.block_count u32              = 92
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  16:                     glm4moe.context_length u32              = 202752
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  17:                   glm4moe.embedding_length u32              = 5120
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  18:                glm4moe.feed_forward_length u32              = 12288
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  19:               glm4moe.attention.head_count u32              = 96
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  20:            glm4moe.attention.head_count_kv u32              = 8
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  21:                     glm4moe.rope.freq_base f32              = 1000000.000000
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  22:   glm4moe.attention.layer_norm_rms_epsilon f32              = 0.000010
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  23:                  glm4moe.expert_used_count u32              = 8
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  24:               glm4moe.attention.key_length u32              = 128
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  25:             glm4moe.attention.value_length u32              = 128
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  26:                          general.file_type u32              = 21
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  27:               glm4moe.rope.dimension_count u32              = 64
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  28:                       glm4moe.expert_count u32              = 96
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  29:         glm4moe.expert_feed_forward_length u32              = 1536
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  30:                glm4moe.expert_shared_count u32              = 1
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  31:          glm4moe.leading_dense_block_count u32              = 3
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  32:                 glm4moe.expert_gating_func u32              = 2
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  33:               glm4moe.expert_weights_scale f32              = 2.500000
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  34:                glm4moe.expert_weights_norm bool             = true
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  35:               glm4moe.nextn_predict_layers u32              = 0
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  36:               general.quantization_version u32              = 2
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  37:                       tokenizer.ggml.model str              = gpt2
Dec 08 20:10:48 ai2 ollama[1621513]: llama_model_loader: - kv  38:                         tokenizer.ggml.pre str              = glm4
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  39:                      tokenizer.ggml.tokens arr[str,151552]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  40:                  tokenizer.ggml.token_type arr[i32,151552]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  41:                      tokenizer.ggml.merges arr[str,318088]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  42:                tokenizer.ggml.eos_token_id u32              = 151329
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  43:            tokenizer.ggml.padding_token_id u32              = 151329
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  44:                tokenizer.ggml.bos_token_id u32              = 151331
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  45:                tokenizer.ggml.eot_token_id u32              = 151336
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  46:            tokenizer.ggml.unknown_token_id u32              = 151329
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  47:                tokenizer.ggml.eom_token_id u32              = 151338
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - kv  48:                    tokenizer.chat_template str              = [gMASK]<sop>\n{%- if tools -%}\n<|syste...
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - type  f32:  823 tensors
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - type q2_K:  808 tensors
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - type q4_K:  104 tensors
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_loader: - type q6_K:    1 tensors
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: file format = GGUF V3 (latest)
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: file type   = Q2_K - Small
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: file size   = 67.41 GiB (2.65 BPW)
Dec 08 20:10:49 ai2 ollama[1621513]: load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
Dec 08 20:10:49 ai2 ollama[1621513]: load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
Dec 08 20:10:49 ai2 ollama[1621513]: load: printing all EOG tokens:
Dec 08 20:10:49 ai2 ollama[1621513]: load:   - 151329 ('<|endoftext|>')
Dec 08 20:10:49 ai2 ollama[1621513]: load:   - 151336 ('<|user|>')
Dec 08 20:10:49 ai2 ollama[1621513]: load:   - 151338 ('<|observation|>')
Dec 08 20:10:49 ai2 ollama[1621513]: load: special tokens cache size = 36
Dec 08 20:10:49 ai2 ollama[1621513]: load: token to piece cache size = 0.9713 MB
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: arch             = glm4moe
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: vocab_only       = 1
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: model type       = ?B
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: model params     = 218.38 B
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: general.name     = GLM-4.6-REAP-218B-A32B-FP8
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: vocab type       = BPE
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: n_vocab          = 151552
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: n_merges         = 318088
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: BOS token        = 151331 '[gMASK]'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: EOS token        = 151329 '<|endoftext|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: EOT token        = 151336 '<|user|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: EOM token        = 151338 '<|observation|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: UNK token        = 151329 '<|endoftext|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: PAD token        = 151329 '<|endoftext|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: LF token         = 198 'Ċ'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: FIM PRE token    = 151347 '<|code_prefix|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: FIM SUF token    = 151349 '<|code_suffix|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: FIM MID token    = 151348 '<|code_middle|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: EOG token        = 151329 '<|endoftext|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: EOG token        = 151336 '<|user|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: EOG token        = 151338 '<|observation|>'
Dec 08 20:10:49 ai2 ollama[1621513]: print_info: max token length = 1024
Dec 08 20:10:49 ai2 ollama[1621513]: llama_model_load: vocab only - skipping tensors
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.341Z level=INFO source=server.go:209 msg="enabling flash attention"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.341Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-275caf7d0bc263f595016079d1b73639b7b3f7ef4a2524f344e152c49427a2f6 --port 45739"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.342Z level=INFO source=sched.go:443 msg="system memory" total="62.8 GiB" free="60.4 GiB" free_swap="8.0 GiB"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.342Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-01ded5e7-905c-046d-ae5c-dceb274e436a library=CUDA available="31.3 GiB" free="31.7 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.342Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-9ca09234-43f4-9e08-6d7a-017a9374bb68 library=CUDA available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.342Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-af4d4570-507f-098d-7d8c-0882bd928a93 library=CUDA available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.342Z level=INFO source=server.go:459 msg="loading model" "model layers"=93 requested=256
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.344Z level=WARN source=server.go:989 msg="model request too large for system" requested="89.6 GiB" available="68.4 GiB" total="62.8 GiB" free="60.4 GiB" swap="8.0 GiB"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.344Z level=INFO source=sched.go:470 msg="Load failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-275caf7d0bc263f595016079d1b73639b7b3f7ef4a2524f344e152c49427a2f6 error="model requires more system memory (89.6 GiB) than is available (68.4 GiB)"
Dec 08 20:10:49 ai2 ollama[1621513]: time=2025-12-08T20:10:49.357Z level=INFO source=runner.go:963 msg="starting go runner"
Dec 08 20:10:49 ai2 ollama[1621513]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Dec 08 20:10:49 ai2 ollama[1621513]: [GIN] 2025/12/08 - 20:10:49 | 500 |  1.271907784s |      172.17.0.3 | POST     "/api/chat"
```


@rick-github commented on GitHub (Dec 8, 2025):

What happens if you don't override `num_ctx` and don't set `OLLAMA_SCHED_SPREAD`?

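A minimal sketch of trying that, assuming the systemd install shown in the log above (where exactly the overrides live is an assumption; `num_ctx` may instead be set per request or in a Modelfile):

```shell
# Sketch: revert both overrides, assuming they were set via systemd.
sudo systemctl edit ollama     # delete Environment="OLLAMA_SCHED_SPREAD=true"
                               # (the log also warns about ROCR_VISIBLE_DEVICES=0)
sudo systemctl restart ollama

# Then send requests without an explicit num_ctx so the server falls back
# to OLLAMA_CONTEXT_LENGTH (4096 in the config above).
```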

@ianmock commented on GitHub (Dec 8, 2025):

Removing the overridden context seemed to do the trick for this model. Is there a better way to do it?


@rick-github commented on GitHub (Dec 8, 2025):

Better way to do what?


@ianmock commented on GitHub (Dec 8, 2025):

Increase context size.


@rick-github commented on GitHub (Dec 8, 2025):

More context requires more RAM. Setting [`OLLAMA_KV_CACHE_TYPE`](https://github.com/ollama/ollama/blob/main/docs/faq.mdx#how-can-i-set-the-quantization-type-for-the-kv-cache) will decrease the memory footprint.

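A sketch of applying that on the same systemd install (`q8_0` is one of the values the linked FAQ documents; note it requires flash attention, which the log above shows is already enabled via `OLLAMA_FLASH_ATTENTION=true`):

```shell
# Sketch: quantize the KV cache to shrink the per-token memory cost.
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_KV_CACHE_TYPE=q8_0"   # roughly half the f16 footprint
sudo systemctl restart ollama
```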