[GH-ISSUE #12988] issue: Chat title generation does not honor mmap setting #55445

Closed
opened 2026-05-05 17:33:57 -05:00 by GiteaMirror · 0 comments

Originally created by @m-schenker on GitHub (Apr 17, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/12988

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

v0.6.5

Ollama Version (if applicable)

No response

Operating System

Fedora Linux 42

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

When mmap is enabled for a chat to reduce the memory footprint (especially relevant with CPU inferencing), the title generation and any other tasks Open WebUI runs with that model should honor the setting and load the model with mmap as well.
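
For illustration, here is a minimal Python sketch of what an mmap-honoring title request against Ollama's /api/chat endpoint could look like; the base URL and model tag are placeholders, and "use_mmap" is a documented Ollama runtime option:

```python
import requests

OLLAMA = "http://localhost:11434"  # assumption: default Ollama address

title_request = {
    "model": "deepseek-r1:671b",  # hypothetical tag; substitute your model
    "messages": [
        {"role": "user", "content": "Generate a short title for this chat: ..."}
    ],
    "stream": False,
    # The chat's advanced params (including use_mmap) would be forwarded
    # here instead of being dropped for the background task.
    "options": {"use_mmap": True},
}
response = requests.post(f"{OLLAMA}/api/chat", json=title_request).json()
print(response["message"]["content"])
```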

Actual Behavior

Open WebUI generates the answer to a prompt with the mmap setting applied, which keeps the memory footprint low and is especially useful with CPU inferencing. After the answer is delivered, it generates the chat title with the same model, but this time loads the entire model into main memory. If the load succeeds, it does so with a high memory footprint; if main memory cannot hold the entire model (often exactly the situation in which mmap is used), it fails entirely.
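
One way to watch the footprint of both loads from outside the container is to poll Ollama's documented /api/ps endpoint while the chat reply and the follow-up title generation run; a small sketch, with the base URL again an assumption:

```python
import time

import requests

OLLAMA = "http://localhost:11434"  # assumption: default Ollama address

# Print the models Ollama currently has loaded and the memory it reports
# for each, once every five seconds.
while True:
    models = requests.get(f"{OLLAMA}/api/ps", timeout=10).json().get("models", [])
    for m in models:
        print(f'{m["name"]}: {m["size"] / 2**30:.1f} GiB loaded')
    time.sleep(5)
```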

Steps to Reproduce

  1. Set default settings
  2. Enable mmap for the chat
  3. Open a chat and send a prompt to answer
  4. The model gets loaded honoring the mmap setting and starts execution
  5. The model unloads after answering
  6. The model gets loaded again without honoring the mmap setting and starts execution (see the verification sketch after this list)
  7. The model unloads after generating the title
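
To take Open WebUI out of the picture, the two loads can be reproduced directly against Ollama's /api/chat endpoint. This sketch assumes the default base URL and a placeholder model tag, and uses the documented keep_alive parameter to force an unload between requests:

```python
import requests

OLLAMA = "http://localhost:11434"  # assumption: default Ollama address
MODEL = "deepseek-r1:671b"  # hypothetical tag; substitute your model


def chat(options):
    body = {
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hi."}],
        "stream": False,
        "keep_alive": 0,  # unload the model right after the response
        "options": options,
    }
    requests.post(f"{OLLAMA}/api/chat", json=body).raise_for_status()


# First load: mmap requested explicitly, as for the chat reply.
chat({"use_mmap": True})
# Second load: no per-request options, mirroring the title-generation request.
# Watch the Ollama container logs for "--no-mmap" / "(mmap = false)" here.
chat({})
```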

Logs & Screenshots

Open WebUI logs can't be included for privacy reasons.

Ollama container logs:

time=2025-04-17T18:55:18.665Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-832c4e4d49ef03a01fa6ecb79481df071a4b4e4fbba6e0dd269bb20230db41a0 --ctx-size 8192 --batch-size 512 --threads 64 --no-mmap --parallel 4 --port 44033"
time=2025-04-17T18:55:18.666Z level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-17T18:55:18.666Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-17T18:55:18.666Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-17T18:55:18.682Z level=INFO source=runner.go:853 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-04-17T18:55:18.686Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-04-17T18:55:18.686Z level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:44033"
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /root/.ollama/models/blobs/sha256-832c4e4d49ef03a01fa6ecb79481df071a4b4e4fbba6e0dd269bb20230db41a0 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 7
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q8_0:  664 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 664.29 GiB (8.50 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 818
load: token to piece cache size = 0.8223 MB
print_info: arch             = deepseek2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 163840
print_info: n_embd           = 7168
print_info: n_layer          = 61
print_info: n_head           = 128
print_info: n_head_kv        = 128
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_embd_head_k    = 192
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 24576
print_info: n_embd_v_gqa     = 16384
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 18432
print_info: n_expert         = 256
print_info: n_expert_used    = 8
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = yarn
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 0.025
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 671B
print_info: model params     = 671.03 B
print_info: general.name     = n/a
print_info: n_layer_dense_lead   = 3
print_info: n_lora_q             = 1536
print_info: n_lora_kv            = 512
print_info: n_ff_exp             = 2048
print_info: n_expert_shared      = 1
print_info: expert_weights_scale = 2.5
print_info: expert_weights_norm  = 1
print_info: expert_gating_func   = sigmoid
print_info: rope_yarn_log_mul    = 0.1000
print_info: vocab type       = BPE
print_info: n_vocab          = 129280
print_info: n_merges         = 127741
print_info: BOS token        = 0 '<|begin▁of▁sentence|>'
print_info: EOS token        = 1 '<|end▁of▁sentence|>'
print_info: EOT token        = 1 '<|end▁of▁sentence|>'
print_info: PAD token        = 1 '<|end▁of▁sentence|>'
print_info: LF token         = 201 'Ċ'
print_info: FIM PRE token    = 128801 '<|fim▁begin|>'
print_info: FIM SUF token    = 128800 '<|fim▁hole|>'
print_info: FIM MID token    = 128802 '<|fim▁end|>'
print_info: EOG token        = 1 '<|end▁of▁sentence|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size = 680237.97 MiB
time=2025-04-17T18:55:18.917Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
llama_init_from_model: n_seq_max     = 4
llama_init_from_model: n_ctx         = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 10000.0
llama_init_from_model: freq_scale    = 0.025
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0
llama_kv_cache_init:        CPU KV buffer size = 39040.00 MiB
llama_init_from_model: KV self size  = 39040.00 MiB, K (f16): 23424.00 MiB, V (f16): 15616.00 MiB
llama_init_from_model:        CPU  output buffer size =     2.08 MiB
llama_init_from_model:        CPU compute buffer size =  2218.01 MiB
llama_init_from_model: graph nodes  = 5025
llama_init_from_model: graph splits = 1
time=2025-04-17T19:00:35.292Z level=INFO source=server.go:619 msg="llama runner started in 316.63 seconds"
[GIN] 2025/04/17 - 19:02:19 | 200 |          7m4s |     169.254.1.2 | POST     "/api/chat"
[GIN] 2025/04/17 - 19:04:17 | 200 |         1m57s |     169.254.1.2 | POST     "/api/chat"

Additional Information

No response

GiteaMirror added the bug label 2026-05-05 17:33:57 -05:00
Reference: github-starred/open-webui#55445