[GH-ISSUE #7394] Open WebUI generates nonsense text, but the CLI terminal can chat normally with the same hosted model #30461

Closed
opened 2026-04-22 10:05:35 -05:00 by GiteaMirror · 9 comments

Originally created by @QiuJYWX on GitHub (Oct 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7394

### What is the issue?

The results generated by the CLI terminal:

![cli](https://github.com/user-attachments/assets/aa1ea3c5-bcd0-45e0-b9e4-480aa81036ec)

The results generated by Open WebUI:

![webui](https://github.com/user-attachments/assets/81f236c9-517c-4fc5-a3e3-6c9f284c7555)
![webui2](https://github.com/user-attachments/assets/4f35c1b5-a422-402c-82bc-eecdeba35934)

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.3.14

GiteaMirror added the needs more info, bug labels 2026-04-22 10:05:36 -05:00

@rick-github commented on GitHub (Oct 28, 2024):

Set `OLLAMA_DEBUG=1` in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux), use Open WebUI again, and then post the [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) here.
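
On a standard Linux install this is done with a systemd drop-in, per the linked FAQ; a minimal sketch (assuming the service is named `ollama`):

```sh
# Opens an override file (typically
# /etc/systemd/system/ollama.service.d/override.conf):
sudo systemctl edit ollama
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
# Then reload and restart so the variable takes effect:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```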


@QiuJYWX commented on GitHub (Oct 29, 2024):

![Screenshot 2024-10-29 082506](https://github.com/user-attachments/assets/80213fa4-36e7-4ae9-8296-3a6d78fea24b)


@QiuJYWX commented on GitHub (Oct 29, 2024):

![Screenshot 2024-10-29 082855](https://github.com/user-attachments/assets/73eea912-52ca-4ae4-a809-1fba5916edbd)


@rick-github commented on GitHub (Oct 29, 2024):

Please don't post screenshots, post the full text logs. If you've added `OLLAMA_DEBUG=1` to the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux), restart the server:

```sh
sudo systemctl stop ollama
sudo systemctl start ollama
```

Then use Open WebUI, and add the logs to this thread.
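
If the variable was added via systemd, one quick way to verify it is active after the restart (a sketch using standard systemd tooling, not a step from the original thread):

```sh
# Expect OLLAMA_DEBUG=1 to appear in the printed Environment= line.
systemctl show ollama --property=Environment
```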


@QiuJYWX commented on GitHub (Oct 29, 2024):

Thanks for your help and reminder. We have restarted the Ollama and Open WebUI Docker containers, and the full text log is as follows:

```
time=2024-10-29T02:47:10.513Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
time=2024-10-29T02:47:11.385Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 27 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 28/28 layers to GPU
llm_load_tensors:        CPU buffer size =   400.00 MiB
llm_load_tensors:      CUDA0 buffer size = 29564.48 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init:      CUDA0 KV buffer size =  2160.00 MiB
llama_new_context_with_model: KV self size  = 2160.00 MiB, K (f16): 1296.00 MiB, V (f16):  864.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     1.59 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   296.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    20.01 MiB
llama_new_context_with_model: graph nodes  = 1924
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140363244130304" timestamp=1730170035
time=2024-10-29T02:47:15.154Z level=INFO source=server.go:626 msg="llama runner started in 6.35 seconds"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/10/29 - 02:47:22 | 200 | 20.263762301s |      172.17.0.1 | POST     "/api/chat"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/10/29 - 02:51:14 | 200 |  5.920329725s |      172.17.0.1 | POST     "/api/chat"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/10/29 - 02:51:14 | 200 |  167.141361ms |      172.17.0.1 | POST     "/v1/chat/completions"
```
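
The repeated `check_double_bos_eos` warning says the rendered prompt already starts with a BOS token before the tokenizer adds its own. One way to check whether the model's chat template contains a literal BOS (a sketch; the tag `deepseek-coder-v2` is assumed from the model name in the log):

```sh
# Print the prompt template baked into the model; look for a
# leading <|begin▁of▁sentence|> that would duplicate the
# tokenizer's automatic BOS.
ollama show deepseek-coder-v2 --template
```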

@rick-github commented on GitHub (Oct 29, 2024):

This log doesn't appear to have any debug information, and it's also not a complete log.

1. Set `OLLAMA_DEBUG=1` in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux).
2. Run `systemctl cat ollama` and add the result to this thread (a sketch of the expected output follows this list).
3. Restart ollama: `sudo systemctl stop ollama ; sudo systemctl start ollama`.
4. Use Open WebUI.
5. Run `journalctl -u ollama --no-pager` and add the result to this thread.
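
For reference, with the drop-in from the FAQ in place, `systemctl cat ollama` should print the unit file followed by the override, roughly like this (an illustrative sketch, not output from this system; paths and ExecStart may differ per install):

```ini
# /etc/systemd/system/ollama.service
[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_DEBUG=1"
```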


@QiuJYWX commented on GitHub (Oct 29, 2024):

The following log is the Ollama Docker log:
```
time=2024-10-29T08:43:24.907Z level=INFO source=server.go:105 msg="system memory" total="2015.5 GiB" free="1961.8 GiB" free_swap="8.0 GiB"
time=2024-10-29T08:43:24.908Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=28 layers.offload=28 layers.split="" memory.available="[78.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="31.9 GiB" memory.required.partial="31.9 GiB" memory.required.kv="2.1 GiB" memory.required.allocations="[31.9 GiB]" memory.weights.total="30.6 GiB" memory.weights.repeating="30.2 GiB" memory.weights.nonrepeating="400.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="391.4 MiB"
time=2024-10-29T08:43:24.908Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-2de57aa898372715c457b7c06362bb9826f3e47ba2a83679f162fcb7b40e763b --ctx-size 8192 --batch-size 512 --embedding --n-gpu-layers 28 --threads 32 --parallel 4 --port 43789"
time=2024-10-29T08:43:24.909Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-10-29T08:43:24.909Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-10-29T08:43:24.909Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [main] starting c++ runner | tid="140660167258112" timestamp=1730191404
INFO [main] build info | build=10 commit="b45ed63" tid="140660167258112" timestamp=1730191404
INFO [main] system info | n_threads=32 n_threads_batch=32 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140660167258112" timestamp=1730191404 total_threads=128
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="127" port="43789" tid="140660167258112" timestamp=1730191404
llama_model_loader: loaded meta data with 38 key-value pairs and 377 tensors from /root/.ollama/models/blobs/sha256-2de57aa898372715c457b7c06362bb9826f3e47ba2a83679f162fcb7b40e763b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.name str = DeepSeek-Coder-V2-Lite-Instruct
llama_model_loader: - kv 2: deepseek2.block_count u32 = 27
llama_model_loader: - kv 3: deepseek2.context_length u32 = 163840
llama_model_loader: - kv 4: deepseek2.embedding_length u32 = 2048
llama_model_loader: - kv 5: deepseek2.feed_forward_length u32 = 10944
llama_model_loader: - kv 6: deepseek2.attention.head_count u32 = 16
llama_model_loader: - kv 7: deepseek2.attention.head_count_kv u32 = 16
llama_model_loader: - kv 8: deepseek2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: deepseek2.expert_used_count u32 = 6
llama_model_loader: - kv 11: general.file_type u32 = 1
llama_model_loader: - kv 12: deepseek2.leading_dense_block_count u32 = 1
llama_model_loader: - kv 13: deepseek2.vocab_size u32 = 102400
llama_model_loader: - kv 14: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 15: deepseek2.attention.key_length u32 = 192
llama_model_loader: - kv 16: deepseek2.attention.value_length u32 = 128
llama_model_loader: - kv 17: deepseek2.expert_feed_forward_length u32 = 1408
llama_model_loader: - kv 18: deepseek2.expert_count u32 = 64
llama_model_loader: - kv 19: deepseek2.expert_shared_count u32 = 2
llama_model_loader: - kv 20: deepseek2.expert_weights_scale f32 = 1.000000
llama_model_loader: - kv 21: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 22: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 23: deepseek2.rope.scaling.factor f32 = 40.000000
llama_model_loader: - kv 24: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 25: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.070700
llama_model_loader: - kv 26: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 27: tokenizer.ggml.pre str = deepseek-llm
llama_model_loader: - kv 28: tokenizer.ggml.tokens arr[str,102400] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 30: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 100000
llama_model_loader: - kv 32: tokenizer.ggml.eos_token_id u32 = 100001
llama_model_loader: - kv 33: tokenizer.ggml.padding_token_id u32 = 100001
llama_model_loader: - kv 34: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 35: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 36: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 37: general.quantization_version u32 = 2
llama_model_loader: - type f32: 108 tensors
llama_model_loader: - type f16: 269 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 2400
time=2024-10-29T08:43:25.161Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: token to piece cache size = 0.6661 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = deepseek2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 102400
llm_load_print_meta: n_merges = 99757
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 163840
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 27
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 192
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 3072
llm_load_print_meta: n_embd_v_gqa = 2048
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 10944
llm_load_print_meta: n_expert = 64
llm_load_print_meta: n_expert_used = 6
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = yarn
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 16B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 15.71 B
llm_load_print_meta: model size = 29.26 GiB (16.00 BPW)
llm_load_print_meta: general.name = DeepSeek-Coder-V2-Lite-Instruct
llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 126 'Ä'
llm_load_print_meta: EOG token = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead = 1
llm_load_print_meta: n_lora_q = 0
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 1408
llm_load_print_meta: n_expert_shared = 2
llm_load_print_meta: expert_weights_scale = 1.0
llm_load_print_meta: rope_yarn_log_mul = 0.0707
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA H100 80GB HBM3, compute capability 9.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.32 MiB
time=2024-10-29T08:43:26.617Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
time=2024-10-29T08:43:26.945Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 27 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 28/28 layers to GPU
llm_load_tensors: CPU buffer size = 400.00 MiB
llm_load_tensors: CUDA0 buffer size = 29564.48 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init: CUDA0 KV buffer size = 2160.00 MiB
llama_new_context_with_model: KV self size = 2160.00 MiB, K (f16): 1296.00 MiB, V (f16): 864.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 1.59 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 296.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB
llama_new_context_with_model: graph nodes = 1924
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140660167258112" timestamp=1730191410
time=2024-10-29T08:43:30.966Z level=INFO source=server.go:626 msg="llama runner started in 6.06 seconds"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/10/29 - 08:43:36 | 200 | 15.443053609s | 172.17.0.1 | POST "/api/chat"
check_double_bos_eos: Added a BOS token to the prompt as specified by the model but the prompt also starts with a BOS token. So now the final prompt starts with 2 BOS tokens. Are you sure this is what you want?
[GIN] 2024/10/29 - 08:43:36 | 200 | 182.011307ms | 172.17.0.1 | POST "/v1/chat/completions"
```

The following log is the Open WebUI Docker log:
```
INFO:apps.ollama.main:url: http://host.docker.internal:11434
INFO: 10.1.0.23:3518 - "POST /ollama/api/chat HTTP/1.1" 200 OK
INFO: 10.1.0.23:3518 - "POST /api/v1/chats/51e2f958-7e77-45de-b4bc-09b2a9a10de3 HTTP/1.1" 200 OK
INFO: 10.1.0.23:3518 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO:apps.ollama.main:url: http://host.docker.internal:11434
INFO: 10.1.0.23:3518 - "POST /ollama/v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.1.0.23:3518 - "POST /api/v1/chats/51e2f958-7e77-45de-b4bc-09b2a9a10de3 HTTP/1.1" 200 OK
INFO: 10.1.0.23:3518 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO: 10.1.0.23:3518 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO:apps.ollama.main:url: http://host.docker.internal:11434
INFO: 10.1.0.23:4064 - "POST /ollama/api/chat HTTP/1.1" 200 OK
INFO: 10.1.0.23:4064 - "POST /api/v1/chats/51e2f958-7e77-45de-b4bc-09b2a9a10de3 HTTP/1.1" 200 OK
INFO: 10.1.0.23:4064 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO:apps.ollama.main:url: http://host.docker.internal:11434
INFO: 10.1.0.23:4064 - "POST /ollama/v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.1.0.23:4064 - "POST /api/v1/chats/51e2f958-7e77-45de-b4bc-09b2a9a10de3 HTTP/1.1" 200 OK
INFO: 10.1.0.23:4064 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
INFO: 10.1.0.23:4064 - "GET /api/v1/chats/ HTTP/1.1" 200 OK
```

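The logs show Open WebUI reaching the model through both `/api/chat` and the OpenAI-compatible `/v1/chat/completions`, while `ollama run` uses only the native API. To check whether the garbled output comes from an endpoint rather than the UI, one could call both directly; a minimal sketch (the model tag `deepseek-coder-v2` is an assumption based on the loaded model name, and the default port is assumed):

```sh
# Native Ollama chat endpoint (what the CLI uses):
curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-coder-v2",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

# OpenAI-compatible endpoint (one of the routes Open WebUI calls):
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-coder-v2",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```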

@rick-github commented on GitHub (Oct 29, 2024):

There is no debug information in this log.

Add `OLLAMA_DEBUG=1` to the container environment. If you are using plain docker: `-e OLLAMA_DEBUG=1`. If you are using docker compose:

```yaml
services:
  ollama:
    environment:
      - OLLAMA_DEBUG=1
```

1. Restart the ollama docker container (for plain `docker run` setups, see the sketch after this list).
2. Use Open WebUI.
3. Get the full log: `docker logs ollama > ollama.log 2>&1`. Attach the `ollama.log` file to this thread.
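
Note that with plain docker, `-e` only applies when the container is created, so an existing container has to be removed and recreated for the variable to take effect; a sketch assuming the standard `ollama/ollama` image and container name:

```sh
# Remove the old container (the model data lives in the
# named volume, so it is preserved):
docker rm -f ollama
# Recreate it with debug logging enabled:
docker run -d --gpus=all \
  -e OLLAMA_DEBUG=1 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```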


@pdevine commented on GitHub (Nov 13, 2024):

I think we can close this as an Open WebUI issue? I'll close it for now, but we can reopen it if it's an Ollama issue.

Reference: github-starred/ollama#30461