[GH-ISSUE #3760] Ollama does not support llama3 with "stream": false - "chat template not supported" error #48831

Closed
opened 2026-04-28 09:39:52 -05:00 by GiteaMirror · 7 comments

Originally created by @zedmango on GitHub (Apr 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3760

What is the issue?

server.log shows the following:

{"function":"validate_model_chat_template","level":"ERR","line":437,"msg":"The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses","tid":"25744","timestamp":1713547085}

This is with a template I created in the Modelfile. Why is it telling me this?
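For reference, this warning appears to come from the embedded llama.cpp runner when it inspects the GGUF's built-in tokenizer.chat_template; it seems to be independent of the TEMPLATE in the Modelfile, which Ollama applies on its own side. A llama3-style TEMPLATE might look roughly like the sketch below (a sketch only: it assumes the published llama3 instruct tokens, and the FROM line and "my-llama3" tag are placeholders; the actual custom template may differ):

```
# Sketch of a Modelfile with a llama3-instruct-style TEMPLATE (Go template syntax).
cat > Modelfile <<'EOF'
FROM llama3-70b-instructQ4KM:latest
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}"""
PARAMETER stop "<|eot_id|>"
EOF
ollama create my-llama3 -f Modelfile   # "my-llama3" is a placeholder tag
```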

The model is llama3-70b-instructQ4KM. I got this error with the Ollama Grid Search front end, and the same error with curl:

$ curl http://localhost:11434/api/generate -d '{
  "model": "llama3-70b-instructQ4KM:latest",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

I posted this bug under Ollama Grid Search: https://github.com/dezoito/ollama-grid-search/issues/7 and was told to post it here.

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.32

GiteaMirror added the bug label 2026-04-28 09:39:52 -05:00

@zedmango commented on GitHub (Apr 19, 2024):

But it works fine from the command line with curl when "stream": false is omitted:

$ curl http://localhost:11434/api/generate -d '{
  "model": "llama3-70b-instructQ4KM:latest",
  "prompt": "Why is the sky blue?"
}'
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:23.493146Z","response":"Here","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:26.1218179Z","response":" is","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:28.7380225Z","response":" the","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:31.5457876Z","response":" unc","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:34.298514Z","response":"ensored","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:36.828654Z","response":" and","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:39.4424131Z","response":" complete","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:42.0443599Z","response":" answer","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:44.6436538Z","response":":\r\n\r\n","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:47.3470744Z","response":"The","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:49.9974092Z","response":" sky","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:52.7423936Z","response":" appears","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:55.2277731Z","response":" blue","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:17:57.7483048Z","response":" because","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:18:00.4720651Z","response":" of","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:18:03.0125387Z","response":" a","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:18:05.9710912Z","response":" phenomenon","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:18:08.6087881Z","response":" called","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:18:11.1774928Z","response":" Ray","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:18:13.8879123Z","response":"leigh","done":false}
{"model":"llama3-70b-instructQ4KM:latest","created_at":"2024-04-19T18:18:16.3995868Z","response":" scattering","done":false}```

@chrishart0 commented on GitHub (Apr 20, 2024):

I am seeing the same error with llama3 8b. I am also having an issue getting Ollama to use my GPU, though I doubt that is related. Here are my logs:

ollama      | time=2024-04-20T15:43:51.497Z level=WARN source=server.go:51 msg="requested context length is greater than model max context length" requested=8279 model=8192
ollama      | time=2024-04-20T15:43:51.497Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
ollama      | time=2024-04-20T15:43:51.497Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
ollama      | time=2024-04-20T15:43:51.498Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2218309333/runners/cuda_v11/libcudart.so.11.0]"
ollama      | time=2024-04-20T15:43:51.509Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama2218309333/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 999"
ollama      | time=2024-04-20T15:43:51.509Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama      | time=2024-04-20T15:43:51.510Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.29.06]"
ollama      | time=2024-04-20T15:43:51.521Z level=INFO source=gpu.go:137 msg="Nvidia GPU detected via nvidia-ml"
ollama      | time=2024-04-20T15:43:51.521Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama      | time=2024-04-20T15:43:51.526Z level=INFO source=gpu.go:182 msg="[nvidia-ml] NVML CUDA Compute Capability detected: 8.9"
ollama      | time=2024-04-20T15:43:51.528Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
ollama      | time=2024-04-20T15:43:51.528Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
ollama      | time=2024-04-20T15:43:51.529Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2218309333/runners/cuda_v11/libcudart.so.11.0]"
ollama      | time=2024-04-20T15:43:51.531Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama2218309333/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 999"
ollama      | time=2024-04-20T15:43:51.531Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama      | time=2024-04-20T15:43:51.532Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.29.06]"
ollama      | time=2024-04-20T15:43:51.538Z level=INFO source=gpu.go:137 msg="Nvidia GPU detected via nvidia-ml"
ollama      | time=2024-04-20T15:43:51.538Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama      | time=2024-04-20T15:43:51.544Z level=INFO source=gpu.go:182 msg="[nvidia-ml] NVML CUDA Compute Capability detected: 8.9"
ollama      | time=2024-04-20T15:43:51.545Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="16356.0 MiB" used="16356.0 MiB" available="21911.7 MiB" kv="1024.0 MiB" fulloffload="560.0 MiB" partialoffload="677.5 MiB"
ollama      | time=2024-04-20T15:43:51.545Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama      | time=2024-04-20T15:43:51.546Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama2218309333/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a4bbea838ebde985f2f99d710c849219979b9608e44e1c3c46416b5fbff72d64 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --port 34091"
ollama      | time=2024-04-20T15:43:51.546Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
ollama      | {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"140168993607680","timestamp":1713627831}
ollama      | {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"140168993607680","timestamp":1713627831}
ollama      | {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":8,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"140168993607680","timestamp":1713627831,"total_threads":16}
ollama      | llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-a4bbea838ebde985f2f99d710c849219979b9608e44e1c3c46416b5fbff72d64 (version GGUF V3 (latest))
ollama      | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama      | llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama      | llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
ollama      | llama_model_loader: - kv   2:                          llama.block_count u32              = 32
ollama      | llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
ollama      | llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
ollama      | llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
ollama      | llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
ollama      | llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
ollama      | llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
ollama      | llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama      | llama_model_loader: - kv  10:                          general.file_type u32              = 1
ollama      | llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
ollama      | llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
ollama      | llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
ollama      | llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama      | llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama      | llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
ollama      | llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
ollama      | llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
ollama      | llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
ollama      | llama_model_loader: - type  f32:   65 tensors
ollama      | llama_model_loader: - type  f16:  226 tensors
ollama      | llm_load_vocab: special tokens definition check successful ( 256/128256 ).
ollama      | llm_load_print_meta: format           = GGUF V3 (latest)
ollama      | llm_load_print_meta: arch             = llama
ollama      | llm_load_print_meta: vocab type       = BPE
ollama      | llm_load_print_meta: n_vocab          = 128256
ollama      | llm_load_print_meta: n_merges         = 280147
ollama      | llm_load_print_meta: n_ctx_train      = 8192
ollama      | llm_load_print_meta: n_embd           = 4096
ollama      | llm_load_print_meta: n_head           = 32
ollama      | llm_load_print_meta: n_head_kv        = 8
ollama      | llm_load_print_meta: n_layer          = 32
ollama      | llm_load_print_meta: n_rot            = 128
ollama      | llm_load_print_meta: n_embd_head_k    = 128
ollama      | llm_load_print_meta: n_embd_head_v    = 128
ollama      | llm_load_print_meta: n_gqa            = 4
ollama      | llm_load_print_meta: n_embd_k_gqa     = 1024
ollama      | llm_load_print_meta: n_embd_v_gqa     = 1024
ollama      | llm_load_print_meta: f_norm_eps       = 0.0e+00
ollama      | llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
ollama      | llm_load_print_meta: f_clamp_kqv      = 0.0e+00
ollama      | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama      | llm_load_print_meta: f_logit_scale    = 0.0e+00
ollama      | llm_load_print_meta: n_ff             = 14336
ollama      | llm_load_print_meta: n_expert         = 0
ollama      | llm_load_print_meta: n_expert_used    = 0
ollama      | llm_load_print_meta: causal attn      = 1
ollama      | llm_load_print_meta: pooling type     = 0
ollama      | llm_load_print_meta: rope type        = 0
ollama      | llm_load_print_meta: rope scaling     = linear
ollama      | llm_load_print_meta: freq_base_train  = 500000.0
ollama      | llm_load_print_meta: freq_scale_train = 1
ollama      | llm_load_print_meta: n_yarn_orig_ctx  = 8192
ollama      | llm_load_print_meta: rope_finetuned   = unknown
ollama      | llm_load_print_meta: ssm_d_conv       = 0
ollama      | llm_load_print_meta: ssm_d_inner      = 0
ollama      | llm_load_print_meta: ssm_d_state      = 0
ollama      | llm_load_print_meta: ssm_dt_rank      = 0
ollama      | llm_load_print_meta: model type       = 7B
ollama      | llm_load_print_meta: model ftype      = F16
ollama      | llm_load_print_meta: model params     = 8.03 B
ollama      | llm_load_print_meta: model size       = 14.96 GiB (16.00 BPW) 
ollama      | llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
ollama      | llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
ollama      | llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
ollama      | llm_load_print_meta: LF token         = 128 'Ä'
ollama      | ggml_cuda_init: failed to initialize CUDA: unknown error
ollama      | llm_load_tensors: ggml ctx size =    0.11 MiB
ollama      | llm_load_tensors: offloading 32 repeating layers to GPU
ollama      | llm_load_tensors: offloading non-repeating layers to GPU
ollama      | llm_load_tensors: offloaded 33/33 layers to GPU
ollama      | llm_load_tensors:        CPU buffer size = 15317.02 MiB
ollama      | .........................................................................................
ollama      | llama_new_context_with_model: n_ctx      = 8192
ollama      | llama_new_context_with_model: n_batch    = 512
ollama      | llama_new_context_with_model: n_ubatch   = 512
ollama      | llama_new_context_with_model: freq_base  = 500000.0
ollama      | llama_new_context_with_model: freq_scale = 1
ollama      | ggml_cuda_host_malloc: warning: failed to allocate 1024.00 MiB of pinned memory: unknown error
ollama      | llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
ollama      | llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
ollama      | ggml_cuda_host_malloc: warning: failed to allocate 0.50 MiB of pinned memory: unknown error
ollama      | llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
ollama      | ggml_cuda_host_malloc: warning: failed to allocate 560.01 MiB of pinned memory: unknown error
ollama      | llama_new_context_with_model:  CUDA_Host compute buffer size =   560.01 MiB
ollama      | llama_new_context_with_model: graph nodes  = 1030
ollama      | llama_new_context_with_model: graph splits = 1
ollama      | {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":8192,"slot_id":0,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"validate_model_chat_template","level":"ERR","line":437,"msg":"The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses","tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"15","port":"34091","tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":34812,"status":200,"tid":"140150673195008","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":34824,"status":200,"tid":"140150681587712","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":34834,"status":200,"tid":"140168483364864","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":50528,"status":200,"tid":"140168547442688","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":50536,"status":200,"tid":"140168500150272","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":50544,"status":200,"tid":"140168491757568","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":37822,"status":200,"tid":"140150656409600","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":37822,"status":200,"tid":"140150656409600","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":37822,"status":200,"tid":"140150656409600","timestamp":1713627866}
ollama      | {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":37822,"status":200,"tid":"140150656409600","timestamp":1713627866}
ollama      | {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":9,"tid":"140168993607680","timestamp":1713627866}
ollama      | {"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":127,"slot_id":0,"task_id":9,"tid":"140168993607680","timestamp":1713627866}


@zedmango commented on GitHub (Apr 20, 2024):

And just to be clear: it is not generating at all with "stream" set to false, either on the command line or with the front end.
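
For reference, the failing call is simply the streaming request above with "stream": false added; capping the wait makes the hang visible (a sketch: --max-time is only there to bound the wait and is not part of the bug report):

```
curl --max-time 120 http://localhost:11434/api/generate -d '{
  "model": "llama3-70b-instructQ4KM:latest",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
# With the bug present this times out (curl exit code 28) rather than
# returning a single JSON object containing the full response.
```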


@regmibijay commented on GitHub (Apr 22, 2024):

I had the same issue and found that it only generates when num_predict is set (I set it to 128), something like the following:

{
    "model": "your llama3 tag name",
    "messages": "system and user messages",
    "options": {
        "num_predict": 128
    }
}
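
Spelled out as a full request against /api/chat, that workaround might look like the following (a sketch: the model tag and message contents are placeholders, and the API expects messages as an array of role/content objects):

```
curl http://localhost:11434/api/chat -d '{
  "model": "your-llama3-tag",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"}
  ],
  "stream": false,
  "options": {
    "num_predict": 128
  }
}'
```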

@EverThingy commented on GitHub (Apr 25, 2024):

Also getting the "The chat template comes with this model is not yet supported" error with the quantized 70B llama 3 model (llama3:70b-instruct-q4_1); llama3:8b seems to work fine, though.

time=2024-04-25T08:44:07.275Z level=INFO source=routes.go:97 msg="changing loaded model"
time=2024-04-25T08:44:08.688Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-25T08:44:08.689Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-25T08:44:08.689Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama577108160/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-25T08:44:08.703Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-25T08:44:08.703Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-25T08:44:09.056Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-25T08:44:09.251Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-25T08:44:09.251Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-25T08:44:09.251Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama577108160/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-25T08:44:09.255Z level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-25T08:44:09.255Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-25T08:44:09.576Z level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-25T08:44:09.771Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=81 layers=81 required="43696.0 MiB" used="43696.0 MiB" available="66180.1 MiB" kv="640.0 MiB" fulloffload="972.0 MiB" partialoffload="3313.4 MiB"
time=2024-04-25T08:44:09.771Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-25T08:44:09.771Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama577108160/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-b6e658b62a083448ad00a04f47a39c0753234e284db2f94b9206484f1e48a101 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --port 36039"
time=2024-04-25T08:44:09.771Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"140336364675072","timestamp":1714034649}
{"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"140336364675072","timestamp":1714034649}
{"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":32,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"140336364675072","timestamp":1714034649,"total_threads":64}
llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/.ollama/models/blobs/sha256-b6e658b62a083448ad00a04f47a39c0753234e284db2f94b9206484f1e48a101 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 80
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 3
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  161 tensors
llama_model_loader: - type q4_1:  561 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_1
llm_load_print_meta: model params     = 70.55 B
llm_load_print_meta: model size       = 41.26 GiB (5.02 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
  Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
  Device 2: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size =    1.10 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors:        CPU buffer size =   626.25 MiB
llm_load_tensors:      CUDA0 buffer size = 14281.75 MiB
llm_load_tensors:      CUDA1 buffer size = 13771.69 MiB
llm_load_tensors:      CUDA2 buffer size = 13573.55 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   224.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   216.00 MiB
llama_kv_cache_init:      CUDA2 KV buffer size =   200.00 MiB
llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.52 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   400.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   400.01 MiB
llama_new_context_with_model:      CUDA2 compute buffer size =   400.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    32.02 MiB
llama_new_context_with_model: graph nodes  = 2566
llama_new_context_with_model: graph splits = 4
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"140336364675072","timestamp":1714034655}
{"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140336364675072","timestamp":1714034655}
{"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"140336364675072","timestamp":1714034655}
{"function":"validate_model_chat_template","level":"ERR","line":437,"msg":"The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses","tid":"140336364675072","timestamp":1714034655}
{"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"63","port":"36039","tid":"140336364675072","timestamp":1714034655}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140336364675072","timestamp":1714034655}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"140336364675072","timestamp":1714034655}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"140336364675072","timestamp":1714034655}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53146,"status":200,"tid":"140334241271808","timestamp":1714034655}
Author
Owner

@JonZeolla commented on GitHub (May 1, 2024):

I'm hitting this for both llama3 8b and 70b:
[ollama_8b.log](https://github.com/ollama/ollama/files/15177486/ollama_8b.log)
[ollama_70b.log](https://github.com/ollama/ollama/files/15177487/ollama_70b.log)

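A quick way to check whether a given install still hits this fallback is to search the server log for the validation message. The log locations below are the documented defaults (assumed here; adjust for your setup):

```
# Linux, when Ollama runs as a systemd service:
journalctl -u ollama | grep validate_model_chat_template

# macOS, default server log location:
grep validate_model_chat_template ~/.ollama/logs/server.log

# Windows (PowerShell), default server log location:
Select-String validate_model_chat_template "$env:LOCALAPPDATA\Ollama\server.log"
```
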
Author
Owner

@pdevine commented on GitHub (May 30, 2024):

This was an issue with the pretokenizer changes that came out of the BPE tokenizer work. It's fixed now in `0.1.39`, so I'll go ahead and close out the issue. Let me know if any of you are still seeing it, but I just tested with `70b-instruct-q4_K_M` and everything seems fine.
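
A minimal sketch for verifying the fix, assuming an upgrade to 0.1.39 or later and that the model may need re-pulling so the local blob picks up the corrected tokenizer metadata (the tag below is illustrative; use whatever tag you have pulled):

```
# Re-pull the model after upgrading, in case the cached blob
# still carries the old pretokenizer metadata.
ollama pull llama3:70b-instruct-q4_K_M

# Retry the non-streaming request from the original report;
# the server log should no longer show the chatml fallback error.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:70b-instruct-q4_K_M",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```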
