[GH-ISSUE #3819] llama3:70b generating gibberish #48873

Closed
opened 2026-04-28 09:56:37 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @holytony on GitHub (Apr 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3819

What is the issue?

#normal response at first...
#then I got this:
from recognizing the shape of a Totem to to to, to\\ to\\\\\\.\\,\\\\,
of\\\\\\\\\.\\.\\ to\\ to\\\\\\ to\\\\. -,\\\\\\\\ to\\\\\\\.\\. to
and\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\and\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to a
to\\,\\\\\\\\\\\\,\\\\\\\\\\\\\\\,\\\\\\\\,\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
and\\ to\\\\\\\\\\\\\\\\\\\\\\\\\\\\.
a\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
to\\\\\\\\\\\\\\\\\\\ \\\\\\\\.\\\\\\\\\\\\\\\\\\\\ and\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
and\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\,
to\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ \ and\\\\\\\\\\\\\\\\\\\\\\ a\\\\\\\\\\\\\\ to\\\\,\\\\
to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\
and\\\\\\\\\ to\\ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\
to\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ \\\\\\\ and\\\\ \ \\\\\\\\\\\\\\\\\\\ \ \ \\\\ \\\\\\\. and\\\\\\\\
to\\\\\\\\\\\\\ to\\\\\\\\\ to\\\\\\ a to\\\\\\\\\\\\ to,\\\\\\\\\\\\\\., a\\\\\\\\\\\\\\\\\\\\\\\\ to\\\\
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ and\\\\\\\\\\\\\\\\\\\\\\\\ the to\\\\\\\ of\\\\\\\\\\\\\\\\ to,,.\\\\\ a
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\ and\\\\\\\\\\,\\ to\\\\\\\\.\\\\\\\\\\\\\\
and\\\\\\\\\\\\\\\\\, a the\\\\\\,\\\\\\\\\\\ to\\\\\\\ \\\\\\\\\\\\\\\\\\\\\\\\\\\\
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ and\\\\\\\\\\\\\\\ a\\ \\\\\\\\\\\\\\\\\\
is\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\ in\\\\\ a\\\\\\\\\\\\\\ \\\\\ to\\\\\\\\\\\ and\\\\.\\ of\\\\\ \\\\\\\\\\\\\
and\\\ the\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ a\\\\\\\\\\ to\\\\\\ to\\, in\\\\
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ and\\\\\\\\\\\\\\\\\\\\\
\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ to to\\.\\\,\\\\\\\\\\\\
and\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ is in\\\\\\\\\\\ \\\ the\\\\\\\\\\\\\\\\\\\\\\\
the\\,\\\\\\\\\\\\ a a\\ the others\\\\ the more. the\\ the a a the which\\\\\\\\\\\\\\ the a the to\\\\. the a\\. with other the a, in\\\\. that,\\,,,
the a a\\ the,, to\\\\\\. the the\\, a in\\\\\\\\ which a in which of the the\\\\ a as\\ \ the but\\\\\\\\ the in to\\\\, in\\.\\ a to,, the\\ a
other\\\\\\ a a the to to\\\\\\\\\\ the a to and the. to\\\\\\\\\\\\\\\\\\ which the and of.
.\\\\\\. a\, that\\..
to it\\\\\\\\\\. or of. in a\\\\ \\\\\\\\\\\,,.
the a in\\ \ a\\\\\\\\\\\\ the and in\\\ with, a\\\\\\ of and a a the with\\ the other\\ the a.,\\\\\\ a the is, to\\\\\\, the\\.
. \ a with\\\\\\\\\\\ a, to,, to a\\\\ in to.
is.\\\\\\\ the to of\\. a\\ the a\ to\\\\\\\, a\\\\. the the the a\\\\\,, to the\\ to \ \\\, in a\\\\\\\\.. a,.\\\\\\ which. the the. a and., the of
the\\ and\\\\ the a\\\\\\\\\\ and an\\,\\\\. the at \, the that the which,\\\,\\\\\\ \\\ a\\\\ and the in a\ \ a a\\. to\\\\ \, for in\\\\\\ a
a\\\\\\ with\\\\\\\\ a the. or\\\\\\\\. the it and the\\\ the, we\\\\\\ a,\\. of ,\\\\. but.\\\\\\ that a\\\ the,, to of to the a to a\\ that the
and,. the\ to a, a\\,,,\\\\ the for the the the of a, his\\\\ the a. to\\\\\\\\,\\\\ and ., a\\.
a of\\ \. in the\\. which. a\\\\ the a\\\\\\\\\\ a that the that. this in\\\\ \ the as\. with\\\\ the it\\., to \ the in\\ to.\\\\ to the the\\\\ the\\
and of the. the\\ that \\\\\, the a. and the. a. the to. the.
the the which\\\\ for\\ \\\\\\\ this the the, the, the,\\ of the of. the\\. of the the, the. which\\. the a. a\\. the of in the. the that to the which in\. but\\ of and the
\ the the, of\\.,. a. a to the\ are or\\\\ have. of\\. a. a. an. of. the. the. the,\\\\ the to\\\ the the the,\\ and, the in the of, the of it. that., at\\ the of the \ that \
the the to the,\\. some and to the with to in\ which in he. some to. a. the\\; the of the,\\ they. the, which\\ \\ the a to the the the of a and the the in. a. an of and the . the it, and
that the, in the a. the of. the the\\. the the \ of a of in the\\. a\\. a, the and the in. the the to the. the the the the, the. the of of the. the the the the in, the of of of .
\ the a that the and of the, that of. the. the the they the. the, a for. which of. the the. a. a. some to the\\ that the \\\\\\. a; of the with of the.
the the and the of the to the, it from in the in the, the\\\\ of. the of. the of the, this in and the the the. the the of\\; an., the to. to of. of is to the of it to\ and the of \. a of.
of, some. the of., a. to.. we which have.
, the in and,, the of.,. that of the. to this. are, it.
that of\\ and. they of the.
with a of \ the of and the and a and the the the that a\\. the a and a, for the of the as well as the of the as.
the his and of the or and the other's, . the the other's to the of the as of the other's of the other's to the other. ( , of them.

Auch of the other's of the other's of the other's of the other's of the other. of the other.
.

a few of the other's of the other
, (auch of the other's of the other's of the other'. Of the other's of the other.
auch of the other's of the other's of the other. of the other's of the other.

of the other.

auch of the other.of the other.of the other.
auch of the other.of the other.
auch of the other.of the other.
auch of the other.

auch of the other's of the other.
auchof the other.of the other.
auch of the other'.

of the other.
auch of the other.of the other
a of the other.
auch of the other.of of the other.
auch of the other.
auch of the other.

auch of the other's of the other.
auch of the other.
auch of the other.
auch of the other.

auch of the other.
auch of the other.
auch of the other.of of the other;
of of the other.
auch of the other.
auch of the other.
auch of the other.
auch of the other

>>> are you alright
I'm just a language model, I don't have feelings or emotions like humans do. However, I can understand and respond to expressions of concern.

To answer your question, "Are you alright?" is a polite way of asking if someone is okay or if they need any help. It's often used as a greeting or as a response to someone who might be going through
a tough time.

If you're asking about me, the AI assistant, I'm functioning properly and ready to assist with any questions or tasks you may have!

Here is what I got on the `ollama serve` side:

loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
time=2024-04-22T19:26:57.679+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
time=2024-04-22T19:26:57.679+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/llm/ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 80
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.55 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 563.62 MiB
llm_load_tensors: CUDA0 buffer size = 37546.98 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 640.00 MiB
llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 266.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 324.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB
llama_new_context_with_model: graph nodes = 2644
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140379110233664","timestamp":1713785225}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140379110233664","timestamp":1713785225}
time=2024-04-22T19:27:05.579+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373568767552","timestamp":1713785225}
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785225}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":233,"slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785225}
{"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785225}
{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 662.32 ms / 233 tokens ( 2.84 ms per token, 351.79 tokens per second)","n_prompt_tokens_processed":233,"n_tokens_second":351.793163737825,"slot_id":0,"t_prompt_processing":662.321,"t_token":2.842579399141631,"task_id":0,"tid":"140373568767552","timestamp":1713785267}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 41013.01 ms / 710 runs ( 57.76 ms per token, 17.31 tokens per second)","n_decoded":710,"n_tokens_second":17.311579488762725,"slot_id":0,"t_token":57.76480422535211,"t_token_generation":41013.011,"task_id":0,"tid":"140373568767552","timestamp":1713785267}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 41675.33 ms","slot_id":0,"t_prompt_processing":662.321,"t_token_generation":41013.011,"t_total":41675.332,"task_id":0,"tid":"140373568767552","timestamp":1713785267}
{"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":943,"n_ctx":2048,"n_past":942,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373568767552","timestamp":1713785267,"truncated":false}
[GIN] 2024/04/22 - 19:27:47 | 200 | 50.364082196s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/04/22 - 19:43:45 | 200 | 43.912µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/22 - 19:43:45 | 200 | 497.601µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/22 - 19:43:45 | 200 | 432.359µs | 127.0.0.1 | POST "/api/show"
time=2024-04-22T19:43:46.778+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-22T19:43:46.778+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-22T19:43:46.778+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-22T19:43:46.778+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-22T19:43:46.778+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
time=2024-04-22T19:43:46.778+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
time=2024-04-22T19:43:46.778+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/llm/ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 80
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.55 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 563.62 MiB
llm_load_tensors: CUDA0 buffer size = 37546.98 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 640.00 MiB
llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 266.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 324.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB
llama_new_context_with_model: graph nodes = 2644
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140380754384448","timestamp":1713786234}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140380754384448","timestamp":1713786234}
time=2024-04-22T19:43:54.478+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
[GIN] 2024/04/22 - 19:43:54 | 200 | 8.528894398s | 127.0.0.1 | POST "/api/chat"
{"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373070435904","timestamp":1713786234}
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":93,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269}
{"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269}
{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 432.65 ms / 93 tokens ( 4.65 ms per token, 214.95 tokens per second)","n_prompt_tokens_processed":93,"n_tokens_second":214.95484792522348,"slot_id":0,"t_prompt_processing":432.649,"t_token":4.652139784946237,"task_id":0,"tid":"140373070435904","timestamp":1713786305}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35653.82 ms / 621 runs ( 57.41 ms per token, 17.42 tokens per second)","n_decoded":621,"n_tokens_second":17.417488504738063,"slot_id":0,"t_token":57.41355877616747,"t_token_generation":35653.82,"task_id":0,"tid":"140373070435904","timestamp":1713786305}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 36086.47 ms","slot_id":0,"t_prompt_processing":432.649,"t_token_generation":35653.82,"t_total":36086.469,"task_id":0,"tid":"140373070435904","timestamp":1713786305}
{"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":714,"n_ctx":2048,"n_past":713,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786305,"truncated":false}
[GIN] 2024/04/22 - 19:45:05 | 200 | 36.088775013s | 127.0.0.1 | POST "/api/chat"
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":92,"n_past_se":0,"n_prompt_tokens_processed":731,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317}
{"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":92,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317}
{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 1992.45 ms / 731 tokens ( 2.73 ms per token, 366.89 tokens per second)","n_prompt_tokens_processed":731,"n_tokens_second":366.8853591160221,"slot_id":0,"t_prompt_processing":1992.448,"t_token":2.7256470588235295,"task_id":624,"tid":"140373070435904","timestamp":1713786354}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35060.12 ms / 601 runs ( 58.34 ms per token, 17.14 tokens per second)","n_decoded":601,"n_tokens_second":17.14198207462079,"slot_id":0,"t_token":58.33631114808652,"t_token_generation":35060.123,"task_id":624,"tid":"140373070435904","timestamp":1713786354}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 37052.57 ms","slot_id":0,"t_prompt_processing":1992.448,"t_token_generation":35060.123,"t_total":37052.570999999996,"task_id":624,"tid":"140373070435904","timestamp":1713786354}
{"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1424,"n_ctx":2048,"n_past":1423,"n_system_tokens":0,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786354,"truncated":false}
[GIN] 2024/04/22 - 19:45:54 | 200 | 37.066841427s | 127.0.0.1 | POST "/api/chat"
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":822,"n_past_se":0,"n_prompt_tokens_processed":644,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425}
{"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":822,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425}
{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 1979.55 ms / 644 tokens ( 3.07 ms per token, 325.33 tokens per second)","n_prompt_tokens_processed":644,"n_tokens_second":325.3261343980861,"slot_id":0,"t_prompt_processing":1979.552,"t_token":3.07383850931677,"task_id":1228,"tid":"140373070435904","timestamp":1713786452}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 24798.05 ms / 418 runs ( 59.33 ms per token, 16.86 tokens per second)","n_decoded":418,"n_tokens_second":16.85616273407282,"slot_id":0,"t_token":59.325483253588516,"t_token_generation":24798.052,"task_id":1228,"tid":"140373070435904","timestamp":1713786452}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 26777.60 ms","slot_id":0,"t_prompt_processing":1979.552,"t_token_generation":24798.052,"t_total":26777.604,"task_id":1228,"tid":"140373070435904","timestamp":1713786452}
{"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1884,"n_ctx":2048,"n_past":1883,"n_system_tokens":0,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786452,"truncated":false}
[GIN] 2024/04/22 - 19:47:32 | 200 | 26.799039944s | 127.0.0.1 | POST "/api/chat"
time=2024-04-22T19:56:00.986+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-22T19:56:01.101+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-22T19:56:01.101+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-22T19:56:01.101+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-22T19:56:01.101+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
time=2024-04-22T19:56:01.101+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
time=2024-04-22T19:56:01.101+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/llm/ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 80
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.55 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 563.62 MiB
llm_load_tensors: CUDA0 buffer size = 37546.98 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 640.00 MiB
llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 266.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 324.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB
llama_new_context_with_model: graph nodes = 2644
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140377508013632","timestamp":1713786968}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140377508013632","timestamp":1713786968}
time=2024-04-22T19:56:08.858+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373045257792","timestamp":1713786968}
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":2003,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968}
{"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968}
{"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786976}
{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 5385.63 ms / 2003 tokens ( 2.69 ms per token, 371.92 tokens per second)","n_prompt_tokens_processed":2003,"n_tokens_second":371.9156347539656,"slot_id":0,"t_prompt_processing":5385.63,"t_token":2.6887818272591115,"task_id":0,"tid":"140373045257792","timestamp":1713787009}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35486.33 ms / 591 runs ( 60.04 ms per token, 16.65 tokens per second)","n_decoded":591,"n_tokens_second":16.654301810384602,"slot_id":0,"t_token":60.04454653130287,"t_token_generation":35486.327,"task_id":0,"tid":"140373045257792","timestamp":1713787009}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 40871.96 ms","slot_id":0,"t_prompt_processing":5385.63,"t_token_generation":35486.327,"t_total":40871.956999999995,"task_id":0,"tid":"140373045257792","timestamp":1713787009}
{"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1571,"n_ctx":2048,"n_past":1570,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713787009,"truncated":true}
[GIN] 2024/04/22 - 19:56:49 | 200 | 49.6301983s | 127.0.0.1 | POST "/api/chat"
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":1927,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071}
{"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071}
{"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787084}
{"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787145}
{"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787206}
{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 5242.44 ms / 1927 tokens ( 2.72 ms per token, 367.58 tokens per second)","n_prompt_tokens_processed":1927,"n_tokens_second":367.5772102892625,"slot_id":0,"t_prompt_processing":5242.436,"t_token":2.720516865594188,"task_id":594,"tid":"140373045257792","timestamp":1713787227}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 151033.05 ms / 2517 runs ( 60.01 ms per token, 16.67 tokens per second)","n_decoded":2517,"n_tokens_second":16.66522603280454,"slot_id":0,"t_token":60.005186730234406,"t_token_generation":151033.055,"task_id":594,"tid":"140373045257792","timestamp":1713787227}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 156275.49 ms","slot_id":0,"t_prompt_processing":5242.436,"t_token_generation":151033.055,"t_total":156275.49099999998,"task_id":594,"tid":"140373045257792","timestamp":1713787227}
{"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1375,"n_ctx":2048,"n_past":1374,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787227,"truncated":true}
[GIN] 2024/04/22 - 20:00:27 | 200 | 2m36s | 127.0.0.1 | POST "/api/chat"
time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-22T20:07:41.624+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-22T20:07:41.624+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so
time=2024-04-22T20:07:41.624+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so"
time=2024-04-22T20:07:41.624+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/llm/ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 80
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.31
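
For reference, here is a minimal sketch (not part of the original report) of issuing the same `POST /api/chat` request that appears in the GIN log lines above, assuming the default Ollama port 11434 and the `llama3:70b` tag from the title:

```python
# Minimal reproduction sketch. Assumptions (not from the original report):
# Ollama is listening on the default port 11434, and the model tag is llama3:70b.
# This sends the same POST /api/chat request seen in the GIN logs above.
import json
import urllib.request

payload = {
    "model": "llama3:70b",
    "messages": [{"role": "user", "content": "are you alright"}],
    "stream": False,  # ask for a single JSON reply instead of a token stream
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    # In the non-streaming reply, the assistant text is under message.content.
    print(body["message"]["content"])
```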

> llama_new_context_with_model: n_ctx = 2048 > llama_new_context_with_model: n_batch = 512 > llama_new_context_with_model: n_ubatch = 512 > llama_new_context_with_model: freq_base = 500000.0 > llama_new_context_with_model: freq_scale = 1 > llama_kv_cache_init: CUDA0 KV buffer size = 640.00 MiB > llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB > llama_new_context_with_model: CUDA_Host output buffer size = 266.50 MiB > llama_new_context_with_model: CUDA0 compute buffer size = 324.00 MiB > llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB > llama_new_context_with_model: graph nodes = 2644 > llama_new_context_with_model: graph splits = 2 > {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140380754384448","timestamp":1713786234} > {"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140380754384448","timestamp":1713786234} > time=2024-04-22T19:43:54.478+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop" > [GIN] 2024/04/22 - 19:43:54 | 200 | 8.528894398s | 127.0.0.1 | POST "/api/chat" > {"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373070435904","timestamp":1713786234} > {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269} > {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":93,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269} > {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786269} > {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 432.65 ms / 93 tokens ( 4.65 ms per token, 214.95 tokens per second)","n_prompt_tokens_processed":93,"n_tokens_second":214.95484792522348,"slot_id":0,"t_prompt_processing":432.649,"t_token":4.652139784946237,"task_id":0,"tid":"140373070435904","timestamp":1713786305} > {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35653.82 ms / 621 runs ( 57.41 ms per token, 17.42 tokens per second)","n_decoded":621,"n_tokens_second":17.417488504738063,"slot_id":0,"t_token":57.41355877616747,"t_token_generation":35653.82,"task_id":0,"tid":"140373070435904","timestamp":1713786305} > {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 36086.47 ms","slot_id":0,"t_prompt_processing":432.649,"t_token_generation":35653.82,"t_total":36086.469,"task_id":0,"tid":"140373070435904","timestamp":1713786305} > {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":714,"n_ctx":2048,"n_past":713,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373070435904","timestamp":1713786305,"truncated":false} > [GIN] 2024/04/22 - 19:45:05 | 200 | 36.088775013s | 127.0.0.1 | POST "/api/chat" > {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317} > {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot 
progression","n_past":92,"n_past_se":0,"n_prompt_tokens_processed":731,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317} > {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":92,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786317} > {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 1992.45 ms / 731 tokens ( 2.73 ms per token, 366.89 tokens per second)","n_prompt_tokens_processed":731,"n_tokens_second":366.8853591160221,"slot_id":0,"t_prompt_processing":1992.448,"t_token":2.7256470588235295,"task_id":624,"tid":"140373070435904","timestamp":1713786354} > {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35060.12 ms / 601 runs ( 58.34 ms per token, 17.14 tokens per second)","n_decoded":601,"n_tokens_second":17.14198207462079,"slot_id":0,"t_token":58.33631114808652,"t_token_generation":35060.123,"task_id":624,"tid":"140373070435904","timestamp":1713786354} > {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 37052.57 ms","slot_id":0,"t_prompt_processing":1992.448,"t_token_generation":35060.123,"t_total":37052.570999999996,"task_id":624,"tid":"140373070435904","timestamp":1713786354} > {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1424,"n_ctx":2048,"n_past":1423,"n_system_tokens":0,"slot_id":0,"task_id":624,"tid":"140373070435904","timestamp":1713786354,"truncated":false} > [GIN] 2024/04/22 - 19:45:54 | 200 | 37.066841427s | 127.0.0.1 | POST "/api/chat" > {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425} > {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":822,"n_past_se":0,"n_prompt_tokens_processed":644,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425} > {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":822,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786425} > {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 1979.55 ms / 644 tokens ( 3.07 ms per token, 325.33 tokens per second)","n_prompt_tokens_processed":644,"n_tokens_second":325.3261343980861,"slot_id":0,"t_prompt_processing":1979.552,"t_token":3.07383850931677,"task_id":1228,"tid":"140373070435904","timestamp":1713786452} > {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 24798.05 ms / 418 runs ( 59.33 ms per token, 16.86 tokens per second)","n_decoded":418,"n_tokens_second":16.85616273407282,"slot_id":0,"t_token":59.325483253588516,"t_token_generation":24798.052,"task_id":1228,"tid":"140373070435904","timestamp":1713786452} > {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 26777.60 ms","slot_id":0,"t_prompt_processing":1979.552,"t_token_generation":24798.052,"t_total":26777.604,"task_id":1228,"tid":"140373070435904","timestamp":1713786452} > {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1884,"n_ctx":2048,"n_past":1883,"n_system_tokens":0,"slot_id":0,"task_id":1228,"tid":"140373070435904","timestamp":1713786452,"truncated":false} > [GIN] 2024/04/22 - 19:47:32 | 200 | 26.799039944s | 127.0.0.1 | POST "/api/chat" > time=2024-04-22T19:56:00.986+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > time=2024-04-22T19:56:01.101+08:00 
level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9" > time=2024-04-22T19:56:01.101+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > time=2024-04-22T19:56:01.101+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9" > time=2024-04-22T19:56:01.101+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so > time=2024-04-22T19:56:01.101+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so" > time=2024-04-22T19:56:01.101+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server" > llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/llm/ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b (version GGUF V3 (latest)) > llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. > llama_model_loader: - kv 0: general.architecture str = llama > llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct > llama_model_loader: - kv 2: llama.block_count u32 = 80 > llama_model_loader: - kv 3: llama.context_length u32 = 8192 > llama_model_loader: - kv 4: llama.embedding_length u32 = 8192 > llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672 > llama_model_loader: - kv 6: llama.attention.head_count u32 = 64 > llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8 > llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000 > llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 > llama_model_loader: - kv 10: general.file_type u32 = 2 > llama_model_loader: - kv 11: llama.vocab_size u32 = 128256 > llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128 > llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2 > llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... > llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... > llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... > llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000 > llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001 > llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ... > llama_model_loader: - kv 20: general.quantization_version u32 = 2 > llama_model_loader: - type f32: 161 tensors > llama_model_loader: - type q4_0: 561 tensors > llama_model_loader: - type q6_K: 1 tensors > llm_load_vocab: special tokens definition check successful ( 256/128256 ). 
> llm_load_print_meta: format = GGUF V3 (latest) > llm_load_print_meta: arch = llama > llm_load_print_meta: vocab type = BPE > llm_load_print_meta: n_vocab = 128256 > llm_load_print_meta: n_merges = 280147 > llm_load_print_meta: n_ctx_train = 8192 > llm_load_print_meta: n_embd = 8192 > llm_load_print_meta: n_head = 64 > llm_load_print_meta: n_head_kv = 8 > llm_load_print_meta: n_layer = 80 > llm_load_print_meta: n_rot = 128 > llm_load_print_meta: n_embd_head_k = 128 > llm_load_print_meta: n_embd_head_v = 128 > llm_load_print_meta: n_gqa = 8 > llm_load_print_meta: n_embd_k_gqa = 1024 > llm_load_print_meta: n_embd_v_gqa = 1024 > llm_load_print_meta: f_norm_eps = 0.0e+00 > llm_load_print_meta: f_norm_rms_eps = 1.0e-05 > llm_load_print_meta: f_clamp_kqv = 0.0e+00 > llm_load_print_meta: f_max_alibi_bias = 0.0e+00 > llm_load_print_meta: f_logit_scale = 0.0e+00 > llm_load_print_meta: n_ff = 28672 > llm_load_print_meta: n_expert = 0 > llm_load_print_meta: n_expert_used = 0 > llm_load_print_meta: causal attn = 1 > llm_load_print_meta: pooling type = 0 > llm_load_print_meta: rope type = 0 > llm_load_print_meta: rope scaling = linear > llm_load_print_meta: freq_base_train = 500000.0 > llm_load_print_meta: freq_scale_train = 1 > llm_load_print_meta: n_yarn_orig_ctx = 8192 > llm_load_print_meta: rope_finetuned = unknown > llm_load_print_meta: ssm_d_conv = 0 > llm_load_print_meta: ssm_d_inner = 0 > llm_load_print_meta: ssm_d_state = 0 > llm_load_print_meta: ssm_dt_rank = 0 > llm_load_print_meta: model type = 70B > llm_load_print_meta: model ftype = Q4_0 > llm_load_print_meta: model params = 70.55 B > llm_load_print_meta: model size = 37.22 GiB (4.53 BPW) > llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct > llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' > llm_load_print_meta: EOS token = 128001 '<|end_of_text|>' > llm_load_print_meta: LF token = 128 'Ä' > llm_load_tensors: ggml ctx size = 0.55 MiB > llm_load_tensors: offloading 80 repeating layers to GPU > llm_load_tensors: offloading non-repeating layers to GPU > llm_load_tensors: offloaded 81/81 layers to GPU > llm_load_tensors: CPU buffer size = 563.62 MiB > llm_load_tensors: CUDA0 buffer size = 37546.98 MiB > ................................................................................................... 
> llama_new_context_with_model: n_ctx = 2048 > llama_new_context_with_model: n_batch = 512 > llama_new_context_with_model: n_ubatch = 512 > llama_new_context_with_model: freq_base = 500000.0 > llama_new_context_with_model: freq_scale = 1 > llama_kv_cache_init: CUDA0 KV buffer size = 640.00 MiB > llama_new_context_with_model: KV self size = 640.00 MiB, K (f16): 320.00 MiB, V (f16): 320.00 MiB > llama_new_context_with_model: CUDA_Host output buffer size = 266.50 MiB > llama_new_context_with_model: CUDA0 compute buffer size = 324.00 MiB > llama_new_context_with_model: CUDA_Host compute buffer size = 20.00 MiB > llama_new_context_with_model: graph nodes = 2644 > llama_new_context_with_model: graph splits = 2 > {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140377508013632","timestamp":1713786968} > {"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140377508013632","timestamp":1713786968} > time=2024-04-22T19:56:08.858+08:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop" > {"function":"update_slots","level":"INFO","line":1574,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140373045257792","timestamp":1713786968} > {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968} > {"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":2003,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968} > {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786968} > {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713786976} > {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 5385.63 ms / 2003 tokens ( 2.69 ms per token, 371.92 tokens per second)","n_prompt_tokens_processed":2003,"n_tokens_second":371.9156347539656,"slot_id":0,"t_prompt_processing":5385.63,"t_token":2.6887818272591115,"task_id":0,"tid":"140373045257792","timestamp":1713787009} > {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 35486.33 ms / 591 runs ( 60.04 ms per token, 16.65 tokens per second)","n_decoded":591,"n_tokens_second":16.654301810384602,"slot_id":0,"t_token":60.04454653130287,"t_token_generation":35486.327,"task_id":0,"tid":"140373045257792","timestamp":1713787009} > {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 40871.96 ms","slot_id":0,"t_prompt_processing":5385.63,"t_token_generation":35486.327,"t_total":40871.956999999995,"task_id":0,"tid":"140373045257792","timestamp":1713787009} > {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1571,"n_ctx":2048,"n_past":1570,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140373045257792","timestamp":1713787009,"truncated":true} > [GIN] 2024/04/22 - 19:56:49 | 200 | 49.6301983s | 127.0.0.1 | POST "/api/chat" > {"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071} > 
{"function":"update_slots","ga_i":0,"level":"INFO","line":1805,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":1927,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071} > {"function":"update_slots","level":"INFO","line":1832,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787071} > {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787084} > {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787145} > {"function":"update_slots","level":"INFO","line":1597,"msg":"slot context shift","n_cache_tokens":2048,"n_ctx":2048,"n_discard":1023,"n_keep":0,"n_left":2047,"n_past":2047,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787206} > {"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 5242.44 ms / 1927 tokens ( 2.72 ms per token, 367.58 tokens per second)","n_prompt_tokens_processed":1927,"n_tokens_second":367.5772102892625,"slot_id":0,"t_prompt_processing":5242.436,"t_token":2.720516865594188,"task_id":594,"tid":"140373045257792","timestamp":1713787227} > {"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 151033.05 ms / 2517 runs ( 60.01 ms per token, 16.67 tokens per second)","n_decoded":2517,"n_tokens_second":16.66522603280454,"slot_id":0,"t_token":60.005186730234406,"t_token_generation":151033.055,"task_id":594,"tid":"140373045257792","timestamp":1713787227} > {"function":"print_timings","level":"INFO","line":289,"msg":" total time = 156275.49 ms","slot_id":0,"t_prompt_processing":5242.436,"t_token_generation":151033.055,"t_total":156275.49099999998,"task_id":594,"tid":"140373045257792","timestamp":1713787227} > {"function":"update_slots","level":"INFO","line":1636,"msg":"slot released","n_cache_tokens":1375,"n_ctx":2048,"n_past":1374,"n_system_tokens":0,"slot_id":0,"task_id":594,"tid":"140373045257792","timestamp":1713787227,"truncated":true} > [GIN] 2024/04/22 - 20:00:27 | 200 | 2m36s | 127.0.0.1 | POST "/api/chat" > time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > time=2024-04-22T20:07:41.624+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9" > time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > time=2024-04-22T20:07:41.624+08:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9" > time=2024-04-22T20:07:41.624+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" > loading library /tmp/ollama1459929254/runners/cuda_v11/libext_server.so > time=2024-04-22T20:07:41.624+08:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1459929254/runners/cuda_v11/libext_server.so" > time=2024-04-22T20:07:41.624+08:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server" > llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /root/llm/ollama/models/blobs/sha256-4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b (version GGUF V3 (latest)) > 
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. > llama_model_loader: - kv 0: general.architecture str = llama > llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct > llama_model_loader: - kv 2: llama.block_count u32 = 80 > llama_model_loader: - kv 3: llama.context_length u32 = 8192 > llama_model_loader: - kv 4: llama.embedding_length u32 = 8192 > llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672 > llama_model_loader: - kv 6: llama.attention.head_count u32 = 64 > llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8 > llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000 > llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 > llama_model_loader: - kv 10: general.file_type u32 = 2 > llama_model_loader: - kv 11: llama.vocab_size u32 = 128256 > llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128 > llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2 > llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... > llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... > llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... > llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000 > llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001 > llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ... > llama_model_loader: - kv 20: general.quantization_version u32 = 2 > llama_model_loader: - type f32: 161 tensors > llama_model_loader: - type q4_0: 561 tensors > llama_model_loader: - type q6_K: 1 tensors > llm_load_vocab: special tokens definition check successful ( 256/128256 ). > llm_load_print_meta: format = GGUF V3 (latest) > llm_load_print_meta: arch = llama > llm_load_print_meta: vocab type = BPE > llm_load_print_meta: n_vocab = 128256 > llm_load_print_meta: n_merges = 280147 > llm_load_print_meta: n_ctx_train = 8192 > llm_load_print_meta: n_embd = 8192 > llm_load_print_meta: n_head = 64 > llm_load_print_meta: n_head_kv = 8 > llm_load_print_meta: n_layer = 80 > llm_load_print_meta: n_rot = 128 > llm_load_print_meta: n_embd_head_k = 128 > llm_load_print_meta: n_embd_head_v = 128 > llm_load_print_meta: n_gqa = 8 > llm_load_print_meta: n_embd_k_gqa = 1024 > llm_load_print_meta: n_embd_v_gqa = 1024 > llm_load_print_meta: f_norm_eps = 0.0e+00 > llm_load_print_meta: f_norm_rms_eps = 1.0e-05 > llm_load_print_meta: f_clamp_kqv = 0.0e+00 > llm_load_print_meta: f_max_alibi_bias = 0.0e+00 > llm_load_print_meta: f_logit_scale = 0.0e+00 > llm_load_print_meta: n_ff = 28672 > llm_load_print_meta: n_expert = 0 > llm_load_print_meta: n_expert_used = 0 > llm_load_print_meta: causal attn = 1 > llm_load_print_meta: pooling type = 0 > llm_load_print_meta: rope type = 0 > llm_load_print_meta: rope scaling = linear > llm_load_print_meta: freq_base_train = 500000.0 > llm_load_print_meta: freq_scale_train = 1

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.1.31
GiteaMirror added the bug label 2026-04-28 09:56:37 -05:00

@chigkim commented on GitHub (Apr 22, 2024):

Try 0.1.32. It works nicely for me.
https://github.com/ollama/ollama/releases


@phalexo commented on GitHub (Apr 22, 2024):

Yes, it does exactly this. The only conjecture I have is that it is overrunning its rather small 8K context length.

I have seen this many times when using it with gpt-pilot; it puts garbage into the generated files.

Not sure whether every quantization is affected.

Ollama presumably uses 4-bit quantization by default.
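
For what it's worth, the server log above already answers the quantization question for the default tag: `llm_load_print_meta: model ftype = Q4_0`, i.e. 4-bit. Other quantizations are published as separate tags (e.g. the `llama3:70b-instruct-q4_1` mentioned further down), and `ollama show llama3:70b --modelfile` should reveal which blob a given tag points at, if in doubt.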


@moyix commented on GitHub (Apr 22, 2024):

Also seeing it output gibberish after the conversation gets past 8k tokens. Using dynamic RoPE scaling appears to let it remain coherent up to 32k, but I don't know how to enable that in ollama.


@agj60 commented on GitHub (Apr 22, 2024):

Similar gibberish in llama3:70b, with no recurrent pattern. It just comes out of nowhere. A small sample:

to. is\\\\.
the of\\. to for\\,,\\\\ of are)\\. a , and.\\\\ to,..
the\ of\\\\\\ in i as a., \, in. a.
,. a\\.
..
\\\\
that on\\\\\\ \ for\\ the.\\ a\\ the,
,\\,\\ in is. to of\\ the and\ the\\ or\\ a\,\\\\. at
and […]
. the a.
in,); and\\ and. and\\
the,. in to'\ of\\, of.
. it\\ , by\\ \ the is as in.
\\ in:\\ in., the, to
to on\\\\ of and\\\\, of.
\\ to in in a\\ are a to\\ a,\\ it
to a a\,. the it it\\ it a,,.

\\ a\\ to\\.\\. in, a with., a to\\ to\\\ a. \\\\\,
.'t‍


@pdevine commented on GitHub (Apr 22, 2024):

Hey guys, it's happening when you hit the context size (which is set to 2048). You can increase the context as a workaround with `/set parameter num_ctx 8192`, but it will just hit the limit later (also, this will require more memory). We're working on a fix, which should be out soon.
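
For scale, the server log above reports a 640 MiB KV cache at n_ctx = 2048 for this model (2 caches, K and V, × 80 layers × 1024 GQA dims × 2048 tokens × 2 bytes for f16). Raising num_ctx to 8192 scales that linearly to roughly 2.5 GiB, on top of the ~37 GiB of Q4_0 weights already on the GPU.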


@phalexo commented on GitHub (Apr 22, 2024):

> Hey guys, it's happening when you hit the context size (which is set to 2048). You can increase the context as a workaround with `/set parameter num_ctx 8192`, but it will just hit the limit later (also, this will require more memory). We're working on a fix, which should be out soon.

Is it specific to llama.cpp, ollama, or the model itself?

The model's context itself only goes up to 8K; how would you deal with that?


@pdevine commented on GitHub (Apr 22, 2024):

> Is it specific to llama.cpp, ollama, or the model itself?

The model is fine. The problem happens because the context is set to 2048 by default, which basically chops the conversation apart in the middle. I've been testing with the 8k context size, and it doesn't seem to lose its mind when the context shift happens.
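
The moment this happens is visible in the logs above: each "slot context shift" entry (n_ctx 2048, n_discard 1023) marks the point where roughly half of the cached conversation is thrown away, which is also where the output degenerates.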


@naure commented on GitHub (Apr 23, 2024):

Even when the context stays under 8K, and even under 2K, the answers look like proper English, but the model seems blind to and confused about most of the input once it gets somewhat large. Empirically, setting the correct context length fixes it.

### Quick Fix

Set the length "num_ctx": 8192 in API calls:

```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "options": {
    "num_ctx": 8192
  }
}'
```

Or for example in langchain:

```python
llm = ChatOllama(
    model="llama3",
    num_ctx=8192,
    stop=["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"],
)
```

### Real Fix

This should be added to the Modelfiles:

```
num_ctx 8192
```
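
For anyone who wants to bake this in themselves rather than wait for updated library Modelfiles, a minimal sketch (assuming the stock CLI and the `llama3:70b` tag; note the Modelfile directive is `PARAMETER num_ctx ...`, and the derived model name here is just an example):

```
cat > Modelfile <<'EOF'
FROM llama3:70b
PARAMETER num_ctx 8192
EOF
ollama create llama3-70b-8k -f Modelfile
ollama run llama3-70b-8k
```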

@zereraz commented on GitHub (Apr 28, 2024):

I am on 0.1.32 and the `llama3:70b-instruct-q4_1` version is not working.

```
>>> /set parameter num_ctx 8192
Set parameter 'num_ctx' to '8192'
>>> why is the sky blue
I am a human assistant

>>> hi how are you doing?
assistant
```

@jmorganca commented on GitHub (May 10, 2024):

Hi, this should be improved now, especially as of 0.1.33 with better memory estimation for large models such as Llama 3. Let me know if you're still seeing the issue.
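
On Linux, picking up the newer release is typically just a matter of re-running the install script (assuming the standard install; adjust if you installed Ollama another way):

```
curl -fsSL https://ollama.com/install.sh | sh
ollama -v   # should now report 0.1.33 or later
```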


Reference: github-starred/ollama#48873