[GH-ISSUE #2157] Incoherent latency on ARM machine #1231

Closed
opened 2026-04-12 11:00:15 -05:00 by GiteaMirror · 6 comments

Originally created by @racso-dev on GitHub (Jan 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2157

I deployed mistral:7b on an ARM instance of Scaleway, with 32 vCPUs and 128GB of memory. I can't figure out why the inference times are on the order of several minutes and was wondering if you had any idea of the cause of the problem, and a potential solution.

For the record, I installed ollama via `curl https://ollama.ai/install.sh | sh`.
And if you need more details about the machine I used, it's the biggest ARM instance available on Scaleway, the COPARM1-32-128G instance. You can find more information here: https://www.scaleway.com/en/cost-optimized-instances-based-on-arm/

I also tried bigger models, and one thing I noticed was that when my inference was running, the memory being used was surprisingly low, around 2GB out of the 128GB available, and that about half of the 32 available cores were used.

Would be wonderful if anyone had an idea on how to solve this!


@easp commented on GitHub (Jan 23, 2024):

> I also tried bigger models, and one thing I noticed was that when my inference was running, the memory being used was surprisingly low, around 2GB out of the 128GB available, and that about half of the 32 available cores were used.

Models are mmap-ed and are accounted for in the file cache rather than the ollama process. Inference is limited by RAM bandwidth rather than compute, so ollama/llama.cpp generally chooses 1/2 the number of CPUs. You can change this by setting num_thread manually in a Modelfile, or inside the CLI with `/set parameter num_thread`, but the people I've seen try it don't find much more performance, and what they do find isn't far from the default.
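
For anyone who wants to try it, here's a minimal sketch of the Modelfile route (the `mistral-32t` tag is just an example name):

```bash
# Build a variant of the model with num_thread pinned, then run it.
cat > Modelfile <<'EOF'
FROM mistral:7b
PARAMETER num_thread 32
EOF
ollama create mistral-32t -f Modelfile
ollama run mistral-32t
```

Inside an interactive `ollama run` session, `/set parameter num_thread 32` applies the same override without building a new model.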

As for why inference times are several minutes, is that several minutes before you get the first token, or several minutes to finish generating tokens? How big is your prompt? What timing information do you get if you start the CLI with the `--verbose` flag, or use `/set verbose` once you are already in the CLI?
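
For reference, a quick sketch of how to get that timing info (load duration, prompt eval rate, eval rate); the model tag is just an example:

```bash
# Run interactively and print per-response timings after each answer.
ollama run mistral:7b --verbose

# Or, from inside an already-running session:
#   >>> /set verbose
```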

It looks like the ARM instances probably run on 128-core machines with 8 DDR4 channels. If it's not overprovisioned, 32 cores should get you two channels' worth of memory bandwidth, which works out to about 35GB/s. That should get you about 10 tokens/s with a q4 quantization of a 7b model.
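
The rough reasoning behind that estimate (numbers assumed from above, not measured): each generated token has to stream essentially the whole quantized weight file, so the token rate is roughly bandwidth divided by model size.

```bash
# Back-of-the-envelope: ~35 GB/s over a ~3.8 GiB Q4_0 model,
# ignoring GB-vs-GiB rounding and KV-cache traffic.
python3 -c 'print(f"{35 / 3.83:.1f} tokens/s")'   # ≈ 9 tokens/s
```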

I suspect that in a virtualized environment your available RAM bandwidth may be cut if you are only using half the available cores, so in your case I'd suggest trying to set num_thread to 32 to see if that helps.


@racso-dev commented on GitHub (Jan 23, 2024):

First of all, I'd like to thank you for your reply.
Secondly, regarding your advice on the num_thread parameter, it considerably improves inference time.
What I call inference time is in fact the time it takes for the model to respond in full, i.e. from the moment I send my request to the moment the model finishes responding.

My use case certainly has an impact on the inference time, but I suppose that with a small model, it's not really normal to see such latency with specs like those of the COPARM1-32C-128G.
To clarify the use case, it's simply a RAG on 2 pdf documents totalling 40MB.
I ask a simple question about ten tokens long.
I use ollama-webui (https://github.com/ollama-webui/ollama-webui/) to achieve this.

Concerning the logs, here are the ones corresponding to the use case with the num_thread parameter set to 32, as you advised:

janv. 23 18:43:14 autoscribe-llm ollama[8602]: 2024/01/23 18:43:14 llm.go:71: GPU not available, falling back to CPU
janv. 23 18:43:14 autoscribe-llm ollama[8602]: 2024/01/23 18:43:14 ext_server_common.go:136: Initializing internal llama server
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f1>
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  4096, 32000,     1,     1 ]
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
... TENSORS ...
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - tensor  290:               output_norm.weight f32      [  4096,     1,     1,     1 ]
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   0:                       general.architecture str              = llama
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   1:                               general.name str              = mistralai
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   4:                          llama.block_count u32              = 32
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  11:                          general.file_type u32              = 2
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - kv  23:               general.quantization_version u32              = 2
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - type  f32:   65 tensors
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - type q4_0:  225 tensors
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llama_model_loader: - type q6_K:    1 tensors
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_vocab: special tokens definition check successful ( 259/32000 ).
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: format           = GGUF V3 (latest)
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: arch             = llama
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: vocab type       = SPM
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_vocab          = 32000
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_merges         = 0
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_ctx_train      = 32768
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_embd           = 4096
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_head           = 32
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_head_kv        = 8
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_layer          = 32
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_rot            = 128
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_gqa            = 4
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: f_norm_eps       = 0.0e+00
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_ff             = 14336
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_expert         = 0
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_expert_used    = 0
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: rope scaling     = linear
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: freq_base_train  = 1000000.0
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: freq_scale_train = 1
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: n_yarn_orig_ctx  = 32768
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: rope_finetuned   = unknown
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: model type       = 7B
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: model ftype      = Q4_0
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: model params     = 7.24 B
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW)
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: general.name     = mistralai
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: BOS token        = 1 '<s>'
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: EOS token        = 2 '</s>'
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: UNK token        = 0 '<unk>'
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_print_meta: LF token         = 13 '<0x0A>'
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_tensors: ggml ctx size =    0.11 MiB
janv. 23 18:43:14 autoscribe-llm ollama[8602]: llm_load_tensors: mem required  = 3917.98 MiB
janv. 23 18:43:15 autoscribe-llm ollama[8602]: ...................................................................................................
janv. 23 18:43:15 autoscribe-llm ollama[8602]: llama_new_context_with_model: n_ctx      = 2048
janv. 23 18:43:15 autoscribe-llm ollama[8602]: llama_new_context_with_model: freq_base  = 1000000.0
janv. 23 18:43:15 autoscribe-llm ollama[8602]: llama_new_context_with_model: freq_scale = 1
janv. 23 18:43:15 autoscribe-llm ollama[8602]: llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
janv. 23 18:43:15 autoscribe-llm ollama[8602]: llama_build_graph: non-view tensors processed: 676/676
janv. 23 18:43:15 autoscribe-llm ollama[8602]: llama_new_context_with_model: compute buffer total size = 159.19 MiB
janv. 23 18:43:15 autoscribe-llm ollama[8602]: 2024/01/23 18:43:15 ext_server_common.go:144: Starting internal llama main loop
janv. 23 18:43:15 autoscribe-llm ollama[8602]: 2024/01/23 18:43:15 ext_server_common.go:158: loaded 0 images
janv. 23 18:44:29 autoscribe-llm ollama[8602]: [GIN] 2024/01/23 - 18:44:29 | 200 |         1m15s |    195.68.70.34 | POST     "/api/chat"
janv. 23 18:44:29 autoscribe-llm ollama[8602]: 2024/01/23 18:44:29 ext_server_common.go:158: loaded 0 images

@easp commented on GitHub (Jan 23, 2024):

I experimented a bit with ollama-webui's RAG. In my tests it sends between 1k and 2k tokens to the LLM. I don't have a strong sense of what sort of prompt processing speeds to expect from those CPUs, but I think 20-40 tokens/second is a reasonable assumption. That could mean roughly 1 to 1.5 minutes just to process the prompt.
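
As a rough check against the 1m15s request in the log above (the prompt size here is assumed, not measured):

```bash
# ~1,500 prompt tokens at 20-40 tokens/s is roughly 37-75 s of prompt
# processing alone, before any of the answer is generated.
python3 -c 'print(1500/40, 1500/20)'   # 37.5 75.0 (seconds)
```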

That VM is, in the ways that matter to LLM performance, on par with the CPU in a 4-year-old midrange PC.

Now that you've adjusted the thread parameter, the speeds seem in line with the capabilities of the resource you are using.


@racso-dev commented on GitHub (Jan 24, 2024):

The inference time for my usecase with the thread parameter set to 32 is indeed around 1 minute.

So if I understand correctly, it's a normal inference time given the specs of the machine, and there's not really anything else that can be done to improve it?

I'm not at all questioning your expertise, but it seems strange that this is the best we can get with this machine, given that Scaleway advertises these machines as a viable alternative for doing inference at a fraction of the price thanks to the ARM architecture, don't you agree?


@easp commented on GitHub (Jan 24, 2024):

LLMs are demanding in ways that other AI inference workloads aren't. They are bottlenecked by memory bandwidth. The AI workloads that Scaleway and Ampere cite in their PR don't appear to be as memory intensive.

I'm not sure the Ollama devs have invested much in optimized builds for arm64 on Linux, but I'm not sure that's really an issue for you, given that your observations are in line with predictions based on the memory bandwidth available to you.

Perhaps Scaleway's support would be interested in investing a little effort in optimized builds for their platform.


@racso-dev commented on GitHub (Jan 25, 2024):

Okkk, got it. Thanks for the information and your time ;)

Reference: github-starred/ollama#1231