[GH-ISSUE #5519] Ultraslow Inference on Chromebook #29208

Closed
opened 2026-04-22 07:54:58 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @tinycrops on GitHub (Jul 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5519

Originally assigned to: @dhiltgen on GitHub.

Update: I used to run ollama on this chromebook when tinyllama came out and it ran great.

What is the issue?

![image](https://github.com/ollama/ollama/assets/13264408/e37d1a70-8d92-4281-88fe-d7c48745980a)

After I install I get this warning:

WARNING: Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.
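
(For what it's worth, this warning only means the install script couldn't find the tools it uses to probe for a discrete GPU; on a GPU-less Chromebook it's harmless. A quick sketch to check whether those tools are present — the `check_tool` helper and the Debian package names are my own assumption for a Crostini container:)

```shell
# check_tool is a hypothetical helper that reports whether one of the
# probing tools named in the warning is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: installed"
  else
    echo "$1: missing (on Debian/Crostini: sudo apt-get install pciutils lshw)"
  fi
}
check_tool lspci   # lspci is provided by the pciutils package
check_tool lshw
```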

Google Chrome: Version 126.0.6478.132 (Official Build) (64-bit)
Platform: 15886.44.0 (Official Build) stable-channel nami
Channel: stable-channel
Firmware Version: Google_Nami.10775.123.0
ARC Enabled: true
ARC: 11931109
Enterprise Enrolled: false
Developer Mode: false

I have a slower chromebook that qwen2:0.5b runs great on.

Jul 06 16:55:56 systemd[1]: Started ollama.service - Ollama Service.
Jul 06 16:55:56 ollama[470]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Jul 06 16:55:56 ollama[470]: 2024/07/06 16:55:56 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FL>
Jul 06 16:55:56 ollama[470]: time=2024-07-06T16:55:56.869-04:00 level=INFO source=images.go:730 msg="total blobs: 0"
Jul 06 16:55:56 ollama[470]: time=2024-07-06T16:55:56.870-04:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
Jul 06 16:55:56 ollama[470]: time=2024-07-06T16:55:56.870-04:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
Jul 06 16:55:56 ollama[470]: time=2024-07-06T16:55:56.871-04:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1676710808/runners
Jul 06 16:56:04 ollama[470]: time=2024-07-06T16:56:04.562-04:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60101]"
Jul 06 16:56:04 ollama[470]: time=2024-07-06T16:56:04.574-04:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="6.5 GiB" available="6.4 GiB"
Jul 06 16:56:07 ollama[470]: [GIN] 2024/07/06 - 16:56:07 | 200 |      66.164µs |       127.0.0.1 | HEAD     "/"
Jul 06 16:56:07 ollama[470]: [GIN] 2024/07/06 - 16:56:07 | 404 |     452.152µs |       127.0.0.1 | POST     "/api/show"
Jul 06 16:56:09 ollama[470]: time=2024-07-06T16:56:09.329-04:00 level=INFO source=download.go:136 msg="downloading 8de95da68dc4 in 4 100 MB part(s)"
Jul 06 16:56:30 ollama[470]: time=2024-07-06T16:56:30.088-04:00 level=INFO source=download.go:136 msg="downloading 62fbfd9ed093 in 1 182 B part(s)"
Jul 06 16:56:31 ollama[470]: time=2024-07-06T16:56:31.820-04:00 level=INFO source=download.go:136 msg="downloading c156170b718e in 1 11 KB part(s)"
Jul 06 16:56:33 ollama[470]: time=2024-07-06T16:56:33.488-04:00 level=INFO source=download.go:136 msg="downloading f02dd72bb242 in 1 59 B part(s)"
Jul 06 16:56:35 ollama[470]: time=2024-07-06T16:56:35.152-04:00 level=INFO source=download.go:136 msg="downloading 2184ab82477b in 1 488 B part(s)"
Jul 06 16:56:37 ollama[470]: [GIN] 2024/07/06 - 16:56:37 | 200 | 30.055385932s |       127.0.0.1 | POST     "/api/pull"
Jul 06 16:56:37 ollama[470]: [GIN] 2024/07/06 - 16:56:37 | 200 |    75.64794ms |       127.0.0.1 | POST     "/api/show"
Jul 06 16:56:37 ollama[470]: time=2024-07-06T16:56:37.747-04:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[6.4 G>
Jul 06 16:56:37 ollama[470]: time=2024-07-06T16:56:37.748-04:00 level=INFO source=server.go:368 msg="starting llama server" cmd="/tmp/ollama1676710808/runners/cpu_avx2/ollama_llama_server --model /usr/share/oll>
Jul 06 16:56:37 ollama[470]: time=2024-07-06T16:56:37.774-04:00 level=INFO source=sched.go:382 msg="loaded runners" count=1
Jul 06 16:56:37 ollama[470]: time=2024-07-06T16:56:37.774-04:00 level=INFO source=server.go:556 msg="waiting for llama runner to start responding"
Jul 06 16:56:37 ollama[470]: time=2024-07-06T16:56:37.774-04:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
Jul 06 16:56:37 ollama[777]: INFO [main] build info | build=1 commit="7c26775" tid="140636357584768" timestamp=1720299397
Jul 06 16:56:37 ollama[777]: INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 >
Jul 06 16:56:37 ollama[777]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="3" port="45019" tid="140636357584768" timestamp=1720299397
Jul 06 16:56:37 ollama[470]: llama_model_loader: loaded meta data with 21 key-value pairs and 290 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc>
Jul 06 16:56:37 ollama[470]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   1:                               general.name str              = Qwen2-0.5B-Instruct
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   2:                          qwen2.block_count u32              = 24
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 896
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 4864
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 14
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 2
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 1000000.000000
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Jul 06 16:56:37 ollama[470]: llama_model_loader: - kv  10:                          general.file_type u32              = 2
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = qwen2
Jul 06 16:56:38 ollama[470]: time=2024-07-06T16:56:38.029-04:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 151645
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 151643
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151643
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% for message in messages %}{% if lo...
Jul 06 16:56:38 ollama[470]: llama_model_loader: - kv  20:               general.quantization_version u32              = 2
Jul 06 16:56:38 ollama[470]: llama_model_loader: - type  f32:  121 tensors
Jul 06 16:56:38 ollama[470]: llama_model_loader: - type q4_0:  168 tensors
Jul 06 16:56:38 ollama[470]: llama_model_loader: - type q8_0:    1 tensors
Jul 06 16:56:38 ollama[470]: llm_load_vocab: special tokens cache size = 293
Jul 06 16:56:38 ollama[470]: llm_load_vocab: token to piece cache size = 0.9338 MB
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: format           = GGUF V3 (latest)
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: arch             = qwen2
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: vocab type       = BPE
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_vocab          = 151936
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_merges         = 151387
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_ctx_train      = 32768
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_embd           = 896
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_head           = 14
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_head_kv        = 2
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_layer          = 24
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_rot            = 64
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_embd_head_k    = 64
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_embd_head_v    = 64
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_gqa            = 7
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_embd_k_gqa     = 128
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_embd_v_gqa     = 128
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_ff             = 4864
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_expert         = 0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_expert_used    = 0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: causal attn      = 1
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: pooling type     = 0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: rope type        = 2
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: rope scaling     = linear
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: freq_base_train  = 1000000.0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: freq_scale_train = 1
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: n_ctx_orig_yarn  = 32768
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: rope_finetuned   = unknown
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: ssm_d_conv       = 0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: ssm_d_inner      = 0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: ssm_d_state      = 0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: ssm_dt_rank      = 0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: model type       = 1B
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: model ftype      = Q4_0
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: model params     = 494.03 M
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: model size       = 330.17 MiB (5.61 BPW)
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: general.name     = Qwen2-0.5B-Instruct
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Jul 06 16:56:38 ollama[470]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
Jul 06 16:56:38 ollama[470]: llm_load_tensors: ggml ctx size =    0.14 MiB
Jul 06 16:56:39 ollama[470]: llm_load_tensors:        CPU buffer size =   330.17 MiB
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: n_ctx      = 2048
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: n_batch    = 512
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: n_ubatch   = 512
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: flash_attn = 0
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: freq_base  = 1000000.0
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: freq_scale = 1
Jul 06 16:56:39 ollama[470]: llama_kv_cache_init:        CPU KV buffer size =    24.00 MiB
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: KV self size  =   24.00 MiB, K (f16):   12.00 MiB, V (f16):   12.00 MiB
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model:        CPU  output buffer size =     0.58 MiB
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model:        CPU compute buffer size =   298.50 MiB
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: graph nodes  = 846
Jul 06 16:56:39 ollama[470]: llama_new_context_with_model: graph splits = 1
Jul 06 16:57:00 ollama[777]: INFO [main] model loaded | tid="140636357584768" timestamp=1720299420
Jul 06 16:57:01 ollama[470]: time=2024-07-06T16:57:01.140-04:00 level=INFO source=server.go:599 msg="llama runner started in 23.37 seconds"
Jul 06 16:57:01 ollama[470]: [GIN] 2024/07/06 - 16:57:01 | 200 | 23.451118015s |       127.0.0.1 | POST     "/api/chat"
Jul 06 16:59:26 ollama[470]: [GIN] 2024/07/06 - 16:59:26 | 200 |         2m15s |       127.0.0.1 | POST     "/api/chat"

OS

Linux

GPU

No response

CPU

No response

Ollama version

latest

GiteaMirror added the needs more info, bug labels 2026-04-22 07:54:58 -05:00
Author
Owner

@Moonlight1220 commented on GitHub (Jul 7, 2024):

This may be a hardware issue: most Chromebooks don't have a discrete GPU, only integrated graphics and a limited amount of memory. I would suggest running it on a VM if you have access to one; if not, try running an older LLM. If none of those work, I would install Ubuntu. If you need any more assistance, please let me know!


@dhiltgen commented on GitHub (Jul 24, 2024):

@MeDott29 can you share the server log of the slower run? Perhaps it's using the runner without any AVX extensions, which will yield much slower performance. Can you also share the following information from both systems?

cat /proc/cpuinfo  | grep ^flags | tail -1

@tinycrops commented on GitHub (Jul 24, 2024):

The server logs for the problematic system are the ones I posted in my original comment above.

Problematic system:

flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves arat vnmi umip md_clear arch_capabilities
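
(Side note: those flags include both avx and avx2, which matches the cpu_avx2 runner started in the server log above, so this machine isn't falling back to the baseline no-AVX runner. A small sketch of that selection logic — the `runner_variant` helper is hypothetical; the variant names come from the "Dynamic LLM libraries" log line:)

```shell
# runner_variant (hypothetical helper) mirrors how Ollama picks a CPU
# runner build: cpu_avx2 if the flags line has avx2, cpu_avx if it only
# has avx, otherwise the baseline cpu build.
runner_variant() {
  case " $1 " in
    *" avx2 "*) echo cpu_avx2 ;;
    *" avx "*)  echo cpu_avx ;;
    *)          echo cpu ;;
  esac
}
runner_variant "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2-)"
```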

![Screen recording 2024-07-24 2 53 43 PM](https://github.com/user-attachments/assets/e9f7c5cc-aabb-41da-93ca-4c587e0719f5)

This system has less capable hardware (but runs Ollama faster)
(There was no terminal output when I ran

cat /proc/cpuinfo  | grep ^flags | tail -1

so I just ran

cat /proc/cpuinfo

instead:)

processor       : 0
BogoMIPS        : 26.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 2

processor       : 1
BogoMIPS        : 26.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 2

processor       : 2
BogoMIPS        : 26.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 0

processor       : 3
BogoMIPS        : 26.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 0
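
(An aside on why the grep printed nothing: on ARM, /proc/cpuinfo labels the capability line "Features" rather than "flags", as the listing above shows. A portable variant — the `cap_line` helper name is my own:)

```shell
# cap_line prints the last CPU capability line from a cpuinfo-style
# file, matching "flags" (x86) or "Features" (ARM).
cap_line() { grep -E '^(flags|Features)' "$1" | tail -1; }
cap_line /proc/cpuinfo
```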

@dhiltgen commented on GitHub (Jul 26, 2024):

It looks like you're comparing ARM vs. x86 systems, so I'm not sure the assumption of "less capable" necessarily maps.

Can you share the token rate for a smaller model on these two systems? Something like...

% ollama run orca-mini --verbose hello

@tinycrops commented on GitHub (Aug 1, 2024):

I ran

ollama run tinyllama --verbose hello

Older machine
![image](https://github.com/user-attachments/assets/ac49d026-b7fd-4bee-9d21-49fa1720958a)

Newer, buggy machine
with tinyllama
![image](https://github.com/user-attachments/assets/f2adf322-837a-4284-98d4-7773793a47c3)
and orca
![image](https://github.com/user-attachments/assets/3d539ee1-2829-4be6-91f8-892b2a3b4180)

I'm curious to know what's going on, though it's obviously not a critical issue, since there are a hundred other much faster, free environments I can run a model on. But if you point out what the problem might be, I'll check it out.


@dhiltgen commented on GitHub (Aug 1, 2024):

I don't think you shared the server logs from the slower x86 system, but it's possible the thread count is getting set to a value that's under/over utilizing the system. While inference is going, are all cores at 100%? If not, playing with num_thread and adjusting how many threads we allocate might improve the situation.
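
(If anyone wants to experiment along those lines, one way is to pin the thread count with a Modelfile and compare eval rates. Sketch only: `qwen2-4t` is a made-up tag, and 4 should be swapped for the machine's physical core count while watching `top`.)

```shell
# Create a model variant with a fixed thread count. num_thread is a
# standard Ollama Modelfile parameter; afterwards, build and compare:
#   ollama create qwen2-4t -f Modelfile
#   ollama run qwen2-4t --verbose hello
cat > Modelfile <<'EOF'
FROM qwen2:0.5b
PARAMETER num_thread 4
EOF
```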


@Moonlight1220 commented on GitHub (Aug 10, 2024):

What are the specs of your machine?


@dhiltgen commented on GitHub (Sep 5, 2024):

If this is still a concern, please share a server log of the same model loading between the two systems and that might help us understand if there's a bug somewhere, or just the inherent performance difference between these two CPUs.


Reference: github-starred/ollama#29208