[GH-ISSUE #6866] High CPU load with Jetson Orin NX #66373

Closed
opened 2026-05-04 03:15:16 -05:00 by GiteaMirror · 7 comments

Originally created by @s0301132 on GitHub (Sep 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6866

What is the issue?

I'm using the arm64 build package and it runs successfully. However, while the LLM is answering a question, the CPU load is 100% but the GPU is nearly 0% in `jtop`. Is this normal, or can the arm64 build not use the GPU by default?
Screenshot from 2024-09-18 19-20-47: https://github.com/user-attachments/assets/3d92ceeb-c320-4531-be34-1cd0542475bd
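
For context: Jetson boards don't ship `nvidia-smi`, so GPU utilization is usually watched with NVIDIA's built-in `tegrastats` or with `jtop` from the jetson-stats package (as used above). A minimal sketch:

```shell
# NVIDIA's built-in monitor; GR3D_FREQ is the GPU load
sudo tegrastats

# Or install jetson-stats, which provides the jtop monitor
sudo pip3 install -U jetson-stats
sudo jtop
```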

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.3.11

GiteaMirror added the bug label 2026-05-04 03:15:16 -05:00

@rick-github commented on GitHub (Sep 19, 2024):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging. What's the output of `nvidia-smi`?

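On a default systemd-based install, the server logs that the troubleshooting guide refers to can be pulled with:

```shell
journalctl -u ollama --no-pager
```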

@s0301132 commented on GitHub (Sep 20, 2024):

After digging through the logs and reinstalling everything, it can now detect the GPU, but it still times out every time.
(Installed with https://github.com/ollama/ollama/blob/ca6f3760fbdaa91644fff355f315f1d7ebe8ba08/scripts/install.sh)
The log:

```
Sep 20 14:34:55 ubuntu systemd[1]: ollama.service: Consumed 5min 42.309s CPU time.
Sep 20 14:34:58 ubuntu systemd[1]: Started Ollama Service.
Sep 20 14:34:58 ubuntu ollama[85698]: 2024/09/20 14:34:58 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Sep 20 14:34:58 ubuntu ollama[85698]: time=2024-09-20T14:34:58.450+08:00 level=INFO source=images.go:753 msg="total blobs: 10"
Sep 20 14:34:58 ubuntu ollama[85698]: time=2024-09-20T14:34:58.450+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
Sep 20 14:34:58 ubuntu ollama[85698]: time=2024-09-20T14:34:58.450+08:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.11)"
Sep 20 14:34:58 ubuntu ollama[85698]: time=2024-09-20T14:34:58.451+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama3292973111/runners
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.249+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cuda_v11 cuda_v12]"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.249+08:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.337+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-cf0b9e33-9cc9-5fe4-b715-fea6a60edc54 library=cuda variant=jetpack6 compute=8.7 driver=12.2 name=Orin total="15.3 GiB" available="10.6 GiB"
Sep 20 14:35:12 ubuntu ollama[85698]: [GIN] 2024/09/20 - 14:35:12 | 200 |      61.954µs |       127.0.0.1 | HEAD     "/"
Sep 20 14:35:12 ubuntu ollama[85698]: [GIN] 2024/09/20 - 14:35:12 | 200 |   27.347237ms |       127.0.0.1 | POST     "/api/show"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.511+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe gpu=GPU-cf0b9e33-9cc9-5fe4-b715-fea6a60edc54 parallel=4 available=11325747200 required="6.2 GiB"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.511+08:00 level=INFO source=server.go:103 msg="system memory" total="15.3 GiB" free="10.5 GiB" free_swap="7.6 GiB"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.512+08:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[10.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.513+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama3292973111/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 38765"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.514+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.514+08:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.514+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
Sep 20 14:35:12 ubuntu ollama[85839]: INFO [main] build info | build=10 commit="a2e0145" tid="281473422542912" timestamp=1726814112
Sep 20 14:35:12 ubuntu ollama[85839]: INFO [main] system info | n_threads=8 n_threads_batch=8 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="281473422542912" timestamp=1726814112 total_threads=8
Sep 20 14:35:12 ubuntu ollama[85839]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="38765" tid="281473422542912" timestamp=1726814112
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   1:                               general.type str              = model
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   5:                         general.size_label str              = 8B
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   6:                            general.license str              = llama3.1
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv   9:                          llama.block_count u32              = 32
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  17:                          general.file_type u32              = 2
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - kv  28:               general.quantization_version u32              = 2
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - type  f32:   66 tensors
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - type q4_0:  225 tensors
Sep 20 14:35:12 ubuntu ollama[85698]: llama_model_loader: - type q6_K:    1 tensors
Sep 20 14:35:12 ubuntu ollama[85698]: time=2024-09-20T14:35:12.765+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_vocab: special tokens cache size = 256
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_vocab: token to piece cache size = 0.7999 MB
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: format           = GGUF V3 (latest)
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: arch             = llama
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: vocab type       = BPE
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_vocab          = 128256
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_merges         = 280147
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: vocab_only       = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_ctx_train      = 131072
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_embd           = 4096
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_layer          = 32
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_head           = 32
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_head_kv        = 8
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_rot            = 128
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_swa            = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_embd_head_k    = 128
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_embd_head_v    = 128
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_gqa            = 4
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_embd_k_gqa     = 1024
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_embd_v_gqa     = 1024
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_ff             = 14336
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_expert         = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_expert_used    = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: causal attn      = 1
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: pooling type     = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: rope type        = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: rope scaling     = linear
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: freq_base_train  = 500000.0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: freq_scale_train = 1
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: rope_finetuned   = unknown
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: ssm_d_conv       = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: ssm_d_inner      = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: ssm_d_state      = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: ssm_dt_rank      = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: model type       = 8B
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: model ftype      = Q4_0
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: model params     = 8.03 B
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: LF token         = 128 'Ä'
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
Sep 20 14:35:13 ubuntu ollama[85698]: llm_load_print_meta: max token length = 256
Sep 20 14:35:13 ubuntu ollama[85698]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 20 14:35:13 ubuntu ollama[85698]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 20 14:35:13 ubuntu ollama[85698]: ggml_cuda_init: found 1 CUDA devices:
Sep 20 14:35:13 ubuntu ollama[85698]:   Device 0: Orin, compute capability 8.7, VMM: yes
Sep 20 14:38:40 ubuntu ollama[85698]: llm_load_tensors: ggml ctx size =    0.27 MiB
Sep 20 14:38:43 ubuntu ollama[85698]: llm_load_tensors: offloading 32 repeating layers to GPU
Sep 20 14:38:43 ubuntu ollama[85698]: llm_load_tensors: offloading non-repeating layers to GPU
Sep 20 14:38:43 ubuntu ollama[85698]: llm_load_tensors: offloaded 33/33 layers to GPU
Sep 20 14:38:43 ubuntu ollama[85698]: llm_load_tensors:        CPU buffer size =   281.81 MiB
Sep 20 14:38:43 ubuntu ollama[85698]: llm_load_tensors:      CUDA0 buffer size =  4156.00 MiB
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: n_ctx      = 8192
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: n_batch    = 512
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: n_ubatch   = 512
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: flash_attn = 0
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: freq_base  = 500000.0
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: freq_scale = 1
Sep 20 14:38:44 ubuntu ollama[85698]: llama_kv_cache_init:      CUDA0 KV buffer size =  1024.00 MiB
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model:  CUDA_Host  output buffer size =     2.02 MiB
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model:      CUDA0 compute buffer size =   560.00 MiB
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model:  CUDA_Host compute buffer size =    24.01 MiB
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: graph nodes  = 1030
Sep 20 14:38:44 ubuntu ollama[85698]: llama_new_context_with_model: graph splits = 2
Sep 20 14:43:44 ubuntu ollama[85698]: time=2024-09-20T14:43:44.750+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 1.00 - "
Sep 20 14:43:44 ubuntu ollama[85698]: [GIN] 2024/09/20 - 14:43:44 | 500 |         8m32s |       127.0.0.1 | POST     "/api/generate"
Sep 20 14:43:49 ubuntu ollama[85698]: time=2024-09-20T14:43:49.849+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.098725631 model=/usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
Sep 20 14:43:50 ubuntu ollama[85698]: time=2024-09-20T14:43:50.099+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.348255973 model=/usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
Sep 20 14:43:50 ubuntu ollama[85698]: time=2024-09-20T14:43:50.349+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.598784823 model=/usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
```

Since Jetson doesn't support `nvidia-smi`, I use `jtop` to check the CPU/GPU status.
Screenshot from 2024-09-20 14-53-18: https://github.com/user-attachments/assets/1a2b9a9e-7382-4b24-b012-6d1232865455
It only uses a single thread during setup, so it always times out when starting the ollama server.
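
One hedged workaround (not suggested in the original thread): the config dump above shows `OLLAMA_LOAD_TIMEOUT:5m0s`, while the tensor load alone took over three minutes and the whole request 8m32s, so raising the load timeout on the systemd service may at least let the runner finish starting. A sketch:

```shell
# Sketch: raise the runner load timeout (default 5m, per the config dump above)
sudo systemctl edit ollama.service
# In the override that opens, add:
#   [Service]
#   Environment="OLLAMA_LOAD_TIMEOUT=15m"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```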

Another thing I noticed: when using install.sh to install ollama, it didn't detect the NVIDIA JetPack and didn't download the JetPack bundle, just like https://github.com/ollama/ollama/pull/6400#issuecomment-2336398350

The install.sh in the main branch doesn't have any Jetson JetPack detection; it only appears in branch ca6f376:
https://github.com/ollama/ollama/blob/ca6f3760fbdaa91644fff355f315f1d7ebe8ba08/scripts/install.sh

@dhiltgen is it normal that install.sh is missing the JetPack detection?
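
For illustration only (this is not necessarily how ollama's install.sh does or should do it): a shell installer can detect a JetPack system by checking for the L4T release file that JetPack ships:

```shell
# Hypothetical sketch: JetPack (L4T) systems ship /etc/nv_tegra_release,
# whose first line looks like "# R36 (release), REVISION: 2.0, ..."
if [ -f /etc/nv_tegra_release ]; then
    L4T_RELEASE=$(sed -n 's/^# R\([0-9]*\).*/\1/p' /etc/nv_tegra_release)
    echo "Jetson detected (L4T R${L4T_RELEASE}); install the JetPack bundle"
else
    echo "Not a Jetson system; install the standard Linux bundle"
fi
```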


@litao-zhx commented on GitHub (Sep 20, 2024):

The LLM deployed on Jetson seems to be running on the CPU. I executed the command `ollama run qwen2:0.5b`, but the startup keeps timing out. Have you encountered the same situation?


@JIANGTUNAN commented on GitHub (Sep 20, 2024):

I had the same problem. Looking at the logs, it seems to detect the GPU and CUDA, but it's the CPU that keeps getting hogged, which is weird.

Screenshot: https://github.com/user-attachments/assets/831697c0-9f7b-4d46-8766-b7761575ab5b
Screenshot: https://github.com/user-attachments/assets/8dc35399-b79f-4d8b-97f7-4ddcc71b6f1e

```
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 7B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 7B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 7B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 28
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 3584
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 18944
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 28
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 4
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["? ?", "?? ??", "i n", "? t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_K:  169 tensors
llama_model_loader: - type q6_K:   29 tensors
time=2024-09-20T10:54:03.763Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 28
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 512
llm_load_print_meta: n_embd_v_gqa     = 512
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18944
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 7.62 B
llm_load_print_meta: model size       = 4.36 GiB (4.91 BPW) 
llm_load_print_meta: general.name     = Qwen2.5 7B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 '??'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Orin, compute capability 8.7, VMM: yes
llm_load_tensors: ggml ctx size =    0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =   292.36 MiB
llm_load_tensors:      CUDA0 buffer size =  4168.09 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   448.00 MiB
llama_new_context_with_model: KV self size  =  448.00 MiB, K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.38 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   492.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    23.01 MiB
llama_new_context_with_model: graph nodes  = 986
llama_new_context_with_model: graph splits = 2
```


@JIANGTUNAN commented on GitHub (Sep 21, 2024):

I used this image to make good use of the GPU: https://hub.docker.com/r/dustynv/ollama

```shell
docker run --runtime nvidia -d -v ~/ollama:/root/.ollama -p 11434:11434 --name ollama dustynv/ollama:r36.2.0 /bin/ollama serve
```

Screenshot: https://github.com/user-attachments/assets/125082b4-f9c1-42fd-bb4a-5f4adaca57f0
Screenshot: https://github.com/user-attachments/assets/4186386e-7394-44c1-a978-78c39eb7ab29
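
To confirm a container like this actually uses the GPU (a sketch, assuming the container name `ollama` from the command above and `tegrastats` on the host):

```shell
# Look for a CUDA device line in the server log inside the container
docker logs ollama 2>&1 | grep -i cuda

# Run a model, then watch GPU load on the host in another terminal
docker exec -it ollama ollama run qwen2:0.5b
sudo tegrastats   # GR3D_FREQ should climb above 0% during generation
```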


@s0301132 commented on GitHub (Sep 21, 2024):

Thanks @JIANGTUNAN. I think this is the only option for now; I can't run the server even when building it from source.
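
For reference, the from-source build at the time of this thread was roughly the following, per ollama's development docs of that era; treat it as a sketch, since the build system has changed since:

```shell
# 2024-era source build (the CUDA runner is generated if the toolkit is found)
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...
go build .
./ollama serve
```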


@dhiltgen commented on GitHub (Sep 25, 2024):

Duplicate of #2408, fixed by PR #6400.

Reference: github-starred/ollama#66373