[GH-ISSUE #7161] Problem loading an LLM model on the Jetson AGX Orin Developer Kit (64GB) #4544

Closed
opened 2026-04-12 15:28:48 -05:00 by GiteaMirror · 2 comments

Originally created by @witold-gren on GitHub (Oct 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7161

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hey, thanks for your great work on this project. I use it on a regular computer with an RTX 4090 card and everything works very well. However, I have a problem with my NVIDIA Jetson AGX Orin when I try to run it the same way.

I installed Ollama using the command:

curl -fsSL https://ollama.com/install.sh | sh

but when I try to load an LLM model:

ollama run SpeakLeash/bielik-11b-v2.3-instruct:Q4_K_M

Ollama reports that it cannot load the model. I also see that even though my GPU was recognized, it is not used while the model loads. Below are the logs from journalctl -e -u ollama:

paź 10 14:48:09 jetson systemd[1]: Started Ollama Service.
paź 10 14:48:09 jetson ollama[5761]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
paź 10 14:48:09 jetson ollama[5761]: Your new public key is:
paź 10 14:48:09 jetson ollama[5761]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH5sELAWBM8Np0o8l13zZlj0nCPYuuApt4h+ijT5qYo6
paź 10 14:48:09 jetson ollama[5761]: 2024/10/10 14:48:09 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLA>
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.967+02:00 level=INFO source=images.go:753 msg="total blobs: 0"
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.967+02:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.967+02:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
paź 10 14:48:09 jetson ollama[5761]: time=2024-10-10T14:48:09.968+02:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1142511168/runners
paź 10 14:48:26 jetson ollama[5761]: time=2024-10-10T14:48:26.857+02:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 cpu]"
paź 10 14:48:26 jetson ollama[5761]: time=2024-10-10T14:48:26.857+02:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
paź 10 14:48:27 jetson ollama[5761]: time=2024-10-10T14:48:27.120+02:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-771384f7-53b6-57c9-a4c5-4fa00f6622bd library=cuda variant=jetpack6 compute=8.7 driver=12.6 name=Orin total="61.4 GiB" av>
paź 10 14:48:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:48:31 | 200 |      551.04µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:49:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:49:01 | 200 |     273.633µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:49:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:49:31 | 200 |     192.897µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:50:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:50:01 | 200 |     168.992µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:50:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:50:31 | 200 |     227.073µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:51:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:51:01 | 200 |     255.329µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:51:31 jetson ollama[5761]: [GIN] 2024/10/10 - 14:51:31 | 200 |       194.4µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:52:01 jetson ollama[5761]: [GIN] 2024/10/10 - 14:52:01 | 200 |     225.184µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:52:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:52:32 | 200 |     264.224µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:53:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:53:02 | 200 |     216.992µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:53:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:53:32 | 200 |     169.984µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:54:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:54:02 | 200 |     265.888µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:54:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:54:32 | 200 |     417.057µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:55:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:55:02 | 200 |     178.624µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:55:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:55:32 | 200 |     273.664µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:56:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:56:02 | 200 |     270.304µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:56:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:56:32 | 200 |      389.92µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:57:02 jetson ollama[5761]: [GIN] 2024/10/10 - 14:57:02 | 200 |      187.68µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:57:32 jetson ollama[5761]: [GIN] 2024/10/10 - 14:57:32 | 200 |     168.749µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:58:03 jetson ollama[5761]: [GIN] 2024/10/10 - 14:58:03 | 200 |     187.565µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:58:33 jetson ollama[5761]: [GIN] 2024/10/10 - 14:58:33 | 200 |     264.209µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:59:03 jetson ollama[5761]: [GIN] 2024/10/10 - 14:59:03 | 200 |     220.493µs |       127.0.0.1 | GET      "/api/tags"
paź 10 14:59:33 jetson ollama[5761]: [GIN] 2024/10/10 - 14:59:33 | 200 |     257.006µs |       127.0.0.1 | GET      "/api/tags"
paź 10 15:00:03 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:03 | 200 |     220.075µs |       127.0.0.1 | GET      "/api/tags"
paź 10 15:00:12 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:12 | 200 |      74.756µs |       127.0.0.1 | GET      "/api/version"
paź 10 15:00:33 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:33 | 200 |     254.476µs |       127.0.0.1 | GET      "/api/tags"
paź 10 15:00:58 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:58 | 200 |      45.634µs |       127.0.0.1 | HEAD     "/"
paź 10 15:00:58 jetson ollama[5761]: [GIN] 2024/10/10 - 15:00:58 | 404 |     350.094µs |       127.0.0.1 | POST     "/api/show"
paź 10 15:00:59 jetson ollama[5761]: time=2024-10-10T15:00:59.926+02:00 level=INFO source=download.go:175 msg="downloading ece698889c07 in 16 420 MB part(s)"
paź 10 15:01:03 jetson ollama[5761]: [GIN] 2024/10/10 - 15:01:03 | 200 |     225.129µs |       127.0.0.1 | GET      "/api/tags"
paź 10 15:01:33 jetson ollama[5761]: [GIN] 2024/10/10 - 15:01:33 | 200 |     190.312µs |       127.0.0.1 | GET      "/api/tags"
paź 10 15:02:03 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:03 | 200 |     177.383µs |       127.0.0.1 | GET      "/api/tags"
paź 10 15:02:10 jetson ollama[5761]: time=2024-10-10T15:02:10.048+02:00 level=INFO source=download.go:175 msg="downloading f7426507909a in 1 263 B part(s)"
paź 10 15:02:12 jetson ollama[5761]: time=2024-10-10T15:02:12.166+02:00 level=INFO source=download.go:175 msg="downloading 3685c9d39c8b in 1 114 B part(s)"
paź 10 15:02:14 jetson ollama[5761]: time=2024-10-10T15:02:14.263+02:00 level=INFO source=download.go:175 msg="downloading d0b273b04783 in 1 414 B part(s)"
paź 10 15:02:22 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:22 | 200 |         1m23s |       127.0.0.1 | POST     "/api/pull"
paź 10 15:02:22 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:22 | 200 |   17.204756ms |       127.0.0.1 | POST     "/api/show"
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.393+02:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968ad7ad2>
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.393+02:00 level=INFO source=server.go:103 msg="system memory" total="61.4 GiB" free="54.8 GiB" free_swap="30.7 GiB"
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.395+02:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=51 layers.offload=51 layers.split="" memory.available="[54.6 GiB]" memory.gpu_overhead="0 B" me>
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.399+02:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama1142511168/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-ece6>
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.400+02:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.400+02:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.400+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
paź 10 15:02:22 jetson ollama[8696]: INFO [main] build info | build=10 commit="fd5a74e" tid="281472960698432" timestamp=1728565342
paź 10 15:02:22 jetson ollama[8696]: INFO [main] system info | n_threads=12 n_threads_batch=12 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 >
paź 10 15:02:22 jetson ollama[8696]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="42635" tid="281472960698432" timestamp=1728565342
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: loaded meta data with 32 key-value pairs and 453 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968ad7ad20961cf824c0b008895fe0506c87b834 (version GGUF V3 (latest>
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   0:                       general.architecture str              = llama
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   1:                               general.type str              = model
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   2:                               general.name str              = tekken
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   3:                            general.version str              = 0-2
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   4:                         general.size_label str              = 11B
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   5:                   general.base_model.count u32              = 0
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   6:                               general.tags arr[str,2]       = ["mergekit", "merge"]
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   7:                          llama.block_count u32              = 50
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   8:                       llama.context_length u32              = 32768
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv   9:                     llama.embedding_length u32              = 4096
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  10:                  llama.feed_forward_length u32              = 14336
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  11:                 llama.attention.head_count u32              = 32
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  12:              llama.attention.head_count_kv u32              = 8
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  13:                       llama.rope.freq_base f32              = 1000000.000000
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  14:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  15:                          general.file_type u32              = 15
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  16:                           llama.vocab_size u32              = 32128
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  17:                 llama.rope.dimension_count u32              = 128
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  18:            tokenizer.ggml.add_space_prefix bool             = true
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = llama
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = default
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,32128]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  22:                      tokenizer.ggml.scores arr[f32,32128]   = [-1000.000000, -1000.000000, -1000.00...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,32128]   = [3, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 1
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 32001
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  26:            tokenizer.ggml.unknown_token_id u32              = 0
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 2
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = false
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  30:                    tokenizer.chat_template str              = {{bos_token}}{% for message in messag...
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - kv  31:               general.quantization_version u32              = 2
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - type  f32:  101 tensors
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - type q4_K:  301 tensors
paź 10 15:02:22 jetson ollama[5761]: llama_model_loader: - type q6_K:   51 tensors
paź 10 15:02:22 jetson ollama[5761]: llm_load_vocab: special tokens cache size = 131
paź 10 15:02:22 jetson ollama[5761]: llm_load_vocab: token to piece cache size = 0.1654 MB
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: format           = GGUF V3 (latest)
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: arch             = llama
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: vocab type       = SPM
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_vocab          = 32128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_merges         = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: vocab_only       = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_ctx_train      = 32768
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd           = 4096
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_layer          = 50
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_head           = 32
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_head_kv        = 8
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_rot            = 128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_swa            = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_head_k    = 128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_head_v    = 128
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_gqa            = 4
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_k_gqa     = 1024
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_embd_v_gqa     = 1024
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_norm_eps       = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: f_logit_scale    = 0.0e+00
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_ff             = 14336
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_expert         = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_expert_used    = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: causal attn      = 1
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: pooling type     = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: rope type        = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: rope scaling     = linear
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: freq_base_train  = 1000000.0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: freq_scale_train = 1
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: n_ctx_orig_yarn  = 32768
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: rope_finetuned   = unknown
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_d_conv       = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_d_inner      = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_d_state      = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_dt_rank      = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model type       = ?B
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model ftype      = Q4_K - Medium
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model params     = 11.17 B
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: model size       = 6.26 GiB (4.82 BPW)
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: general.name     = tekken
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: BOS token        = 1 '<s>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: EOS token        = 32001 '<|im_end|>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: UNK token        = 0 '<unk>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: PAD token        = 2 '</s>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: LF token         = 13 '<0x0A>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: EOT token        = 32001 '<|im_end|>'
paź 10 15:02:22 jetson ollama[5761]: llm_load_print_meta: max token length = 48
paź 10 15:02:22 jetson ollama[5761]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
paź 10 15:02:22 jetson ollama[5761]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
paź 10 15:02:22 jetson ollama[5761]: ggml_cuda_init: found 1 CUDA devices:
paź 10 15:02:22 jetson ollama[5761]:   Device 0: Orin, compute capability 8.7, VMM: yes
paź 10 15:02:22 jetson ollama[5761]: time=2024-10-10T15:02:22.652+02:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
paź 10 15:02:33 jetson ollama[5761]: [GIN] 2024/10/10 - 15:02:33 | 200 |     595.667µs |       127.0.0.1 | GET      "/api/tags"
paź 10 15:07:22 jetson ollama[5761]: time=2024-10-10T15:07:22.421+02:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
paź 10 15:07:22 jetson ollama[5761]: [GIN] 2024/10/10 - 15:07:22 | 500 |          5m0s |       127.0.0.1 | POST     "/api/generate"
paź 10 15:07:27 jetson ollama[5761]: time=2024-10-10T15:07:27.635+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.214104708 model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968a>
paź 10 15:07:27 jetson ollama[5761]: time=2024-10-10T15:07:27.886+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.464459141 model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968a>
paź 10 15:07:28 jetson ollama[5761]: time=2024-10-10T15:07:28.135+02:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.713591502 model=/usr/share/ollama/.ollama/models/blobs/sha256-ece698889c07d4a98a8fb7c9968a>

What can I do to work around this problem temporarily? I'm using this JetPack version (output of sudo apt show nvidia-jetpack):

Package: nvidia-jetpack
Version: 6.1+b123
Priority: standard
Section: metapackages
Source: nvidia-jetpack (6.1)
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 6.1+b123), nvidia-jetpack-dev (= 6.1+b123)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29,3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r36.4/main arm64 Packages
Description: NVIDIA Jetpack Meta Package

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.3.12

GiteaMirror added the nvidia, bug labels 2026-04-12 15:28:48 -05:00

@rick-github commented on GitHub (Oct 10, 2024):

Setting OLLAMA_LOAD_TIMEOUT to a value greater than 5 minutes may work. But the question is why it is taking more than 5 minutes to load a 6.7GB model. If you set OLLAMA_DEBUG=1 there may be more information about what is going on.
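
For a systemd install like this one, a minimal sketch of applying those settings via a service override could look like the following (assuming the service is named ollama, as in the logs above, and that OLLAMA_LOAD_TIMEOUT accepts a duration string such as 10m):

# Create a systemd drop-in for the Ollama service
sudo systemctl edit ollama.service

# In the override file that opens, add and save:
#   [Service]
#   Environment="OLLAMA_LOAD_TIMEOUT=10m"
#   Environment="OLLAMA_DEBUG=1"

# Restart the service and follow the logs while loading the model again
sudo systemctl restart ollama
journalctl -fu ollama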


@dhiltgen commented on GitHub (Oct 11, 2024):

Sorry it's taking a while to get #6400 across the finish line. Until that's merged, you'll need to build from source. The ARM64 CUDA library we bundle today in the binary releases only works for discrete GPUs on ARM64 systems; the JetPack CUDA libraries aren't compatible at runtime.

JetPack 6 is tracked via issue #2408.
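
For anyone attempting that workaround, a rough sketch of a source build, based on the general build steps in the repository's development docs around this release (exact prerequisites on JetPack, such as CUDA toolkit paths, may differ):

# Assumed prerequisites: Go, cmake, gcc, and the JetPack CUDA toolkit
git clone https://github.com/ollama/ollama.git
cd ollama

# Generate the embedded llama.cpp runners, then build the binary
go generate ./...
go build .

# Run the locally built server instead of the packaged one
./ollama serve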

Reference: github-starred/ollama#4544