[GH-ISSUE #2266] "Unable to load CUDA management library" is shown in the log - does it have any consequences for performance? #27061

Closed
opened 2026-04-22 03:58:20 -05:00 by GiteaMirror · 7 comments

Originally created by @BananaAcid on GitHub (Jan 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2266

 gpu.go:294: INFO Unable to load CUDA management library /usr/lib/libnvidia-ml.so.535.146.02: Unable to load /usr/lib/libnvidia-ml.so.535.146.02 library to query for Nvidia GPUs: /usr/lib/libnvidia-ml.so.535.146.02: wrong ELF class: ELFCLASS32

Full Log:

Jan 30 09:11:08 ollama-host systemd[1]: Started ollama.service - Ollama Service.
░░ Subject: A start job for unit ollama.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A start job for unit ollama.service has finished successfully.
░░ 
░░ The job identifier is 4869.
Jan 30 09:11:08 ollama-host ollama[5307]: 2024/01/30 09:11:08 images.go:857: INFO total blobs: 5
Jan 30 09:11:08 ollama-host ollama[5307]: 2024/01/30 09:11:08 images.go:864: INFO total unused blobs removed: 0
Jan 30 09:11:08 ollama-host ollama[5307]: 2024/01/30 09:11:08 routes.go:950: INFO Listening on [::]:11434 (version 0.1.22)
Jan 30 09:11:08 ollama-host ollama[5307]: 2024/01/30 09:11:08 payload_common.go:106: INFO Extracting dynamic libraries...
Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 payload_common.go:145: INFO Dynamic LLM libraries [cuda_v11 rocm_v5 cpu_avx cpu_avx2 cpu rocm_v6]
Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 gpu.go:94: INFO Detecting GPU type
Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 gpu.go:236: INFO Searching for GPU management library libnvidia-ml.so
Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 gpu.go:282: INFO Discovered GPU libraries: [/usr/lib/libnvidia-ml.so.535.146.02 /usr/lib64/libnvidia-ml.so.535.146.02]

Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 gpu.go:294: INFO Unable to load CUDA management library /usr/lib/libnvidia-ml.so.535.146.02: Unable to load /usr/lib/libnvidia-ml.so.535.146.02 library to query for Nvidia GPUs: /usr/lib/libnvidia-ml.so.535.146.02: wrong ELF class: ELFCLASS32

Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 gpu.go:99: INFO Nvidia GPU detected
Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 gpu.go:140: INFO CUDA Compute Capability detected: 8.6
Jan 30 09:11:15 ollama-host ollama[5307]: [GIN] 2024/01/30 - 09:11:15 | 200 |    2.002182ms |  192.168.178.10 | GET      "/api/tags"
Jan 30 09:11:15 ollama-host ollama[5307]: [GIN] 2024/01/30 - 09:11:15 | 200 |     109.804µs |  192.168.178.10 | GET      "/api/version"
Jan 30 09:11:15 ollama-host ollama[5307]: [GIN] 2024/01/30 - 09:11:15 | 200 |      88.694µs |  192.168.178.10 | GET      "/api/version"
Jan 30 09:12:00 ollama-host ollama[5307]: 2024/01/30 09:12:00 gpu.go:140: INFO CUDA Compute Capability detected: 8.6
Jan 30 09:12:00 ollama-host ollama[5307]: 2024/01/30 09:12:00 gpu.go:140: INFO CUDA Compute Capability detected: 8.6
Jan 30 09:12:00 ollama-host ollama[5307]: 2024/01/30 09:12:00 cpu_common.go:11: INFO CPU has AVX2
Jan 30 09:12:00 ollama-host ollama[5307]: 2024/01/30 09:12:00 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama2095882123/cuda_v11/libext_server.so
Jan 30 09:12:00 ollama-host ollama[5307]: 2024/01/30 09:12:00 dyn_ext_server.go:145: INFO Initializing llama server
Jan 30 09:12:00 ollama-host ollama[5307]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
Jan 30 09:12:00 ollama-host ollama[5307]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
Jan 30 09:12:00 ollama-host ollama[5307]: ggml_init_cublas: found 6 CUDA devices:
Jan 30 09:12:00 ollama-host ollama[5307]:   Device 0: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes
Jan 30 09:12:00 ollama-host ollama[5307]:   Device 1: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes
Jan 30 09:12:00 ollama-host ollama[5307]:   Device 2: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes
Jan 30 09:12:00 ollama-host ollama[5307]:   Device 3: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes
Jan 30 09:12:00 ollama-host ollama[5307]:   Device 4: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes
Jan 30 09:12:00 ollama-host ollama[5307]:   Device 5: NVIDIA GeForce RTX 3060 Ti, compute capability 8.6, VMM: yes
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /srv/ollama-models/blobs/sha256:e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2b (version GGUF V3 (latest))
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   1:                               general.name str              = mistralai
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   4:                          llama.block_count u32              = 32
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  13:                          general.file_type u32              = 2
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - kv  25:               general.quantization_version u32              = 2
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - type  f32:   65 tensors
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - type  f16:   32 tensors
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - type q4_0:  833 tensors
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - type q8_0:   64 tensors
Jan 30 09:12:13 ollama-host ollama[5307]: llama_model_loader: - type q6_K:    1 tensors
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_vocab: special tokens definition check successful ( 259/32000 ).
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: format           = GGUF V3 (latest)
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: arch             = llama
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: vocab type       = SPM
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_vocab          = 32000
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_merges         = 0
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_ctx_train      = 32768
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_embd           = 4096
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_head           = 32
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_head_kv        = 8
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_layer          = 32
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_rot            = 128
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_embd_head_k    = 128
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_embd_head_v    = 128
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_gqa            = 4
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_embd_k_gqa     = 1024
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_embd_v_gqa     = 1024
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_ff             = 14336
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_expert         = 8
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_expert_used    = 2
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: rope scaling     = linear
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: freq_base_train  = 1000000.0
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: freq_scale_train = 1
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: n_yarn_orig_ctx  = 32768
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: rope_finetuned   = unknown
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: model type       = 7B
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: model ftype      = Q4_0
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: model params     = 46.70 B
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW)
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: general.name     = mistralai
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: BOS token        = 1 '<s>'
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: EOS token        = 2 '</s>'
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: UNK token        = 0 '<unk>'
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: LF token         = 13 '<0x0A>'
Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_tensors: ggml ctx size =    2.66 MiB
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors: offloading 32 repeating layers to GPU
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors: offloading non-repeating layers to GPU
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors: offloaded 33/33 layers to GPU
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors:        CPU buffer size =    70.31 MiB
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors:      CUDA0 buffer size =  4695.56 MiB
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors:      CUDA1 buffer size =  3912.97 MiB
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors:      CUDA2 buffer size =  4695.56 MiB
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors:      CUDA3 buffer size =  3912.97 MiB
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors:      CUDA4 buffer size =  4695.56 MiB
Jan 30 09:13:15 ollama-host ollama[5307]: llm_load_tensors:      CUDA5 buffer size =  3232.93 MiB
Jan 30 09:14:36 ollama-host ollama[5307]: ........................................................[GIN] 2024/01/30 - 09:14:36 | 200 |    26.13022ms |  192.168.178.10 | GET      "/api/tags"
Jan 30 09:14:37 ollama-host ollama[5307]: [GIN] 2024/01/30 - 09:14:37 | 200 |     462.561µs |  192.168.178.10 | GET      "/api/tags"
Jan 30 09:15:38 ollama-host ollama[5307]: ............................................
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model: n_ctx      = 2048
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model: freq_base  = 1000000.0
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model: freq_scale = 1
Jan 30 09:15:38 ollama-host ollama[5307]: llama_kv_cache_init:      CUDA0 KV buffer size =    48.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_kv_cache_init:      CUDA1 KV buffer size =    40.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_kv_cache_init:      CUDA2 KV buffer size =    48.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_kv_cache_init:      CUDA3 KV buffer size =    40.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_kv_cache_init:      CUDA4 KV buffer size =    48.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_kv_cache_init:      CUDA5 KV buffer size =    32.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:  CUDA_Host input buffer size   =    12.01 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:      CUDA0 compute buffer size =   184.03 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:      CUDA1 compute buffer size =   192.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:      CUDA2 compute buffer size =   192.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:      CUDA3 compute buffer size =   192.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:      CUDA4 compute buffer size =   192.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:      CUDA5 compute buffer size =   192.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model:  CUDA_Host compute buffer size =     8.00 MiB
Jan 30 09:15:38 ollama-host ollama[5307]: llama_new_context_with_model: graph splits (measure): 13
Jan 30 09:15:39 ollama-host ollama[5307]: 2024/01/30 09:15:39 dyn_ext_server.go:156: INFO Starting llama main loop
Jan 30 09:15:39 ollama-host ollama[5307]: 2024/01/30 09:15:39 dyn_ext_server.go:170: INFO loaded 0 images

Jan 30 09:15:41 ollama-host ollama[5307]: [GIN] 2024/01/30 - 09:15:41 | 200 |      62.882µs |       127.0.0.1 | HEAD     "/"
Jan 30 09:15:41 ollama-host ollama[5307]: [GIN] 2024/01/30 - 09:15:41 | 200 |     835.184µs |       127.0.0.1 | GET      "/api/tags"
Jan 30 09:15:44 ollama-host ollama[5307]: [GIN] 2024/01/30 - 09:15:44 | 200 |         3m44s |  192.168.178.10 | POST     "/api/chat"
Jan 30 09:15:44 ollama-host ollama[5307]: 2024/01/30 09:15:44 dyn_ext_server.go:170: INFO loaded 0 images
    ⠀⠀⢀⣤⣴⣶⣶⣶⣦⣤⡀⠀⣀⣠⣤⣴⣶⣶⣶⣶⣶⣶⣶⣶⣤⣤⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀   ----------------- 
    ⠀⣰⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣶⣤⡀⠀⠀⠀⠀⠀⠀⠀⠀   OS: Nobara Linux 39 (KDE Plasma) x86_64 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀   Host: BTC B250 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⠀⠀⠀   Kernel: 6.7.0-203.fsync.fc39.x86_64 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⠀⠀⠀   Uptime: 4 hours, 25 mins 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠋⠉⠁⠀⠀⠉⠉⠛⠿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⠀⠀   Packages: 2649 (rpm), 8 (flatpak) 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠁⠀⠀⠀⢀⣀⣀⡀⠀⠀⠀⠈⢻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀   Shell: bash 5.2.21 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡏⠀⠀⠀⢠⣾⣿⣿⣿⣿⣷⡄⠀⠀⠀⠻⠿⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀   Terminal: /dev/pts/2 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠁⠀⠀⠀⣿⣿⣿⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⣀⣀⣬⣽⣿⣿⣿⣿⣿⣿⠀   CPU: Intel i5-6500 (4) @ 3.200GHz 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠈⠻⢿⣿⣿⡿⠟⠁⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀   GPU: NVIDIA GeForce RTX 3060 Ti Lite Hash Rate 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀   GPU: NVIDIA GeForce RTX 3060 Ti Lite Hash Rate 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣤⣤⣄⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀   GPU: Intel HD Graphics 530 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀   GPU: NVIDIA GeForce RTX 3060 Ti Lite Hash Rate 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣇⠀⠀⠀⠀⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀   GPU: NVIDIA GeForce RTX 3060 Ti Lite Hash Rate 
    ⠀⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠛⠉⠉⠛⠛⢿⣿⣿⠀⠀⠀⠀⠀⠸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠀   GPU: NVIDIA GeForce RTX 3060 Ti Lite Hash Rate 
    ⠀⠘⢿⣿⣿⣿⣿⣿⣿⣿⡿⠋⠀⠀⠀⠀⠀⠀⠀⠀⠈⢿⠀⠀⠀⠀⠀⠀⠙⢿⣿⣿⣿⣿⣿⣿⣿⠟⠁⠀   GPU: NVIDIA GeForce RTX 3060 Ti Lite Hash Rate 
    ⠀⠀⠀⠈⠙⠛⠛⠛⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠛⠛⠛⠛⠉⠁⠀⠀⠀   Memory: 2714MiB / 11867MiB 

@remy415 commented on GitHub (Jan 31, 2024):

@BananaAcid

Jan 30 09:11:11 ollama-host ollama[5307]: 2024/01/30 09:11:11 gpu.go:282: INFO Discovered GPU libraries: [/usr/lib/libnvidia-ml.so.535.146.02 /usr/lib64/libnvidia-ml.so.535.146.02]

It seems like your system has both 32-bit and 64-bit .so files. gpu.go attempts to load every library matching the prefix libnvidia-ml.so*, and if the first one fails it tries the next. It looks to me like the /usr/lib64/libnvidia-ml.so.535.146.02 file loaded just fine.
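
For anyone who wants to double-check which copy is which, `file` reports the ELF class of a shared object. A quick sketch using the paths from the log above (exact output wording varies by distro):

```shell
# The 32-bit copy that ollama fails to load
$ file /usr/lib/libnvidia-ml.so.535.146.02
/usr/lib/libnvidia-ml.so.535.146.02:   ELF 32-bit LSB shared object, Intel 80386, ...

# The 64-bit copy that loads fine
$ file /usr/lib64/libnvidia-ml.so.535.146.02
/usr/lib64/libnvidia-ml.so.535.146.02: ELF 64-bit LSB shared object, x86-64, ...
```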

Jan 30 09:12:13 ollama-host ollama[5307]: llm_load_print_meta: model size = 24.62 GiB (4.53 BPW)

Your slowdown is probably because you're trying to load a 24 GB model. Multi-GPU inference is extremely difficult, and I'm not sure it's fully supported yet by llama.cpp and/or ollama. Try the mistral model or one of the other 7B models with a smaller model size (aim for 3.5-4 GB).
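
If you want to try that, a minimal sketch with the standard ollama CLI (the download size is approximate and depends on the quantization tag you pull):

```shell
ollama pull mistral        # Mistral 7B, roughly 4 GB in the default Q4 quantization
ollama run mistral "Hello" # quick smoke test
```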


@BananaAcid commented on GitHub (Feb 1, 2024):

@remy415 Hi, thanks for the insight.

The loading performance is already super slow because it is using USB2 risers (each card on its own separate PCIe lane) on that mining rack; I just did not want to add another performance loss on top ;) nvidia-smi shows load and memory usage just fine. Will try to add the other 12 cards back in.
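
For reference, per-card load and memory can be watched while a request runs with standard nvidia-smi flags (the 1-second refresh interval here is just an example):

```shell
nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total \
           --format=csv -l 1
```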


@navr32 commented on GitHub (Apr 4, 2024):

Hi, could you say whether you finally got interesting inference results with this setup? I was thinking of trying this kind of config, but I assumed it would be wasted time. Was it? Thanks, have a nice day.


@remy415 commented on GitHub (Apr 4, 2024):

@navr32 Are you referring to multi-GPU inference? I think it is still being developed; support for this would be tracked on the llama.cpp page, so you may find information there.


@navr32 commented on GitHub (Apr 4, 2024):

Yes, it is, but in the specific case of a mining-rig setup, so with reduced PCI Express lanes on each card. And @BananaAcid is trying this setup.


@BananaAcid commented on GitHub (Apr 5, 2024):

@navr32 Inference is OK. With mixtral and those 6 cards used for a single model and single requests only, it reaches at least 45-70 tokens/sec (just tested), starting immediately (no delay). Having multiple of those rigs in parallel behind an nginx load balancer is a working and cheap setup for us.
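
A minimal sketch of that kind of balancer (hypothetical rig addresses, not our exact config; 11434 is the default ollama port, as in the log above):

```nginx
upstream ollama_rigs {
    least_conn;                    # send each new request to the least-busy rig
    server 192.168.178.21:11434;   # rig 1 (hypothetical address)
    server 192.168.178.22:11434;   # rig 2
}

server {
    listen 80;

    location / {
        proxy_pass http://ollama_rigs;
        proxy_read_timeout 600s;   # long generations can take minutes
        proxy_buffering off;       # stream tokens back as they are produced
    }
}
```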


@navr32 commented on GitHub (Apr 10, 2024):

Very good! I am very surprised you got such good results with just one PCI Express lane per card on the rig. Thanks for the feedback. Have a nice day.
