[GH-ISSUE #7666] ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version - driver 515.65.01 #4893

Closed
opened 2026-04-12 15:55:52 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @daocoder2 on GitHub (Nov 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7666

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

When I upgraded the image to 0.4.0, the model that previously worked started failing with this error. The full server log is as follows:

2024/11/14 11:29:13 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-14T11:29:13.399Z level=INFO source=images.go:755 msg="total blobs: 50"
time=2024-11-14T11:29:13.399Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-14T11:29:13.400Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.0)"
time=2024-11-14T11:29:13.400Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-11-14T11:29:13.400Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-14T11:29:13.528Z level=INFO source=types.go:123 msg="inference compute" id=GPU-807da1fa-7fac-08aa-4a8c-7c176f72f13b library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="19.4 GiB"
time=2024-11-14T11:30:00.848Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 library=cuda parallel=4 required="5.6 GiB"
time=2024-11-14T11:30:00.964Z level=INFO source=server.go:105 msg="system memory" total="2015.3 GiB" free="1896.2 GiB" free_swap="0 B"
time=2024-11-14T11:30:00.964Z level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[19.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.6 GiB" memory.required.partial="5.6 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[5.6 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="478.0 MiB" memory.graph.partial="730.4 MiB"
time=2024-11-14T11:30:00.965Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 --ctx-size 8192 --batch-size 512 --embedding --n-gpu-layers 29 --threads 64 --parallel 4 --port 37950"
time=2024-11-14T11:30:00.965Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-14T11:30:00.965Z level=INFO source=server.go:567 msg="waiting for llama runner to start responding"
time=2024-11-14T11:30:00.965Z level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server error"
time=2024-11-14T11:30:01.013Z level=INFO source=runner.go:869 msg="starting go runner"
time=2024-11-14T11:30:01.013Z level=INFO source=runner.go:870 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=64
time=2024-11-14T11:30:01.013Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:37950"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from /root/.ollama/models/blobs/sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 7B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 7B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 7B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 28
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 3584
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 18944
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 28
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 4
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_K:  169 tensors
llama_model_loader: - type q6_K:   29 tensors
time=2024-11-14T11:30:01.217Z level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 28
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 512
llm_load_print_meta: n_embd_v_gqa     = 512
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18944
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 7.62 B
llm_load_print_meta: model size       = 4.36 GiB (4.91 BPW) 
llm_load_print_meta: general.name     = Qwen2.5 7B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: failed to initialize CUDA: CUDA driver version is insufficient for CUDA runtime version
llm_load_tensors: ggml ctx size =    0.15 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =  4460.45 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_cuda_host_malloc: failed to allocate 448.00 MiB of pinned memory: CUDA driver version is insufficient for CUDA runtime version
llama_kv_cache_init:        CPU KV buffer size =   448.00 MiB
llama_new_context_with_model: KV self size  =  448.00 MiB, K (f16):  224.00 MiB, V (f16):  224.00 MiB
ggml_cuda_host_malloc: failed to allocate 2.38 MiB of pinned memory: CUDA driver version is insufficient for CUDA runtime version
llama_new_context_with_model:        CPU  output buffer size =     2.38 MiB
ggml_cuda_host_malloc: failed to allocate 492.01 MiB of pinned memory: CUDA driver version is insufficient for CUDA runtime version
llama_new_context_with_model:  CUDA_Host compute buffer size =   492.01 MiB
llama_new_context_with_model: graph nodes  = 986
llama_new_context_with_model: graph splits = 1
time=2024-11-14T11:30:01.970Z level=INFO source=server.go:606 msg="llama runner started in 1.00 seconds"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from /root/.ollama/models/blobs/sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 7B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 7B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 7B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 28
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 3584
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 18944
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 28
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 4
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_K:  169 tensors
llama_model_loader: - type q6_K:   29 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 7.62 B
llm_load_print_meta: model size       = 4.36 GiB (4.91 BPW) 
llm_load_print_meta: general.name     = Qwen2.5 7B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2024/11/14 - 11:32:16 | 200 |         2m16s |   172.29.232.34 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.4.0

GiteaMirror added the docker, linux, nvidia, needs more info, bug labels 2026-04-12 15:55:52 -05:00
Author
Owner

@dhiltgen commented on GitHub (Nov 15, 2024):

Can you share the output of nvidia-smi on your system? It looks like you have an older driver, and we correctly used the v11 runner instead of v12.
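For reference, one quick way to compare what the host driver reports with what the container sees (a sketch; the container name ollama is an assumption, and nvidia-smi is normally mounted into the container by the NVIDIA Container Toolkit):

# on the host: the banner shows "Driver Version" and the highest supported "CUDA Version"
nvidia-smi
# inside the running container: should report the same driver and CUDA version
docker exec -it ollama nvidia-smi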

Author
Owner

@daocoder2 commented on GitHub (Nov 15, 2024):

Of course, and thank you for your reply. The screenshots below show the relevant information, which I had already confirmed.

This is the driver information of the host computer.
[screenshot: 1731634370112]

This is the driver information of the Docker container.
[screenshot: 1731634348704]
[screenshot: 1731634328414]

Author
Owner

@daocoder2 commented on GitHub (Nov 15, 2024):

Of course, thank you for your reply. The screenshots below show the relevant information, which I had already confirmed.

This is the driver information of the host computer. [screenshot: 1731634370112]

This is the driver information of the Docker container. [screenshot: 1731634348704] [screenshot: 1731634328414]

In addition, I switched back to the previous 0.3.13 image, which works normally.

Author
Owner

@dhiltgen commented on GitHub (Nov 18, 2024):

It looks like there's more to this than just the driver major version. On Ubuntu 20.04, I installed cuda-drivers-515, which provides Driver Version: 515.105.01 / CUDA Version: 11.7, and the latest Ollama container image seems to work correctly. My test system has dual GPUs and both were discovered:

% docker logs --tail 3 ollama
time=2024-11-18T23:18:12.366Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-18T23:18:12.636Z level=INFO source=types.go:123 msg="inference compute" id=GPU-d9bdc19d-a9f0-663d-27dc-d8e6b4c715db library=cuda variant=v11 compute=6.1 driver=11.7 name="NVIDIA GeForce GTX 1060 6GB" total="5.9 GiB" available="5.9 GiB"
time=2024-11-18T23:18:12.636Z level=INFO source=types.go:123 msg="inference compute" id=GPU-01d878f3-35f7-039e-7662-8f82589c8efe library=cuda variant=v11 compute=5.0 driver=11.7 name="NVIDIA GeForce GTX 750 Ti" total="2.0 GiB" available="1.9 GiB"
# ollama run --verbose orca-mini hello
 Hello! How may I assist you today?

total duration:       3.407100868s
load duration:        2.65578304s
prompt eval count:    42 token(s)
prompt eval duration: 534ms
prompt eval rate:     78.65 tokens/s
eval count:           10 token(s)
eval duration:        215ms
eval rate:            46.51 tokens/s
# ollama ps
NAME                ID              SIZE      PROCESSOR    UNTIL
orca-mini:latest    2dbd9f439647    5.9 GB    100% GPU     4 minutes from now
Author
Owner

@dhiltgen commented on GitHub (Nov 18, 2024):

Is it possible that your container runtime and host driver somehow got out of sync? Perhaps try uninstalling the NVIDIA Container Toolkit and re-installing it to see if that clears up the incompatibility.
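A minimal sketch of that reinstall on a Debian/Ubuntu host (assumes apt and that NVIDIA's repository for the toolkit is already configured; commands follow NVIDIA's documented flow):

sudo apt-get remove --purge -y nvidia-container-toolkit
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# sanity check that containers can see the host driver again
# (image tag chosen to match the host's CUDA 11.7 driver)
docker run --rm --gpus=all nvidia/cuda:11.7.1-base-ubuntu20.04 nvidia-smi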

Author
Owner

@daocoder2 commented on GitHub (Nov 19, 2024):

Thank you for looking into the problem.

Is it possible that your container runtime and host driver somehow got out of sync?

Possibly. I tried everything I knew and couldn't find anything out of sync. I'm embarrassed.

time=2024-11-19T01:40:48.749Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-19T01:40:49.336Z level=INFO source=types.go:123 msg="inference compute" id=GPU-6f563848-22e0-a1fd-227d-a5db11238125 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="78.9 GiB"
time=2024-11-19T01:40:49.336Z level=INFO source=types.go:123 msg="inference compute" id=GPU-433191ba-e15c-50e1-c43b-4daf15252229 library=cuda variant=v11 compute=8.0 driver=11.7 name="NVIDIA Graphics Device" total="79.3 GiB" available="72.0 GiB"

I tried that; the GPU devices show up in the 0.4.0 container's log, but it still produced the same errors as before.

I just ran it again with a 0.4.1 image, and the service now runs successfully.

Thank you again for your reply. If I find any more clues later, I will provide them. Also, the priority of this issue can be lowered as you see fit.

Author
Owner

@dhiltgen commented on GitHub (Nov 19, 2024):

I just used a 0.4.1 image to run it again, and it can successfully run the service.

That's good to hear. So it sounds like the problem is resolved and you're able to run on the GPU.

(If I misunderstood, please share an updated server log and I'll reopen)
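For completeness, a common way to capture a fuller server log from the Docker container (a sketch; the container name ollama is an assumption, and OLLAMA_DEBUG=1 enables verbose logging):

# restart the container with debug logging enabled, then dump the log to a file
docker run -d --gpus=all -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker logs ollama > ollama-server.log 2>&1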

Reference: github-starred/ollama#4893