[GH-ISSUE #8963] GPU not utilized #5816

Closed
opened 2026-04-12 17:09:36 -05:00 by GiteaMirror · 38 comments

Originally created by @NewbieCoder282 on GitHub (Feb 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8963

What is the issue?

I installed Ollama version 0.5.7 and attempted to run the model deepseek-r1:671b using the command `ollama run deepseek-r1:671b`. However, the model is loaded entirely into system memory, and the GPUs are not utilized during inference. How can I make use of the GPUs?

Relevant log output

$ ./ollama ps
NAME                   ID              SIZE      PROCESSOR    UNTIL   
deepseek-r1:671b-8k    d9bfc20ebf89    496 GB    100% CPU     Forever
-----------------------------------------------------------------------------------
$ nvidia-smi
Sun Feb  9 12:15:39 2025       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03              Driver Version: 535.54.03    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla V100-SXM2-32GB           Off | 00000000:2D:00.0 Off |                    0 |
| N/A   27C    P0              44W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2-32GB           Off | 00000000:32:00.0 Off |                    0 |
| N/A   26C    P0              42W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2-32GB           Off | 00000000:5B:00.0 Off |                    0 |
| N/A   26C    P0              41W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2-32GB           Off | 00000000:5F:00.0 Off |                    0 |
| N/A   25C    P0              42W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   4  Tesla V100-SXM2-32GB           Off | 00000000:B5:00.0 Off |                    0 |
| N/A   26C    P0              40W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   5  Tesla V100-SXM2-32GB           Off | 00000000:BE:00.0 Off |                    0 |
| N/A   28C    P0              41W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   6  Tesla V100-SXM2-32GB           Off | 00000000:E1:00.0 Off |                    0 |
| N/A   29C    P0              41W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   7  Tesla V100-SXM2-32GB           Off | 00000000:E9:00.0 Off |                    0 |
| N/A   25C    P0              43W / 300W |      3MiB / 32768MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
2025/02/09 11:07:54 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1,2,3,4,5,6,7 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:262144000000 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:2562047h47m16.854775807s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/7859b431efb84f7d88b3e2e1acab4765/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-09T11:07:55.377+08:00 level=INFO source=images.go:432 msg="total blobs: 9"
time=2025-02-09T11:07:55.610+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-09T11:07:55.987+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-09T11:07:56.703+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"
time=2025-02-09T11:07:56.716+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-22efb4b3-3744-7c53-659c-682cad051c63 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T11:07:58.384+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
[GIN] 2025/02/09 - 11:09:23 | 200 |   82.631819ms |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/09 - 11:09:38 | 200 | 14.896631958s |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/09 - 11:10:57 | 200 |      59.782µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/09 - 11:10:59 | 200 |  1.599195319s |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/02/09 - 11:12:00 | 200 |      41.684µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/09 - 11:12:01 | 200 |  1.468830252s |       127.0.0.1 | POST     "/api/show"
time=2025-02-09T11:12:06.172+08:00 level=INFO source=server.go:104 msg="system memory" total="501.8 GiB" free="494.5 GiB" free_swap="0 B"
time=2025-02-09T11:12:06.727+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
time=2025-02-09T11:12:07.239+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 31 --threads 36 --no-mmap --parallel 1 --port 36579"
time=2025-02-09T11:12:07.276+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-09T11:12:07.276+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-09T11:12:07.298+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-09T11:12:08.792+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-09T11:12:08.863+08:00 level=INFO source=runner.go:937 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=36
time=2025-02-09T11:12:08.870+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:36579"
time=2025-02-09T11:12:09.066+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  606 tensors
llama_model_loader: - type q6_K:   58 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 818
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 129280
llm_load_print_meta: n_merges         = 127741
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_layer          = 61
llm_load_print_meta: n_head           = 128
llm_load_print_meta: n_head_kv        = 128
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 24576
llm_load_print_meta: n_embd_v_gqa     = 16384
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18432
llm_load_print_meta: n_expert         = 256
llm_load_print_meta: n_expert_used    = 8
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 671B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 671.03 B
llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW) 
llm_load_print_meta: general.name     = n/a
llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 131 'Ä'
llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 3
llm_load_print_meta: n_lora_q             = 1536
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 2048
llm_load_print_meta: n_expert_shared      = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm  = 1
llm_load_print_meta: expert_gating_func   = sigmoid
llm_load_print_meta: rope_yarn_log_mul    = 0.1000
llm_load_tensors:          CPU model buffer size = 385689.62 MiB

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 17:09:36 -05:00

@NewbieCoder282 commented on GitHub (Feb 9, 2025):

Does Ollama support NVIDIA Tesla V100?


@YonTracks commented on GitHub (Feb 9, 2025):

Do other models use the GPU, like llama3.1 or whatever? If they do, then I think it's some kind of multi-GPU/parallelism issue.
Someone will know.
Good luck.
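
A quick way to run that check (a minimal sketch; `llama3.1` is only an example model name, and this assumes the NVIDIA setup shown in the logs above):

```shell
# Load a smaller model, then check which processor Ollama reports for it
# and whether the runner process shows up in nvidia-smi.
ollama run llama3.1 "say hi"
ollama ps       # PROCESSOR should read "100% GPU" (or a CPU/GPU split) if the GPU is used
nvidia-smi      # the ollama runner process should appear with non-trivial GPU memory usage
```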


@NewbieCoder282 commented on GitHub (Feb 9, 2025):

> Do other models use the GPU, like llama3.1 or whatever? If they do, then I think it's some kind of multi-GPU/parallelism issue. Someone will know. Good luck.

This is a good idea, I'll try it. Thank you~


@rick-github commented on GitHub (Feb 9, 2025):

time=2025-02-09T11:07:56.703+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"

time=2025-02-09T11:12:06.727+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
time=2025-02-09T11:12:07.239+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 31 --threads 36 --no-mmap --parallel 1 --port 36579"

Your GPUs were detected and there are GPU-enabled runners, yet Ollama used the CPU runner. Very strange. Can you add `OLLAMA_DEBUG=1` to the server environment and post the resulting logs?
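
One way to do that when the server is launched by hand, as in the report above (a sketch, not specific to this installation; if Ollama runs as a systemd service, the variable would instead be set in the service's environment). A possibly relevant detail: the quoted log shows memory.gpu_overhead="244.1 GiB", which matches the OLLAMA_GPU_OVERHEAD:262144000000 (bytes) value in the server config and is far larger than the 31.4 GiB available on any single GPU.

```shell
# Stop the current server, then restart it with debug logging enabled,
# capturing the output to a file that can be attached to the issue.
OLLAMA_DEBUG=1 ./ollama serve 2>&1 | tee ollama-debug.log
```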


@shiyi099 commented on GitHub (Feb 9, 2025):

> time=2025-02-09T11:07:56.703+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"
>
> time=2025-02-09T11:12:06.727+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
> time=2025-02-09T11:12:07.239+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 31 --threads 36 --no-mmap --parallel 1 --port 36579"
>
> Your GPUs were detected and there are GPU-enabled runners, yet Ollama used the CPU runner. Very strange. Can you add `OLLAMA_DEBUG=1` to the server environment and post the resulting logs?

I use an H800 for inference. Why does it show `time=2025-02-09T17:56:22.873+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]`?

I am sure Ollama is using the CPU for inference, judging by the response time compared with running the same model on my RTX 3060 desktop.

root@12jsficnrl475-0:/usr/share/ollama/.ollama# OLLAMA_DEBUG=1 ollama serve
2025/02/09 17:56:22 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-09T17:56:22.872+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-02-09T17:56:22.872+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-09T17:56:22.873+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-09T17:56:22.873+08:00 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"
time=2025-02-09T17:56:22.873+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
time=2025-02-09T17:56:22.873+08:00 level=DEBUG source=routes.go:1268 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2025-02-09T17:56:22.873+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-02-09T17:56:22.873+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-09T17:56:22.905+08:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-09T17:56:22.905+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2025-02-09T17:56:22.905+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[libcuda.so* /usr/local/cuda/lib64/libcuda.so* /usr/share/ollama/.ollama/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-02-09T17:56:22.909+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.535.104.12]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.104.12
dlsym: cuInit - 0x7f790ce60510
dlsym: cuDriverGetVersion - 0x7f790ce60530
dlsym: cuDeviceGetCount - 0x7f790ce60570
dlsym: cuDeviceGet - 0x7f790ce60550
dlsym: cuDeviceGetAttribute - 0x7f790ce60650
dlsym: cuDeviceGetUuid - 0x7f790ce605b0
dlsym: cuDeviceGetName - 0x7f790ce60590
dlsym: cuCtxCreate_v3 - 0x7f790ce683c0
dlsym: cuMemGetInfo_v2 - 0x7f790ce737e0
dlsym: cuCtxDestroy - 0x7f790cec2220
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2025-02-09T17:56:23.096+08:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.104.12
[GPU-a2c4e33e-f84b-796c-15f1-c2e3b62a0ee1] CUDA totalMem 81008 mb
[GPU-a2c4e33e-f84b-796c-15f1-c2e3b62a0ee1] CUDA freeMem 80478 mb
[GPU-a2c4e33e-f84b-796c-15f1-c2e3b62a0ee1] Compute Capability 9.0
time=2025-02-09T17:56:23.349+08:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-02-09T17:56:23.349+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-a2c4e33e-f84b-796c-15f1-c2e3b62a0ee1 library=cuda variant=v12 compute=9.0 driver=12.2 name="NVIDIA H800" total="79.1 GiB" available="78.6 GiB"
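
The `msg="no dynamic runners detected, using only built-in"` line above suggests the bundled GPU runner libraries were not found for this install, leaving only the built-in CPU runner. A rough way to check whether they are present (a sketch only; the `lib/ollama/runners` layout is assumed from the first poster's log, and the actual paths depend on how Ollama was installed):

```shell
# Find the ollama binary and look for the runner libraries that should ship
# alongside it, e.g. <install root>/lib/ollama/runners/cuda_v12_avx/...
OLLAMA_BIN="$(readlink -f "$(command -v ollama)")"
ls "$(dirname "$OLLAMA_BIN")/../lib/ollama/runners" 2>/dev/null \
  || ls /usr/local/lib/ollama/runners /usr/lib/ollama/runners 2>/dev/null
```

The debug log also notes that detection can be overridden by setting `OLLAMA_LLM_LIBRARY`.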

@rick-github commented on GitHub (Feb 9, 2025):

@shiyi099
Your problem is different, ollama can't find the GPU enabled runners. How did you install ollama?


@shiyi099 commented on GitHub (Feb 9, 2025):

@rick-github
By following the official guidance: "curl -fsSL https://ollama.com/install.sh | sh"


@rick-github commented on GitHub (Feb 9, 2025):

What's the result of

ollama -v
command -v ollama

@shiyi099 commented on GitHub (Feb 9, 2025):

> What's the result of
>
> ollama -v
> command -v ollama

ollama version is 0.5.7
/usr/local/bin/ollama

@shiyi099 commented on GitHub (Feb 9, 2025):

@rick-github Shall I reinstall using the manual installation method instead?


@rick-github commented on GitHub (Feb 9, 2025):

No, that is usually worse. What's the result of

find /usr/local/lib/ollama/

@shiyi099 commented on GitHub (Feb 9, 2025):

> No, that is usually worse. What's the result of
>
> find /usr/local/lib/ollama/

find: '/usr/local/lib/ollama/': No such file or directory


@rick-github commented on GitHub (Feb 9, 2025):

ls -l /usr/local/lib

@shiyi099 commented on GitHub (Feb 9, 2025):

ls -l /usr/local/lib
drwxr-xr-x 4 root root 4096 Jan  5  2024 bazel
drwxr-xr-x 3 root root 4096 Jan  5  2024 cmake
drwxr-xr-x 1 root root 4096 Jul  7  2024 inais
drwxr-xr-x 1 root root 4096 Jan  5  2024 python3.10
lrwxrwxrwx 1 root root   20 Dec 19  2023 singularity -> /.singularity.d/libs
drwxr-xr-x 2 root root 4096 Jan  5  2024 tensorflow

@rick-github commented on GitHub (Feb 9, 2025):

Your ollama installation is incomplete. Re-run curl -fsSL https://ollama.com/install.sh | sh and watch for any errors.

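For reference, a rough way to confirm the reinstall actually completed is to check that the install script recreated the library directory and that the server advertises GPU runners again. This is only a sketch reusing the paths discussed in this thread; the systemd commands assume the standard service install set up by the script:

```
# Re-run the official installer and watch its output for errors
curl -fsSL https://ollama.com/install.sh | sh

# A complete 0.5.7 install should have a populated runners directory
find /usr/local/lib/ollama/ | head

# Restart the service and look for a "Dynamic LLM libraries" line that
# lists CUDA runners rather than just [cpu]
sudo systemctl restart ollama
journalctl -u ollama | grep -i "dynamic llm libraries"
```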

@shiyi099 commented on GitHub (Feb 9, 2025):

> Your ollama installation is incomplete. Re-run curl -fsSL https://ollama.com/install.sh | sh and watch for any errors.

Thanks for your patience! I will retry!


@YonTracks commented on GitHub (Feb 9, 2025):

time=2025-02-09T17:56:22.873+08:00 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"

Could this be related to https://github.com/ollama/ollama/blob/main/docs/development.md ?

https://github.com/ollama/ollama/releases/tag/v0.5.8-rc12

Does a fresh 0.5.7 install need the new runner-less setup?

> > No, that is usually worse. What's the result of
> >
> > find /usr/local/lib/ollama/
>
> find: '/usr/local/lib/ollama/': No such file or directory


@rick-github commented on GitHub (Feb 9, 2025):

0.5.7 puts runners in /usr/local/lib/ollama.

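As a rough illustration (based on the runner names that appear in the logs later in this thread, not an exhaustive listing), a complete 0.5.7 install-script layout looks something like:

```
$ find /usr/local/lib/ollama/runners -name ollama_llama_server
/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
```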

@shiyi099 commented on GitHub (Feb 9, 2025):

@rick-github It works, thank you! Have a nice day!


@NewbieCoder282 commented on GitHub (Feb 9, 2025):

> time=2025-02-09T11:07:56.703+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"
>
> time=2025-02-09T11:12:06.727+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
> time=2025-02-09T11:12:07.239+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 31 --threads 36 --no-mmap --parallel 1 --port 36579"
>
> Your GPUs were detected, there are GPU enabled runners, yet ollama used the CPU runner. Very strange. Can you add OLLAMA_DEBUG=1 to the server environment and post the resulting logs.

I found this log, but I don't understand it. Is there any way to resolve it?

time=2025-02-09T17:58:30.085+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.087+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.087+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
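For reference, the memory.gpu_overhead="244.1 GiB" reported in the "offload to cuda" line above matches the OLLAMA_GPU_OVERHEAD:262144000000 value in the server config shown in the detailed log below, converted from bytes to GiB (a quick arithmetic check, not additional log output):

```
# 262144000000 bytes expressed in GiB (1 GiB = 1024^3 bytes)
$ awk 'BEGIN { printf "%.1f GiB\n", 262144000000 / 1024^3 }'
244.1 GiB
```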

The detailed log:

2025/02/09 17:58:05 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1,2,3,4,5,6,7 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:262144000000 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:2562047h47m16.854775807s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/7859b431efb84f7d88b3e2e1acab4765/ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-09T17:58:06.347+08:00 level=INFO source=images.go:432 msg="total blobs: 11"
time=2025-02-09T17:58:06.573+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-09T17:58:07.018+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-09T17:58:07.078+08:00 level=DEBUG source=common.go:80 msg="runners located" dir=/home/semtp/notebooks/ollama/lib/ollama/runners
time=2025-02-09T17:58:07.574+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2025-02-09T17:58:07.574+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2025-02-09T17:58:07.574+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2025-02-09T17:58:07.574+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2025-02-09T17:58:07.574+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2025-02-09T17:58:07.574+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v11_avx cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2]"
time=2025-02-09T17:58:07.574+08:00 level=DEBUG source=routes.go:1268 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2025-02-09T17:58:07.574+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-02-09T17:58:07.605+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-09T17:58:07.635+08:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-09T17:58:07.635+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2025-02-09T17:58:07.635+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/home/semtp/notebooks/ollama/lib/ollama/libcuda.so* /home/semtp/notebooks/ollama/lib/ollama/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-02-09T17:58:07.812+08:00 level=DEBUG source=gpu.go:576 msg="discovered GPU libraries" paths=[/usr/local/nvidia/lib64/libcuda.so.535.54.03]
initializing /usr/local/nvidia/lib64/libcuda.so.535.54.03
dlsym: cuInit - 0x7fb5b26975b0
dlsym: cuDriverGetVersion - 0x7fb5b26975d0
dlsym: cuDeviceGetCount - 0x7fb5b2697610
dlsym: cuDeviceGet - 0x7fb5b26975f0
dlsym: cuDeviceGetAttribute - 0x7fb5b26976f0
dlsym: cuDeviceGetUuid - 0x7fb5b2697650
dlsym: cuDeviceGetName - 0x7fb5b2697630
dlsym: cuCtxCreate_v3 - 0x7fb5b269ee80
dlsym: cuMemGetInfo_v2 - 0x7fb5b26a98b0
dlsym: cuCtxDestroy - 0x7fb5b26f3f40
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 8
time=2025-02-09T17:58:08.110+08:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=8 library=/usr/local/nvidia/lib64/libcuda.so.535.54.03
[GPU-10347b1b-ad86-a6ac-53c1-c475d85294da] CUDA totalMem 32501 mb
[GPU-10347b1b-ad86-a6ac-53c1-c475d85294da] CUDA freeMem 32191 mb
[GPU-10347b1b-ad86-a6ac-53c1-c475d85294da] Compute Capability 7.0
[GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1] CUDA totalMem 32501 mb
[GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1] CUDA freeMem 32191 mb
[GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1] Compute Capability 7.0
[GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3] CUDA totalMem 32501 mb
[GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3] CUDA freeMem 32191 mb
[GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3] Compute Capability 7.0
[GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24] CUDA totalMem 32501 mb
[GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24] CUDA freeMem 32191 mb
[GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24] Compute Capability 7.0
[GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6] CUDA totalMem 32501 mb
[GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6] CUDA freeMem 32191 mb
[GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6] Compute Capability 7.0
[GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3] CUDA totalMem 32501 mb
[GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3] CUDA freeMem 32191 mb
[GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3] Compute Capability 7.0
[GPU-22efb4b3-3744-7c53-659c-682cad051c63] CUDA totalMem 32501 mb
[GPU-22efb4b3-3744-7c53-659c-682cad051c63] CUDA freeMem 32191 mb
[GPU-22efb4b3-3744-7c53-659c-682cad051c63] Compute Capability 7.0
[GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a] CUDA totalMem 32501 mb
[GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a] CUDA freeMem 32191 mb
[GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a] Compute Capability 7.0
time=2025-02-09T17:58:09.299+08:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-22efb4b3-3744-7c53-659c-682cad051c63 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-02-09T17:58:09.300+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
[GIN] 2025/02/09 - 17:58:25 | 200 |      57.474µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/09 - 17:58:27 | 200 |  1.641496864s |       127.0.0.1 | POST     "/api/show"
time=2025-02-09T17:58:28.415+08:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="501.8 GiB" before.free="494.8 GiB" before.free_swap="0 B" now.total="501.8 GiB" now.free="494.6 GiB" now.free_swap="0 B"
initializing /usr/local/nvidia/lib64/libcuda.so.535.54.03
dlsym: cuInit - 0x7fb5b26975b0
dlsym: cuDriverGetVersion - 0x7fb5b26975d0
dlsym: cuDeviceGetCount - 0x7fb5b2697610
dlsym: cuDeviceGet - 0x7fb5b26975f0
dlsym: cuDeviceGetAttribute - 0x7fb5b26976f0
dlsym: cuDeviceGetUuid - 0x7fb5b2697650
dlsym: cuDeviceGetName - 0x7fb5b2697630
dlsym: cuCtxCreate_v3 - 0x7fb5b269ee80
dlsym: cuMemGetInfo_v2 - 0x7fb5b26a98b0
dlsym: cuCtxDestroy - 0x7fb5b26f3f40
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 8
time=2025-02-09T17:58:28.541+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:28.664+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:28.786+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:28.907+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:29.027+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:29.147+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:29.268+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-22efb4b3-3744-7c53-659c-682cad051c63 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:29.387+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
releasing cuda driver library
time=2025-02-09T17:58:29.388+08:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x55f42dec74c0 gpu_count=8
time=2025-02-09T17:58:30.085+08:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
time=2025-02-09T17:58:30.085+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.087+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.087+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.087+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.088+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.089+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.089+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.090+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.090+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.090+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.092+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.092+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.092+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.122+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.122+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.122+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.124+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.124+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.124+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.125+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-22efb4b3-3744-7c53-659c-682cad051c63 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.125+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.125+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.126+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="8.2 GiB"
time=2025-02-09T17:58:30.126+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.126+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.127+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.127+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.128+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.129+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.129+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.129+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.131+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.131+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.131+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.132+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.132+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.132+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.134+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.134+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.134+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.135+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.135+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.135+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.136+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-22efb4b3-3744-7c53-659c-682cad051c63 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.136+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.136+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[31.4 GiB]"
time=2025-02-09T17:58:30.137+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="2.2 GiB"
time=2025-02-09T17:58:30.137+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.137+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=8 available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-22efb4b3-3744-7c53-659c-682cad051c63 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.139+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="2.8 GiB" gpu_zer_overhead="0 B" partial_offload="11.3 GiB" full_offload="11.3 GiB"
time=2025-02-09T17:58:30.140+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.140+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=8 available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-22efb4b3-3744-7c53-659c-682cad051c63 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:30.141+08:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="501.8 GiB" before.free="494.6 GiB" before.free_swap="0 B" now.total="501.8 GiB" now.free="494.5 GiB" now.free_swap="0 B"
initializing /usr/local/nvidia/lib64/libcuda.so.535.54.03
dlsym: cuInit - 0x7fb5b26975b0
dlsym: cuDriverGetVersion - 0x7fb5b26975d0
dlsym: cuDeviceGetCount - 0x7fb5b2697610
dlsym: cuDeviceGet - 0x7fb5b26975f0
dlsym: cuDeviceGetAttribute - 0x7fb5b26976f0
dlsym: cuDeviceGetUuid - 0x7fb5b2697650
dlsym: cuDeviceGetName - 0x7fb5b2697630
dlsym: cuCtxCreate_v3 - 0x7fb5b269ee80
dlsym: cuMemGetInfo_v2 - 0x7fb5b26a98b0
dlsym: cuCtxDestroy - 0x7fb5b26f3f40
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 8
time=2025-02-09T17:58:30.292+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:30.682+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:30.801+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:30.919+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:31.035+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:31.155+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:31.272+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-22efb4b3-3744-7c53-659c-682cad051c63 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
time=2025-02-09T17:58:31.389+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="309.8 MiB"
releasing cuda driver library
time=2025-02-09T17:58:31.389+08:00 level=INFO source=server.go:104 msg="system memory" total="501.8 GiB" free="494.5 GiB" free_swap="0 B"
time=2025-02-09T17:58:31.389+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=8 available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-10347b1b-ad86-a6ac-53c1-c475d85294da library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ad074606-2412-d9a7-2d5b-a064e2ef62e1 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-5c95549f-53d6-5910-4c95-9ae3e631a3f3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-ebc4ecb3-c112-3316-d892-2d89ce832b24 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-e02620d6-328e-c6b8-3301-62db78d3f1f6 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-88c127e9-d4cc-d10d-8338-9efaa75e73c3 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-22efb4b3-3744-7c53-659c-682cad051c63 library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-09T17:58:31.916+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2025-02-09T17:58:31.916+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2025-02-09T17:58:31.916+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2025-02-09T17:58:31.916+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2025-02-09T17:58:31.916+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2025-02-09T17:58:31.916+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
time=2025-02-09T17:58:32.405+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2025-02-09T17:58:32.406+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2025-02-09T17:58:32.406+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2025-02-09T17:58:32.406+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2025-02-09T17:58:32.406+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/home/semtp/notebooks/ollama/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2025-02-09T17:58:32.406+08:00 level=DEBUG source=gpu.go:713 msg="no filter required for library cpu"
time=2025-02-09T17:58:32.406+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 31 --verbose --threads 36 --no-mmap --parallel 1 --port 46403"
time=2025-02-09T17:58:32.406+08:00 level=DEBUG source=server.go:393 msg=subprocess environment="[CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 LD_LIBRARY_PATH=/home/semtp/notebooks/ollama/lib/ollama:/home/semtp/notebooks/ollama/lib/ollama:/home/semtp/notebooks/ollama/lib/ollama/runners/cpu_avx2:/usr/local/nvidia/lib64/:/usr/local/nvidia/lib/ PATH=/usr/local/nvidia/bin:/usr/local/cuda-10.2/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/semtp/notebooks/ollama/bin:/home/semtp/notebooks/ollama/lib/ollama]"
time=2025-02-09T17:58:32.445+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-09T17:58:32.445+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-09T17:58:32.479+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-09T17:58:33.790+08:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-09T17:58:33.835+08:00 level=INFO source=runner.go:937 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=36
time=2025-02-09T17:58:33.843+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:46403"
time=2025-02-09T17:58:33.995+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /data/7859b431efb84f7d88b3e2e1acab4765/ollama_models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<ď...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ä  t", "Ä  a", "i n", "Ä  Ä ", "h e...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  606 tensors
llama_model_loader: - type q6_K:   58 tensors

@rick-github commented on GitHub (Feb 9, 2025):

```
time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
```

OK, this looks like a bug - 31G available and couldn't fit a single layer of 985M.
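
For scale, taking that line at face value: a single layer of 985.5 MiB plus the 479199232-byte minimum reserve (~457 MiB) and the 3.0 GiB full-offload graph comes to roughly 4.4 GiB, a small fraction of the 31.4 GiB reported available, so refusing to place even one layer does look like a bug rather than a genuine VRAM shortage.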


@NewbieCoder282 commented on GitHub (Feb 9, 2025):

> ```
> time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
> ```
>
> OK, this looks like a bug - 31G available and couldn't fit a single layer of 985M.

Thank you so much for your response! Perhaps I should try models with different parameter sizes.


@YonTracks commented on GitHub (Feb 9, 2025):

> ```
> time=2025-02-09T17:58:31.391+08:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-0dddcf80-4c2a-9f37-35eb-4e1f3b51853a library=cuda variant=v12 compute=7.0 driver=12.2 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB" minimum_memory=479199232 layer_size="985.5 MiB" gpu_zer_overhead="0 B" partial_offload="3.0 GiB" full_offload="3.0 GiB"
> ```
>
> OK, this looks like a bug - 31G available and couldn't fit a single layer of 985M.

Sorry if this is a hindrance, but just in case it helps: what is this repeating weights bit? Wow, is that like 500+ GB, 800-900 GB?

 time=2025-02-09T17:58:31.916+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"

@rick-github commented on GitHub (Feb 9, 2025):

time=2025-02-09T17:58:31.916+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"

The problem is the context size. A num_ctx of 8192 requires 38.1 GiB, which is too big to fit in a GPU with 32G of VRAM. Reduce the size of the context and ollama will be able to load the model in VRAM.

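For anyone landing here, a minimal sketch of doing that from the interactive CLI (the tag is just whichever variant you actually run; /set only affects the current session, and /save optionally keeps the setting as a new local tag):

```
$ ollama run deepseek-r1:671b
>>> /set parameter num_ctx 4096
>>> /save deepseek-r1:671b-4k
```

The same override can also be passed per request through the API as options.num_ctx, or baked into a variant with a PARAMETER num_ctx line in a Modelfile.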

@YonTracks commented on GitHub (Feb 9, 2025):

> time=2025-02-09T17:58:31.916+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
>
> The problem is the context size. A num_ctx of 8192 requires 38.1 GiB, which is too big to fit in a GPU with 32G of VRAM. Reduce the size of the context and ollama will be able to load the model in VRAM.

What about OLLAMA_GPU_OVERHEAD:0, is there a way to force 0? Also, maybe that's it.


@YonTracks commented on GitHub (Feb 9, 2025):

Actually, it's that... lmao. Boom!!! Also set the env `OLLAMA_GPU_OVERHEAD:0`.

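If the server is started by hand rather than as a service, the quickest way to test that is to clear the variable just for one run, e.g.:

```
# one-off test run with no per-GPU reservation (assumes a manually started server)
OLLAMA_GPU_OVERHEAD=0 ollama serve
```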

@rick-github commented on GitHub (Feb 9, 2025):

Weights are the "artificial neurons" that comprises the models layers. Most of the layers are similar in structure, these are the "hidden layers" and are considered "repeating". The non-repeating layer is the final or "output" layer.

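In the log above the split is visible directly: memory.weights.repeating (412.9 GiB) plus memory.weights.nonrepeating (725.0 MiB) adds up to memory.weights.total (413.6 GiB), and adding the KV cache (38.1 GiB), the graph buffers and some per-GPU reserve roughly accounts for the memory.required.full figure of 462.2 GiB.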

@rick-github commented on GitHub (Feb 9, 2025):

> What about OLLAMA_GPU_OVERHEAD:0, is there a way to force 0? Also, maybe that's it.

Good catch.


@YonTracks commented on GitHub (Feb 9, 2025):

I am shockingly bad at math, but the numbers looked epic. Far out, they still do; very impressive, wow.
Well done.


@NewbieCoder282 commented on GitHub (Feb 9, 2025):

> time=2025-02-09T17:58:31.916+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=31 layers.model=62 layers.offload=0 layers.split="" memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="244.1 GiB" memory.required.full="462.2 GiB" memory.required.partial="0 B" memory.required.kv="38.1 GiB" memory.required.allocations="[0 B 0 B 0 B 0 B 0 B 0 B 0 B 0 B]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
>
> The problem is the context size. A num_ctx of 8192 requires 38.1 GiB, which is too big to fit in a GPU with 32G of VRAM. Reduce the size of the context and ollama will be able to load the model in VRAM.

How is the 38.1 GiB here calculated?
Even with the default parameter num_ctx=2048, I am still unable to load the model onto the GPU.

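For what it's worth, a back-of-the-envelope sketch that reproduces the logged figure, assuming a plain f16 K/V cache sized from the loader metadata above (61 blocks, 128 KV heads, key length 192, value length 128):

```
# 2 bytes (f16) x kv_heads x (key_len + value_len) x layers x num_ctx
$ echo $(( 2 * 128 * (192 + 128) * 61 * 8192 ))
40936407040
```

That is 38.1 GiB, matching memory.required.kv in the log. At num_ctx=2048 the same cache is only about 9.5 GiB, so the KV cache itself is not what blocks offload at the default context; the gpu_overhead setting discussed below is.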

@rick-github commented on GitHub (Feb 9, 2025):

YonTracks hit the nail on the head: you have set `OLLAMA_GPU_OVERHEAD` too large.

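That also explains the earlier numbers: memory.gpu_overhead="244.1 GiB" spread across the eight V100s is roughly 30.5 GiB reserved per GPU, leaving essentially none of the 31.4 GiB available for layers. A minimal sketch of clearing it persistently on a systemd install (assumes the standard Linux service name; skip this if the variable was exported some other way):

```
# open an override for the service and remove (or zero) the oversized setting
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_GPU_OVERHEAD=0"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```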

@NewbieCoder282 commented on GitHub (Feb 9, 2025):

> YonTracks hit the nail on the head: you have set `OLLAMA_GPU_OVERHEAD` too large.

OK, it works. Thank you and YonTracks so much for your responses! I truly appreciate it!


@rabbitlss commented on GitHub (Feb 10, 2025):

@rick-github

> 0.5.7 puts runners in /usr/local/lib/ollama.

I installed Ollama offline on a Linux machine from the tgz file at https://github.com/ollama/ollama/releases (version 0.5.7, https://github.com/ollama/ollama/releases/download/v0.5.7/ollama-linux-amd64.tgz), but I still get the following, and the GPU resources are not used at all:

time=2025-02-10T19:56:38.961+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-10T19:56:38.961+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
time=2025-02-10T19:56:38.961+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-10T19:56:39.438+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-e60f4fea-a93d-f491-3afe-6a77562c6790 library=cuda variant=v11 compute=8.0 driver=11.8 name="NVIDIA A100-SXM4-80GB" total="79.3 GiB" available="78.9 GiB"
time=2025-02-10T19:56:39.438+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-d740f156-9d27-e0e8-7334-db03ccc6a55c library=cuda variant=v11 compute=8.0 driver=11.8 name="NVIDIA A100-SXM4-80GB" total="79.3 GiB" available="78.9 GiB"

This is because my ollama binary is at /usr/bin/ollama; running "command -v ollama" gives:
/usr/bin/ollama

Manually, I put the ollama runners into /usr/local/lib/ollama; "ls -l /usr/local/lib/ollama" shows:
/usr/local/lib/ollama
/usr/local/lib/ollama/ollama

Hoping for your reply!


@rick-github commented on GitHub (Feb 10, 2025):

How do you extract the tar file?


@rabbitlss commented on GitHub (Feb 10, 2025):

@rick-github

How do you extract the tar file?

By running the commands below:
tar -xzvf ollama-linux-amd64.tgz &&
mv bin/ollama /usr/bin/ollama &&
useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama &&
usermod -a -G ollama ollama &&
mv ollama.service /etc/systemd/system/ &&
ln -sf /lib64/liblzma.so.5 /usr/local/conda/lib/liblzma.so.5


@rick-github commented on GitHub (Feb 10, 2025):

https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903


@rabbitlss commented on GitHub (Feb 10, 2025):

> [#8532 (comment)](https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903)
@rick-github

I checked the tgz file and found that the ollama binary is at /bin/ollama and the runners are in /lib/runners/. Should I also put the runners directory into /usr/local/lib/ollama/, given that I have already put the /bin/ollama binary into /usr/local/bin/, and also make a link between "/usr/bin/ollama" and "/usr/local/bin/ollama": ln -s /usr/local/bin/ollama /usr/bin/ollama?
That way, /usr/bin/ollama -> /usr/local/bin/ollama -> /usr/local/lib/ollama/runners/*

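Rather than copying pieces around and symlinking, a simpler route is to extract the whole archive into a single prefix so that bin/ollama and lib/ollama keep their relative layout. A minimal sketch following the documented manual install (/usr is the documented default prefix; /usr/local works the same way as long as bin/ and lib/ land together):

```
# extract bin/ollama and lib/ollama into the same prefix
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
# the GPU runners should then be discovered next to the binary, under /usr/lib/ollama
ls /usr/lib/ollama
```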

@rabbitlss commented on GitHub (Feb 11, 2025):

> [#8532 (comment)](https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903)
@rick-github
Thanks a lot! It works now.

Reference: github-starred/ollama#5816