[GH-ISSUE #8995] Ollama is splitting the model between CPU and one GPU instead of using second GPU #5843

Closed
opened 2026-04-12 17:11:02 -05:00 by GiteaMirror · 7 comments

Originally created by @tu-step on GitHub (Feb 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8995

What is the issue?

Problem description

My Setup

I use Ollama on my laptop with an external GPU.
The laptop has an internal Nvidia Quadro M2000M.
Connected over Thunderbolt 3 is a Razer Core X Chroma eGPU enclosure with an Nvidia RTX 3070 inside.
OS: Debian 12

Expected behaviour

When I load a model that requires more than 8 GB of VRAM, I expect Ollama to share the load between both GPUs.

Actual behaviour

Ollama splits up the model between the RTX 3070 and the CPU.

Things I've tried

Setting environment variables

OLLAMA_SCHED_SPREAD

I've added Environment="OLLAMA_SCHED_SPREAD=1" to ollama.service, but this doesn't appear to do anything.
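
For reference, a minimal sketch of how such an override is typically applied (the systemd drop-in mechanism is assumed here; any method that puts the variable in the service environment works). The server-config line in the log below does show OLLAMA_SCHED_SPREAD:true, so the variable is reaching the server:

```shell
# Sketch: set the variable via a systemd override for ollama.service.
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_SCHED_SPREAD=1"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```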

CUDA_VISIBLE_DEVICES

Using nvidia-smi, I found that GPU 0 is the Quadro M2000M and GPU 1 is the RTX 3070 (see the ordering note after this list):

  • Environment="CUDA_VISIBLE_DEVICES=0": the model is split between the RTX 3070 and the CPU.
  • Environment="CUDA_VISIBLE_DEVICES=1": the model is split between the Quadro M2000M and the CPU.
  • Environment="CUDA_VISIBLE_DEVICES=0,1": the model is split between the RTX 3070 and the CPU.
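
A likely explanation for the apparently swapped indices: the CUDA runtime enumerates devices fastest-first by default, while nvidia-smi lists them in PCI-bus order, so device 0 as seen by the runner can be the RTX 3070 even though nvidia-smi reports it as GPU 1. As a hedged sketch (CUDA_DEVICE_ORDER and UUID-based selection are generic CUDA runtime options, not Ollama-specific settings), the selection can be made unambiguous like this:

```shell
# List GPUs with their UUIDs; the same UUIDs appear in the Ollama log below.
nvidia-smi -L

# Example override lines that avoid index ambiguity:
#   [Service]
#   Environment="CUDA_DEVICE_ORDER=PCI_BUS_ID"
#   Environment="CUDA_VISIBLE_DEVICES=GPU-74a4c527-563a-d089-7d1f-2acc60c24e9a,GPU-0e2f9e2f-4ac0-eaec-c5bc-951629435b10"
```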

Upgrading ollama

I encountered the bug using version 0.5.7.
I upgraded to version 0.5.8-rc12 and still have the same problem.
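
For reference, a typical way to perform such an upgrade on Linux, assuming the standard install-script setup (consistent with the /usr/local paths in the log below), is to re-run the official install script:

```shell
# Re-running the install script upgrades an existing installation in place.
curl -fsSL https://ollama.com/install.sh | sh
```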

Relevant log output

Feb 10 14:37:58 debian systemd[1]: Started ollama.service - Ollama Service.
Feb 10 14:37:58 debian ollama[1538]: 2025/02/10 14:37:58 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Feb 10 14:37:58 debian ollama[1538]: time=2025-02-10T14:37:58.363+01:00 level=INFO source=images.go:432 msg="total blobs: 49"
Feb 10 14:37:58 debian ollama[1538]: time=2025-02-10T14:37:58.365+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Feb 10 14:37:58 debian ollama[1538]: time=2025-02-10T14:37:58.366+01:00 level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.8-rc12)"
Feb 10 14:37:58 debian ollama[1538]: time=2025-02-10T14:37:58.367+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Feb 10 14:37:58 debian ollama[1538]: time=2025-02-10T14:37:58.799+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-0e2f9e2f-4ac0-eaec-c5bc-951629435b10 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3070" total="7.7 GiB" available="7.5 GiB"
Feb 10 14:37:58 debian ollama[1538]: time=2025-02-10T14:37:58.799+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-74a4c527-563a-d089-7d1f-2acc60c24e9a library=cuda variant=v11 compute=5.0 driver=12.8 name="Quadro M2000M" total="3.9 GiB" available="3.9 GiB"
Feb 10 14:39:56 debian ollama[1538]: [GIN] 2025/02/10 - 14:39:56 | 200 |     731.176µs |       127.0.0.1 | HEAD     "/"
Feb 10 14:39:56 debian ollama[1538]: [GIN] 2025/02/10 - 14:39:56 | 200 |   31.076599ms |       127.0.0.1 | POST     "/api/show"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.190+01:00 level=INFO source=server.go:100 msg="system memory" total="23.4 GiB" free="21.5 GiB" free_swap="976.0 MiB"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.190+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=37 layers.split="" memory.available="[7.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.9 GiB" memory.required.partial="7.4 GiB" memory.required.kv="384.0 MiB" memory.required.allocations="[7.4 GiB]" memory.weights.total="7.7 GiB" memory.weights.repeating="7.1 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="307.0 MiB" memory.graph.partial="916.1 MiB"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.191+01:00 level=INFO source=server.go:381 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed --ctx-size 2048 --batch-size 512 --n-gpu-layers 37 --threads 4 --parallel 1 --port 36555"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.191+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.191+01:00 level=INFO source=server.go:558 msg="waiting for llama runner to start responding"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.191+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server error"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.208+01:00 level=INFO source=runner.go:936 msg="starting go runner"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.208+01:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=4
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.208+01:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:36555"
Feb 10 14:39:57 debian ollama[1538]: time=2025-02-10T14:39:57.445+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server loading model"
Feb 10 14:39:57 debian ollama[1538]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 10 14:39:57 debian ollama[1538]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 10 14:39:57 debian ollama[1538]: ggml_cuda_init: found 1 CUDA devices:
Feb 10 14:39:57 debian ollama[1538]:   Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes
Feb 10 14:39:57 debian ollama[1538]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Feb 10 14:39:57 debian ollama[1538]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Feb 10 14:39:57 debian ollama[1538]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3070) - 7689 MiB free
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed (version GGUF V3 (latest))
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 Coder 14B Instruct
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Coder
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   5:                         general.size_label str              = 14B
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-C...
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 Coder 14B
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-C...
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  12:                               general.tags arr[str,6]       = ["code", "codeqwen", "chat", "qwen", ...
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  22:                          general.file_type u32              = 15
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - kv  33:               general.quantization_version u32              = 2
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - type  f32:  241 tensors
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - type q4_K:  289 tensors
Feb 10 14:39:57 debian ollama[1538]: llama_model_loader: - type q6_K:   49 tensors
Feb 10 14:39:58 debian ollama[1538]: llm_load_vocab: special tokens cache size = 22
Feb 10 14:39:58 debian ollama[1538]: llm_load_vocab: token to piece cache size = 0.9310 MB
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: arch             = qwen2
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: vocab type       = BPE
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_vocab          = 152064
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_merges         = 151387
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: vocab_only       = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_ctx_train      = 32768
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_embd           = 5120
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_layer          = 48
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_head           = 40
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_head_kv        = 8
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_rot            = 128
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_swa            = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_embd_head_k    = 128
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_embd_head_v    = 128
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_gqa            = 5
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_embd_k_gqa     = 1024
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_embd_v_gqa     = 1024
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_ff             = 13824
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_expert         = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_expert_used    = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: causal attn      = 1
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: pooling type     = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: rope type        = 2
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: rope scaling     = linear
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: freq_base_train  = 1000000.0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: freq_scale_train = 1
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: n_ctx_orig_yarn  = 32768
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: rope_finetuned   = unknown
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: ssm_d_conv       = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: ssm_d_inner      = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: ssm_d_state      = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: ssm_dt_rank      = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: model type       = 14B
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: model ftype      = Q4_K - Medium
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: model params     = 14.77 B
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: model size       = 8.37 GiB (4.87 BPW)
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: general.name     = Qwen2.5 Coder 14B Instruct
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
Feb 10 14:39:58 debian ollama[1538]: llm_load_print_meta: max token length = 256
Feb 10 14:40:15 debian ollama[1538]: llm_load_tensors: offloading 37 repeating layers to GPU
Feb 10 14:40:15 debian ollama[1538]: llm_load_tensors: offloaded 37/49 layers to GPU
Feb 10 14:40:15 debian ollama[1538]: llm_load_tensors:        CUDA0 model buffer size =  5766.09 MiB
Feb 10 14:40:15 debian ollama[1538]: llm_load_tensors:   CPU_Mapped model buffer size =  8566.04 MiB
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: n_seq_max     = 1
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: n_ctx         = 2048
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: n_ctx_per_seq = 2048
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: n_batch       = 512
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: n_ubatch      = 512
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: flash_attn    = 0
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: freq_base     = 1000000.0
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: freq_scale    = 1
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
Feb 10 14:40:18 debian ollama[1538]: llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
Feb 10 14:40:18 debian ollama[1538]: llama_kv_cache_init:      CUDA0 KV buffer size =   296.00 MiB
Feb 10 14:40:18 debian ollama[1538]: llama_kv_cache_init:        CPU KV buffer size =    88.00 MiB
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: KV self size  =  384.00 MiB, K (f16):  192.00 MiB, V (f16):  192.00 MiB
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model:        CPU  output buffer size =     0.60 MiB
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model:      CUDA0 compute buffer size =   916.08 MiB
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model:  CUDA_Host compute buffer size =    14.01 MiB
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: graph nodes  = 1686
Feb 10 14:40:18 debian ollama[1538]: llama_new_context_with_model: graph splits = 158 (with bs=512), 3 (with bs=1)
Feb 10 14:40:18 debian ollama[1538]: time=2025-02-10T14:40:18.518+01:00 level=INFO source=server.go:597 msg="llama runner started in 21.33 seconds"
Feb 10 14:40:18 debian ollama[1538]: [GIN] 2025/02/10 - 14:40:18 | 200 | 21.819790133s |       127.0.0.1 | POST     "/api/generate"



Mon Feb 10 14:41:39 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Quadro M2000M                  On  |   00000000:01:00.0 Off |                  N/A |
| N/A   42C    P8             N/A /  200W |       9MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3070        Off |   00000000:3D:00.0 Off |                  N/A |
|  0%   32C    P0             70W /  280W |    7180MiB /   8192MiB |     18%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1743      G   /usr/lib/xorg/Xorg                        2MiB |
|    1   N/A  N/A           63931      C   /usr/local/bin/ollama                  7170MiB |
+-----------------------------------------------------------------------------------------+

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 17:11:02 -05:00

@rick-github commented on GitHub (Feb 10, 2025):

The M2000M has a compute capability of 5.0 and support for it may be suffering. https://github.com/ollama/ollama/pull/8567 may fix it.


@rick-github commented on GitHub (Feb 10, 2025):

Actually, https://github.com/ollama/ollama/pull/6983 may fix it.


@rick-github commented on GitHub (Feb 10, 2025):

You can try setting OLLAMA_LLM_LIBRARY=cuda_v11 in the server environment to force ollama to use the lowest common denominator, although with the recent build changes, I'm not sure if that is currently supported.
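
A minimal sketch of applying that suggestion, assuming the same systemd-managed install as above:

```shell
# Force the cuda_v11 LLM library via a systemd override, then restart the service.
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_LLM_LIBRARY=cuda_v11"
sudo systemctl restart ollama
```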


@tu-step commented on GitHub (Feb 10, 2025):

#6983 looks like it could be a solution.

I've tried it with OLLAMA_LLM_LIBRARY=cuda_v11.
According to the logs it is using cuda_v11, but it still doesn't use the M2000M:

Feb 10 17:37:53 debian systemd[1]: Started ollama.service - Ollama Service.
Feb 10 17:37:54 debian ollama[1741]: 2025/02/10 17:37:54 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:cuda_v11 OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Feb 10 17:37:54 debian ollama[1741]: time=2025-02-10T17:37:54.077+01:00 level=INFO source=images.go:432 msg="total blobs: 49"
Feb 10 17:37:54 debian ollama[1741]: time=2025-02-10T17:37:54.080+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Feb 10 17:37:54 debian ollama[1741]: time=2025-02-10T17:37:54.082+01:00 level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.8-rc12)"
Feb 10 17:37:54 debian ollama[1741]: time=2025-02-10T17:37:54.085+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Feb 10 17:37:56 debian ollama[1741]: time=2025-02-10T17:37:56.275+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-0e2f9e2f-4ac0-eaec-c5bc-951629435b10 library=cuda variant=v12 compute=8.6 driver=12.8 name="NVIDIA GeForce RTX 3070" total="7.7 GiB" available="7.5 GiB"
Feb 10 17:37:56 debian ollama[1741]: time=2025-02-10T17:37:56.275+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-74a4c527-563a-d089-7d1f-2acc60c24e9a library=cuda variant=v11 compute=5.0 driver=12.8 name="Quadro M2000M" total="3.9 GiB" available="3.9 GiB"
Feb 10 17:38:47 debian ollama[1741]: [GIN] 2025/02/10 - 17:38:47 | 200 |     274.033µs |       127.0.0.1 | HEAD     "/"
Feb 10 17:38:47 debian ollama[1741]: [GIN] 2025/02/10 - 17:38:47 | 200 |   32.901179ms |       127.0.0.1 | POST     "/api/show"
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.908+01:00 level=INFO source=server.go:100 msg="system memory" total="23.4 GiB" free="21.6 GiB" free_swap="976.0 MiB"
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.909+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=37 layers.split="" memory.available="[7.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.9 GiB" memory.required.partial="7.4 GiB" memory.required.kv="384.0 MiB" memory.required.allocations="[7.4 GiB]" memory.weights.total="7.7 GiB" memory.weights.repeating="7.1 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="307.0 MiB" memory.graph.partial="916.1 MiB"
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.909+01:00 level=INFO source=server.go:245 msg="using requested gpu library" requested=cuda_v11
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.909+01:00 level=INFO source=server.go:381 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed --ctx-size 2048 --batch-size 512 --n-gpu-layers 37 --threads 4 --parallel 1 --port 34515"
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.909+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.909+01:00 level=INFO source=server.go:558 msg="waiting for llama runner to start responding"
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.910+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server error"
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.927+01:00 level=INFO source=runner.go:936 msg="starting go runner"
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.927+01:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=4
Feb 10 17:38:47 debian ollama[1741]: time=2025-02-10T17:38:47.928+01:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:34515"
Feb 10 17:38:48 debian ollama[1741]: time=2025-02-10T17:38:48.164+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server loading model"
Feb 10 17:38:48 debian ollama[1741]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 10 17:38:48 debian ollama[1741]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 10 17:38:48 debian ollama[1741]: ggml_cuda_init: found 1 CUDA devices:
Feb 10 17:38:48 debian ollama[1741]:   Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes
Feb 10 17:38:48 debian ollama[1741]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Feb 10 17:38:48 debian ollama[1741]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 10 17:38:48 debian ollama[1741]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 10 17:38:48 debian ollama[1741]: ggml_cuda_init: found 1 CUDA devices:
Feb 10 17:38:48 debian ollama[1741]:   Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes
Feb 10 17:38:48 debian ollama[1741]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v11/libggml-cuda.so
Feb 10 17:38:48 debian ollama[1741]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Feb 10 17:38:48 debian ollama[1741]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3070) - 7689 MiB free
Feb 10 17:38:50 debian ollama[1741]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3070) - 7609 MiB free
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed (version GGUF V3 (latest))
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   0:                       general.architecture str              = qwen2
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 Coder 14B Instruct
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Coder
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   5:                         general.size_label str              = 14B
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   6:                            general.license str              = apache-2.0
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-C...
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 Coder 14B
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-C...
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  12:                               general.tags arr[str,6]       = ["code", "codeqwen", "chat", "qwen", ...
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  22:                          general.file_type u32              = 15
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - kv  33:               general.quantization_version u32              = 2
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - type  f32:  241 tensors
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - type q4_K:  289 tensors
Feb 10 17:38:50 debian ollama[1741]: llama_model_loader: - type q6_K:   49 tensors
Feb 10 17:38:50 debian ollama[1741]: llm_load_vocab: special tokens cache size = 22
Feb 10 17:38:50 debian ollama[1741]: llm_load_vocab: token to piece cache size = 0.9310 MB
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: arch             = qwen2
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: vocab type       = BPE
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_vocab          = 152064
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_merges         = 151387
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: vocab_only       = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_ctx_train      = 32768
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_embd           = 5120
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_layer          = 48
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_head           = 40
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_head_kv        = 8
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_rot            = 128
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_swa            = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_embd_head_k    = 128
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_embd_head_v    = 128
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_gqa            = 5
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_embd_k_gqa     = 1024
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_embd_v_gqa     = 1024
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_ff             = 13824
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_expert         = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_expert_used    = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: causal attn      = 1
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: pooling type     = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: rope type        = 2
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: rope scaling     = linear
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: freq_base_train  = 1000000.0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: freq_scale_train = 1
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: n_ctx_orig_yarn  = 32768
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: rope_finetuned   = unknown
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: ssm_d_conv       = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: ssm_d_inner      = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: ssm_d_state      = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: ssm_dt_rank      = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: model type       = 14B
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: model ftype      = Q4_K - Medium
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: model params     = 14.77 B
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: model size       = 8.37 GiB (4.87 BPW)
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: general.name     = Qwen2.5 Coder 14B Instruct
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: LF token         = 148848 'ÄĬ'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
Feb 10 17:38:50 debian ollama[1741]: llm_load_print_meta: max token length = 256
Feb 10 17:39:08 debian ollama[1741]: llm_load_tensors: offloading 37 repeating layers to GPU
Feb 10 17:39:08 debian ollama[1741]: llm_load_tensors: offloaded 37/49 layers to GPU
Feb 10 17:39:08 debian ollama[1741]: llm_load_tensors:        CUDA0 model buffer size =  2845.92 MiB
Feb 10 17:39:08 debian ollama[1741]: llm_load_tensors:        CUDA0 model buffer size =  2920.17 MiB
Feb 10 17:39:08 debian ollama[1741]: llm_load_tensors:   CPU_Mapped model buffer size =  8566.04 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: n_seq_max     = 1
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: n_ctx         = 2048
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: n_ctx_per_seq = 2048
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: n_batch       = 512
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: n_ubatch      = 512
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: flash_attn    = 0
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: freq_base     = 1000000.0
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: freq_scale    = 1
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
Feb 10 17:39:11 debian ollama[1741]: llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
Feb 10 17:39:11 debian ollama[1741]: llama_kv_cache_init:      CUDA0 KV buffer size =   144.00 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_kv_cache_init:      CUDA0 KV buffer size =   152.00 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_kv_cache_init:        CPU KV buffer size =    88.00 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: KV self size  =  384.00 MiB, K (f16):  192.00 MiB, V (f16):  192.00 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model:        CPU  output buffer size =     0.60 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model:      CUDA0 compute buffer size =   916.08 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model:      CUDA0 compute buffer size =   204.00 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model:  CUDA_Host compute buffer size =    14.01 MiB
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: graph nodes  = 1686
Feb 10 17:39:11 debian ollama[1741]: llama_new_context_with_model: graph splits = 159 (with bs=512), 4 (with bs=1)
Feb 10 17:39:11 debian ollama[1741]: time=2025-02-10T17:39:11.244+01:00 level=INFO source=server.go:597 msg="llama runner started in 23.33 seconds"

@rick-github commented on GitHub (Feb 10, 2025):

You could try rolling back to 0.5.0; `OLLAMA_LLM_LIBRARY` works there, but other stuff might not.

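For anyone following the same path, here is a minimal sketch of a systemd drop-in that pins the runner library and, optionally, asks the scheduler to spread layers across GPUs, assuming the stock `ollama.service` installed by the Linux install script:

```
# /etc/systemd/system/ollama.service.d/override.conf
# (sudo systemctl edit ollama creates this file)
[Service]
# Force the CUDA v11 runner so the compute-5.0 Quadro M2000M stays eligible
Environment="OLLAMA_LLM_LIBRARY=cuda_v11"
# Ask the scheduler to spread layers across all detected GPUs
Environment="OLLAMA_SCHED_SPREAD=1"
```

Apply with `sudo systemctl daemon-reload && sudo systemctl restart ollama`, then check the `server config env` line in the journal to confirm both values were picked up.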

@tu-step commented on GitHub (Feb 10, 2025):

I rolled back to 0.5.0 but it didn't fix the issue.

Using `nvidia-smi` I can confirm that it's using cuda v11:

Mon Feb 10 18:31:24 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Quadro M2000M                  On  |   00000000:01:00.0 Off |                  N/A |
| N/A   45C    P8             N/A /  200W |       9MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce RTX 3070        Off |   00000000:3D:00.0 Off |                  N/A |
| 53%   32C    P0             65W /  280W |    7504MiB /   8192MiB |      5%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1869      G   /usr/lib/xorg/Xorg                        2MiB |
|    1   N/A  N/A           16300      C   .../cuda_v11/ollama_llama_server       7494MiB |
+-----------------------------------------------------------------------------------------+
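
If it is unclear which card the runner actually landed on, a narrower `nvidia-smi` query is a quick check; these are standard `nvidia-smi` flags, shown here only as a sketch:

```
# Per-GPU memory use, refreshed every 2 seconds
nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv -l 2

# Compute processes only; gpu_uuid matches the ids in ollama's "inference compute" startup lines
nvidia-smi --query-compute-apps=gpu_uuid,pid,process_name,used_gpu_memory --format=csv
```

The UUID column makes it easy to tell whether the allocation sits on the RTX 3070 or the Quadro M2000M regardless of index order.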

@tu-step commented on GitHub (Aug 23, 2025):

I tried it with the latest version and it works now.
