[GH-ISSUE #9063] CPU usage on low vram and nvidia_uvm enabled #52412

Closed
opened 2026-04-28 23:10:56 -05:00 by GiteaMirror · 13 comments

Originally created by @sapphirepro on GitHub (Feb 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9063

What is the issue?

Using the latest version: 0.5.8.
OS: openSUSE Tumbleweed x86_64.

The problem is that the GPU is short on VRAM. The nvidia_uvm kernel module is loaded, which provides something similar to unified memory, so in principle the GPU should be able to use system memory for GPU work. But Ollama seems to fall back to hybrid compute instead, splitting the AI workload between CPU and GPU. How can I force it to never use the CPU for inference, and instead let nvidia_uvm borrow RAM for the GPU rather than running CPU-side compute?
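
The module's presence can be confirmed with a standard check:

lsmod | grep nvidia_uvm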

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.8

GiteaMirror added the bug label 2026-04-28 23:10:56 -05:00

@rick-github commented on GitHub (Feb 13, 2025):

Set GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 in the server environment.
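
A minimal sketch of the two usual ways to set it, assuming a default Linux install (the ollama.service unit name comes from the stock installer; adjust if yours differs):

# One-off, running the server in the foreground:
GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 /usr/local/bin/ollama serve

# For a systemd-managed server, add an override:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"
sudo systemctl daemon-reload
sudo systemctl restart ollama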


@sapphirepro commented on GitHub (Feb 13, 2025):

> GGML_CUDA_ENABLE_UNIFIED_MEMORY=1

Maybe I am a bit dumb. I tried this but got some sort of error. How do I use it correctly?

export env GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 /usr/local/bin/ollama serve
bash: export: `/usr/local/bin/ollama': not a valid identifier


@rick-github commented on GitHub (Feb 13, 2025):

GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 /usr/local/bin/ollama serve

@sapphirepro commented on GitHub (Feb 13, 2025):

> GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 /usr/local/bin/ollama serve

Result below. All CPU cores are still at ~60% usage, and while GPU RAM is in use, GPU processing is at only 5-15%.

Sapphire@SapphirePro:~> GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 /usr/local/bin/ollama serve 2025/02/13 10:59:06 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/Sapphire/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-02-13T10:59:06.828+01:00 level=INFO source=images.go:432 msg="total blobs: 30" time=2025-02-13T10:59:06.829+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0" time=2025-02-13T10:59:06.829+01:00 level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.8)" time=2025-02-13T10:59:06.829+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs" time=2025-02-13T10:59:06.939+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-830dad3a-2751-63ee-2299-65a3bb9dcf1e library=cuda variant=v12 compute=6.1 driver=12.8 name="Quadro P3000" total="5.9 GiB" available="4.6 GiB" time=2025-02-13T10:59:14.697+01:00 level=INFO source=server.go:100 msg="system memory" total="62.7 GiB" free="41.4 GiB" free_swap="0 B" time=2025-02-13T10:59:14.698+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=19 layers.split="" memory.available="[4.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.1 GiB" memory.required.partial="4.6 GiB" memory.required.kv="384.0 MiB" memory.required.allocations="[4.6 GiB]" memory.weights.total="7.7 GiB" memory.weights.repeating="7.1 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="307.0 MiB" memory.graph.partial="916.1 MiB" time=2025-02-13T10:59:14.699+01:00 level=INFO source=server.go:381 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /home/Sapphire/.ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed --ctx-size 2048 --batch-size 512 --n-gpu-layers 19 --threads 4 --parallel 1 --port 36721" time=2025-02-13T10:59:14.700+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1 time=2025-02-13T10:59:14.700+01:00 level=INFO source=server.go:558 msg="waiting for llama runner to start responding" time=2025-02-13T10:59:14.700+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server error" time=2025-02-13T10:59:14.717+01:00 level=INFO source=runner.go:936 msg="starting go runner" time=2025-02-13T10:59:14.717+01:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=4 time=2025-02-13T10:59:14.717+01:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:36721" ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: Quadro P3000, compute capability 6.1, VMM: 
yes load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so llama_load_model_from_file: using device CUDA0 (Quadro P3000) - 4700 MiB free llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /home/Sapphire/.ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Qwen2.5 Coder 14B Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Qwen2.5-Coder llama_model_loader: - kv 5: general.size_label str = 14B llama_model_loader: - kv 6: general.license str = apache-2.0 llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-C... llama_model_loader: - kv 8: general.base_model.count u32 = 1 llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 Coder 14B llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-C... llama_model_loader: - kv 12: general.tags arr[str,6] = ["code", "codeqwen", "chat", "qwen", ... llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"] llama_model_loader: - kv 14: qwen2.block_count u32 = 48 llama_model_loader: - kv 15: qwen2.context_length u32 = 32768 llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120 llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824 llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40 llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8 llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 22: general.file_type u32 = 15 llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... time=2025-02-13T10:59:14.951+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server loading model" llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645 llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643 llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
llama_model_loader: - kv 33: general.quantization_version u32 = 2 llama_model_loader: - type f32: 241 tensors llama_model_loader: - type q4_K: 289 tensors llama_model_loader: - type q6_K: 49 tensors llm_load_vocab: special tokens cache size = 22 llm_load_vocab: token to piece cache size = 0.9310 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = qwen2 llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 152064 llm_load_print_meta: n_merges = 151387 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 5120 llm_load_print_meta: n_layer = 48 llm_load_print_meta: n_head = 40 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 5 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-06 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 13824 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 2 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 14B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 14.77 B llm_load_print_meta: model size = 8.37 GiB (4.87 BPW) llm_load_print_meta: general.name = Qwen2.5 Coder 14B Instruct llm_load_print_meta: BOS token = 151643 '<|endoftext|>' llm_load_print_meta: EOS token = 151645 '<|im_end|>' llm_load_print_meta: EOT token = 151645 '<|im_end|>' llm_load_print_meta: PAD token = 151643 '<|endoftext|>' llm_load_print_meta: LF token = 148848 'ÄĬ' llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' llm_load_print_meta: EOG token = 151643 '<|endoftext|>' llm_load_print_meta: EOG token = 151645 '<|im_end|>' llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' llm_load_print_meta: EOG token = 151663 '<|repo_name|>' llm_load_print_meta: EOG token = 151664 '<|file_sep|>' llm_load_print_meta: max token length = 256 llm_load_tensors: offloading 19 repeating layers to GPU llm_load_tensors: offloaded 19/49 layers to GPU llm_load_tensors: CUDA0 model buffer size = 3012.34 MiB llm_load_tensors: CPU_Mapped model buffer size = 8566.04 MiB llama_new_context_with_model: n_seq_max = 1 llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_ctx_per_seq = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 1000000.0 
llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1 llama_kv_cache_init: CUDA0 KV buffer size = 152.00 MiB llama_kv_cache_init: CPU KV buffer size = 232.00 MiB llama_new_context_with_model: KV self size = 384.00 MiB, K (f16): 192.00 MiB, V (f16): 192.00 MiB llama_new_context_with_model: CPU output buffer size = 0.60 MiB llama_new_context_with_model: CUDA0 compute buffer size = 916.08 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 14.01 MiB llama_new_context_with_model: graph nodes = 1686 llama_new_context_with_model: graph splits = 410 (with bs=512), 3 (with bs=1) time=2025-02-13T10:59:16.960+01:00 level=INFO source=server.go:597 msg="llama runner started in 2.26 seconds" llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from /home/Sapphire/.ollama/models/blobs/sha256-ac9bc7a69dab38da1c790838955f1293420b55ab555ef6b4615efa1c1507b1ed (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Qwen2.5 Coder 14B Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Qwen2.5-Coder llama_model_loader: - kv 5: general.size_label str = 14B llama_model_loader: - kv 6: general.license str = apache-2.0 llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-C... llama_model_loader: - kv 8: general.base_model.count u32 = 1 llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 Coder 14B llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-C... llama_model_loader: - kv 12: general.tags arr[str,6] = ["code", "codeqwen", "chat", "qwen", ... llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"] llama_model_loader: - kv 14: qwen2.block_count u32 = 48 llama_model_loader: - kv 15: qwen2.context_length u32 = 32768 llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120 llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 13824 llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40 llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8 llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 22: general.file_type u32 = 15 llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... 
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645 llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643 llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... llama_model_loader: - kv 33: general.quantization_version u32 = 2 llama_model_loader: - type f32: 241 tensors llama_model_loader: - type q4_K: 289 tensors llama_model_loader: - type q6_K: 49 tensors llm_load_vocab: special tokens cache size = 22 llm_load_vocab: token to piece cache size = 0.9310 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = qwen2 llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 152064 llm_load_print_meta: n_merges = 151387 llm_load_print_meta: vocab_only = 1 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = all F32 llm_load_print_meta: model params = 14.77 B llm_load_print_meta: model size = 8.37 GiB (4.87 BPW) llm_load_print_meta: general.name = Qwen2.5 Coder 14B Instruct llm_load_print_meta: BOS token = 151643 '<|endoftext|>' llm_load_print_meta: EOS token = 151645 '<|im_end|>' llm_load_print_meta: EOT token = 151645 '<|im_end|>' llm_load_print_meta: PAD token = 151643 '<|endoftext|>' llm_load_print_meta: LF token = 148848 'ÄĬ' llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' llm_load_print_meta: EOG token = 151643 '<|endoftext|>' llm_load_print_meta: EOG token = 151645 '<|im_end|>' llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' llm_load_print_meta: EOG token = 151663 '<|repo_name|>' llm_load_print_meta: EOG token = 151664 '<|file_sep|>' llm_load_print_meta: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-02-13T10:59:17.721+01:00 level=WARN source=runner.go:129 msg="truncating input prompt" limit=2048 prompt=3102 keep=4 new=2048 [GIN] 2025/02/13 - 10:59:53 | 200 | 1.376101ms | 127.0.0.1 | GET "/api/tags"


@rick-github commented on GitHub (Feb 13, 2025):

https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900


@sapphirepro commented on GitHub (Feb 13, 2025):

> #7584 (comment)

Hmm. I read it but fail to understand the solution. What does it mean to override "num_gpu"?


@rick-github commented on GitHub (Feb 13, 2025):

https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650


@sapphirepro commented on GitHub (Feb 13, 2025):

> #6950 (comment)

Still no improvement. Moreover, the balance looks like this, even though the GPU has 2 GB free:

Sapphire@SapphirePro:~> /usr/local/bin/ollama ps
NAME                 ID              SIZE     PROCESSOR          UNTIL
qwen2.5-coder:14b    3028237cc8c5    14 GB    69%/31% CPU/GPU    29 minutes from now


@rick-github commented on GitHub (Feb 13, 2025):

Server logs as text attachment.
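
Assuming a systemd install with the stock unit name, something like this captures them:

journalctl -u ollama --no-pager > logs.txt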


@sapphirepro commented on GitHub (Feb 13, 2025):

> Server logs as text attachment.

logs.txt: https://github.com/user-attachments/files/18782715/logs.txt


@rick-github commented on GitHub (Feb 13, 2025):

time=2025-02-13T11:42:52.992+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=9 layers.split="" memory.available="[4.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="13.6 GiB" memory.required.partial="4.3 GiB" memory.required.kv="3.0 GiB" memory.required.allocations="[4.3 GiB]" memory.weights.total="10.4 GiB" memory.weights.repeating="9.8 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="1.3 GiB" memory.graph.partial="1.6 GiB"

You haven't overridden num_gpu (layers.requested=-1). Note that when you do override it, the output of ollama ps will not take that into account; it will show the original CPU/GPU split that was calculated.
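
For reference, num_gpu can be overridden per request through the API; a sketch (30 is just an example layer count):

curl http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen2.5-coder:14b",
  "prompt": "hello",
  "options": {"num_gpu": 30}
}'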


@sapphirepro commented on GitHub (Feb 13, 2025):

> time=2025-02-13T11:42:52.992+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=9 layers.split="" memory.available="[4.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="13.6 GiB" memory.required.partial="4.3 GiB" memory.required.kv="3.0 GiB" memory.required.allocations="[4.3 GiB]" memory.weights.total="10.4 GiB" memory.weights.repeating="9.8 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="1.3 GiB" memory.graph.partial="1.6 GiB"
>
> You haven't overridden num_gpu. Note that when you do override it, the output of ollama ps will not take that into account; it will show the original CPU/GPU split that was calculated.

What value exactly should I put there? I'd appreciate more precise step-by-step instructions, since I'm new to this. Thanks in advance.


@rick-github commented on GitHub (Feb 13, 2025):

As mentioned in https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900, offloading too many layers may result in a performance penalty. So you need to choose a value between what ollama thinks it can fit (layers.offload=9) and the entire size of the model (layers.model=49). You can experiment to see what works best.

$ ollama run qwen2.5-coder:14b --verbose
>>> hello
....
eval rate:            ?.?? tokens/s
>>> /set parameter num_gpu 10
Set parameter 'num_gpu' to '10'
>>> hello
...
eval rate:            ?.?? tokens/s
>>> /set parameter num_gpu 20
Set parameter 'num_gpu' to '20'
>>> hello
...
eval rate:            ?.?? tokens/s
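
Once a value that works well is found, it can be persisted with a Modelfile so every load uses it; a sketch (num_gpu 20 and the qwen2.5-coder-gpu name are placeholders):

# Modelfile
FROM qwen2.5-coder:14b
PARAMETER num_gpu 20

# then:
ollama create qwen2.5-coder-gpu -f Modelfile
ollama run qwen2.5-coder-gpu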
Reference: github-starred/ollama#52412