[GH-ISSUE #8964] CUDA memory allocation failed in multiple GPU environment #67873

Closed
opened 2026-05-04 11:56:57 -05:00 by GiteaMirror · 1 comment

Originally created by @PC-DOS on GitHub (Feb 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8964

What is the issue?

Problem

When using the ollama run deepseek-r1:671b command to launch DeepSeek-R1 on my server, Ollama reports:

Error: llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer

In the Ollama server console, the error was:

ggml_cuda_host_malloc: failed to allocate 335445.83 MiB of pinned memory: out of memory
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 28426.68 MiB on device 0: cudaMalloc failed: out of memory
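
One way to check the free VRAM on each device at the moment of failure is nvidia-smi, which ships with the NVIDIA driver. A minimal query, using standard nvidia-smi fields (an editorial sketch, not something run in the original report):

// Hypothetical diagnostic check
C:\Users\Administrator>nvidia-smi --query-gpu=index,name,memory.total,memory.free --format=csv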

Loading and running smaller models (e.g. deepseek-r1:70b) is successful.

Then I tried limiting CUDA_VISIBLE_DEVICES to 1, restarting the Ollama server, and running the same model. The model launched successfully with 5/62 layers offloaded to the single visible GPU.
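
For reference, the command sequence for this workaround (also visible in the log output below) was:

// Only 1 GPU visible
C:\Users\Administrator>set CUDA_VISIBLE_DEVICES=1
C:\Users\Administrator>ollama serve
// then, from a second console (hypothetical invocation of the same model):
C:\Users\Administrator>ollama run deepseek-r1:671b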

Expected Behavior

I tried loading the same model in LM Studio (by copying the model's GGUF file and renaming it deepseek-r1-671b.gguf), and it loaded across all of my GPUs successfully. All of the GPUs and the CPU can participate in inference.
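
A sketch of the copy step (the blob filename is taken from the server log below; the destination path is hypothetical):

C:\Users\Administrator>copy E:\Ollama\Models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 D:\LMStudio\Models\deepseek-r1-671b.gguf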

Ollama Version

0.5.7

OS Version

Windows Server 2022 Datacenter, Build 20348.2700

CPU

2x AMD EPYC 9654

GPU

2x NVIDIA GeForce RTX 3090 24GB (NVLink not connected)

Driver version 561.09

CUDA Toolkit version 12.6

RAM

16x 32GB DDR5 RECC 4800MHz
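
(For reference, 16 × 32 GB = 512 GB, which matches the total="511.6 GiB" system memory reported in the log output below.)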

Relevant log output

// All GPUs visible
C:\Users\Administrator>set CUDA_VISIBLE_DEVICES=0,1

C:\Users\Administrator>ollama serve
2025/02/09 14:18:58 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12450 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\Ollama\\Models OLLAMA_MULTIUSER_CACHE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES:]"
time=2025-02-09T14:18:58.932+08:00 level=INFO source=images.go:432 msg="total blobs: 9"
time=2025-02-09T14:18:58.941+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-09T14:18:58.941+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:12450 (version 0.5.7)"
time=2025-02-09T14:18:58.942+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu cpu_avx]"
time=2025-02-09T14:18:58.944+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-09T14:18:58.945+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-09T14:18:58.945+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=192 efficiency=0 threads=384
time=2025-02-09T14:18:58.945+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=192 efficiency=0 threads=384
time=2025-02-09T14:18:59.145+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="571.9 MiB"
time=2025-02-09T14:18:59.299+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="956.2 MiB"
time=2025-02-09T14:18:59.301+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b59ca5b6-076f-f044-a6bf-d76630812a4a library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
time=2025-02-09T14:18:59.301+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
[GIN] 2025/02/09 - 14:19:08 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/09 - 14:19:08 | 200 |     14.6405ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-09T14:19:08.280+08:00 level=INFO source=server.go:104 msg="system memory" total="511.6 GiB" free="499.3 GiB" free_swap="511.6 GiB"
time=2025-02-09T14:19:08.301+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=62 layers.offload=7 layers.split=4,3 memory.available="[22.8 GiB 22.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="428.7 GiB" memory.required.partial="36.3 GiB" memory.required.kv="19.1 GiB" memory.required.allocations="[18.5 GiB 17.8 GiB]" memory.weights.total="394.5 GiB" memory.weights.repeating="393.8 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="1.6 GiB" memory.graph.partial="1.6 GiB"
time=2025-02-09T14:19:08.302+08:00 level=WARN source=server.go:216 msg="flash attention enabled but not supported by model"
time=2025-02-09T14:19:08.314+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\runners\\cuda_v12_avx\\ollama_llama_server.exe runner --model E:\\Ollama\\Models\\blobs\\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 4096 --batch-size 512 --n-gpu-layers 7 --threads 384 --no-mmap --parallel 2 --tensor-split 4,3 --multiuser-cache --port 58163"
time=2025-02-09T14:19:08.365+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-09T14:19:08.365+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-09T14:19:08.367+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-09T14:19:08.483+08:00 level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-09T14:19:08.673+08:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=384
time=2025-02-09T14:19:08.673+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:58163"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23306 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23306 MiB free
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from E:\Ollama\Models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  606 tensors
llama_model_loader: - type q6_K:   58 tensors
time=2025-02-09T14:19:08.871+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 818
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 129280
llm_load_print_meta: n_merges         = 127741
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_layer          = 61
llm_load_print_meta: n_head           = 128
llm_load_print_meta: n_head_kv        = 128
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 24576
llm_load_print_meta: n_embd_v_gqa     = 16384
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18432
llm_load_print_meta: n_expert         = 256
llm_load_print_meta: n_expert_used    = 8
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 671B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 671.03 B
llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW)
llm_load_print_meta: general.name     = n/a
llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 131 'Ä'
llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 3
llm_load_print_meta: n_lora_q             = 1536
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 2048
llm_load_print_meta: n_expert_shared      = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm  = 1
llm_load_print_meta: expert_gating_func   = sigmoid
llm_load_print_meta: rope_yarn_log_mul    = 0.1000
ggml_cuda_host_malloc: failed to allocate 335445.83 MiB of pinned memory: out of memory
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 28426.68 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate CUDA0 buffer
llama_load_model_from_file: failed to load model
panic: unable to load model: E:\Ollama\Models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9

goroutine 7 [running]:
github.com/ollama/ollama/llama/runner.(*Server).loadModel(0xc00019a1b0, {0x7, 0x0, 0x0, 0x0, {0xc00000b3b0, 0x2, 0x2}, 0xc000022210, 0x0}, ...)
        github.com/ollama/ollama/llama/runner/runner.go:852 +0x3ad
created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
        github.com/ollama/ollama/llama/runner/runner.go:970 +0xd0d
time=2025-02-09T14:19:11.375+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer"
[GIN] 2025/02/09 - 14:19:11 | 500 |    3.1833398s |       127.0.0.1 | POST     "/api/generate"

// Only 1 GPU visible
C:\Users\Administrator>set CUDA_VISIBLE_DEVICES=1

C:\Users\Administrator>ollama serve
2025/02/09 14:19:48 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES:1 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:12450 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\Ollama\\Models OLLAMA_MULTIUSER_CACHE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES:]"
time=2025-02-09T14:19:48.150+08:00 level=INFO source=images.go:432 msg="total blobs: 9"
time=2025-02-09T14:19:48.151+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-09T14:19:48.154+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:12450 (version 0.5.7)"
time=2025-02-09T14:19:48.154+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
time=2025-02-09T14:19:48.154+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-09T14:19:48.155+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-02-09T14:19:48.155+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=192 efficiency=0 threads=384
time=2025-02-09T14:19:48.155+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=192 efficiency=0 threads=384
time=2025-02-09T14:19:48.366+08:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" overhead="956.2 MiB"
time=2025-02-09T14:19:48.367+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-86bb12be-ee76-eff3-db9a-35f024dc6c8e library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3090" total="24.0 GiB" available="22.8 GiB"
[GIN] 2025/02/09 - 14:19:51 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/09 - 14:19:51 | 200 |     14.0106ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-09T14:19:51.568+08:00 level=INFO source=server.go:104 msg="system memory" total="511.6 GiB" free="499.3 GiB" free_swap="511.6 GiB"
time=2025-02-09T14:19:51.584+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=62 layers.offload=5 layers.split="" memory.available="[22.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="426.0 GiB" memory.required.partial="19.1 GiB" memory.required.kv="19.1 GiB" memory.required.allocations="[19.1 GiB]" memory.weights.total="394.5 GiB" memory.weights.repeating="393.8 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.6 GiB"
time=2025-02-09T14:19:51.585+08:00 level=WARN source=server.go:216 msg="flash attention enabled but not supported by model"
time=2025-02-09T14:19:51.595+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\runners\\cuda_v12_avx\\ollama_llama_server.exe runner --model E:\\Ollama\\Models\\blobs\\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 4096 --batch-size 512 --n-gpu-layers 5 --threads 384 --no-mmap --parallel 2 --multiuser-cache --port 58185"
time=2025-02-09T14:19:51.628+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-09T14:19:51.628+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-09T14:19:51.630+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-09T14:19:51.737+08:00 level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-09T14:19:51.809+08:00 level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=384
time=2025-02-09T14:19:51.809+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:58185"
time=2025-02-09T14:19:51.884+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23306 MiB free
llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from E:\Ollama\Models\blobs\sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q4_K:  606 tensors
llama_model_loader: - type q6_K:   58 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 818
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = deepseek2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 129280
llm_load_print_meta: n_merges         = 127741
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 163840
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_layer          = 61
llm_load_print_meta: n_head           = 128
llm_load_print_meta: n_head_kv        = 128
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 192
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 24576
llm_load_print_meta: n_embd_v_gqa     = 16384
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18432
llm_load_print_meta: n_expert         = 256
llm_load_print_meta: n_expert_used    = 8
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = yarn
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 671B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 671.03 B
llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW)
llm_load_print_meta: general.name     = n/a
llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 131 'Ä'
llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead   = 3
llm_load_print_meta: n_lora_q             = 1536
llm_load_print_meta: n_lora_kv            = 512
llm_load_print_meta: n_ff_exp             = 2048
llm_load_print_meta: n_expert_shared      = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm  = 1
llm_load_print_meta: expert_gating_func   = sigmoid
llm_load_print_meta: rope_yarn_log_mul    = 0.1000
ggml_cuda_host_malloc: failed to allocate 349659.17 MiB of pinned memory: out of memory
llm_load_tensors: offloading 5 repeating layers to GPU
llm_load_tensors: offloaded 5/62 layers to GPU
llm_load_tensors:          CPU model buffer size =   497.11 MiB
llm_load_tensors:          CPU model buffer size = 349659.17 MiB
llm_load_tensors:        CUDA0 model buffer size = 35533.34 MiB
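
Reading the two runs side by side: with both GPUs visible, the scheduler estimated 18.5 GiB for device 0, but the runner then attempted a 28426.68 MiB (~27.8 GiB) allocation there, more than the 23306 MiB reported free. With one GPU visible, the pinned-memory allocation fails as well, but loading proceeds anyway, apparently falling back to an ordinary (unpinned) CPU buffer for the 349659.17 MiB of weights.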

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.5.7

GiteaMirror added the bug label 2026-05-04 11:56:57 -05:00

@rick-github commented on GitHub (Feb 9, 2025):

https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288


Reference: github-starred/ollama#67873