[GH-ISSUE #8825] CUDA malloc error #5722

Closed
opened 2026-04-12 17:00:45 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @ppereirasky on GitHub (Feb 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8825

CUDA malloc error, even though there is more than enough VRAM.

Hi There,

I'm trying to run the deepseek-r1:32b model on my Windows 10 Pro machine with the latest version of Ollama, an i7-7700K, 8 GB RAM, and 2x RTX 4060 Ti 16 GB - more than enough VRAM for this model, which needs around 24.9 GB total. However, when I try to run it, it fails with a CUDA malloc error:

[GIN] 2025/02/04 - 16:56:25 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/04 - 16:56:25 | 200 | 19.7884ms | 127.0.0.1 | POST "/api/show"
time=2025-02-04T16:56:25.304Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=C:\Users\AI\.ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 library=cuda parallel=4 required="23.4 GiB"
time=2025-02-04T16:56:25.331Z level=INFO source=server.go:104 msg="system memory" total="7.9 GiB" free="4.3 GiB" free_swap="10.7 GiB"
time=2025-02-04T16:56:25.332Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=65 layers.offload=65 layers.split=33,32 memory.available="[14.9 GiB 14.9 GiB]" memory.gpu_overhead="1.0 GiB" memory.required.full="23.4 GiB" memory.required.partial="23.4 GiB" memory.required.kv="2.0 GiB" memory.required.allocations="[12.0 GiB 11.4 GiB]" memory.weights.total="19.5 GiB" memory.weights.repeating="18.9 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="916.1 MiB" memory.graph.partial="916.1 MiB"
time=2025-02-04T16:56:25.340Z level=INFO source=server.go:376 msg="starting llama server" cmd="C:\Users\AI\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v12_avx\ollama_llama_server.exe runner --model C:\Users\AI\.ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 --ctx-size 8192 --batch-size 512 --n-gpu-layers 65 --threads 4 --no-mmap --parallel 4 --tensor-split 33,32 --port 49983"
time=2025-02-04T16:56:25.386Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-04T16:56:25.386Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-04T16:56:25.387Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-04T16:56:25.470Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
Device 1: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
time=2025-02-04T16:56:25.635Z level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=4
time=2025-02-04T16:56:25.637Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:49983"
time=2025-02-04T16:56:25.641Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4060 Ti) - 15225 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 4060 Ti) - 15225 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 771 tensors from C:\Users\AI\.ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 32B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 32B
llama_model_loader: - kv 5: qwen2.block_count u32 = 64
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 27648
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = deepseek-r1-qwen
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type q4_K: 385 tensors
llama_model_loader: - type q6_K: 65 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_layer = 64
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 5
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 27648
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 32B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 32.76 B
llm_load_print_meta: model size = 18.48 GiB (4.85 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 32B
llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 9297.10 MiB on device 1: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate CUDA1 buffer
llama_load_model_from_file: failed to load model
panic: unable to load model: C:\Users\AI\.ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93

goroutine 7 [running]:
github.com/ollama/ollama/llama/runner.(*Server).loadModel(0xc00011a1b0, {0x41, 0x0, 0x0, 0x0, {0xc00000a500, 0x2, 0x2}, 0xc0000221f0, 0x0}, ...)
github.com/ollama/ollama/llama/runner/runner.go:852 +0x3ad
created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:970 +0xd0d
time=2025-02-04T16:56:26.392Z level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA1 buffer"
[GIN] 2025/02/04 - 16:56:26 | 500 | 1.1850698s | 127.0.0.1 | POST "/api/generate"
time=2025-02-04T16:56:31.423Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.0303434 model=C:\Users\AI\.ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93
time=2025-02-04T16:56:31.673Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.2803346 model=C:\Users\AI\.ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93
time=2025-02-04T16:56:31.923Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5302892 model=C:\Users\AI\.ollama\models\blobs\sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93

Nothing besides Ollama is using the GPUs - what am I missing? Or is this a bug?

Best Regards,

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 17:00:45 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 4, 2025):

ollama's estimates are sometimes off.

https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288
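
If the estimate is what's off, one common mitigation is to offload fewer layers than the scheduler's guess (65 here) so the per-GPU allocations shrink. Below is a minimal sketch against the Ollama REST API on its default port; the num_gpu value of 58 is illustrative, not a tested figure:

# Sketch: request generation with fewer offloaded layers than Ollama's
# estimate, so each GPU's buffer allocation is smaller. num_gpu=58 is an
# illustrative value, not a tested one.
import json
import urllib.request

payload = {
    "model": "deepseek-r1:32b",
    "prompt": "Hello",
    "stream": False,
    "options": {"num_gpu": 58},  # fewer than the estimated 65 layers
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])

The same option can be set persistently with PARAMETER num_gpu in a Modelfile.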

Author
Owner

@ppereirasky commented on GitHub (Feb 4, 2025):

I think I found out what the problem was.

My pagefile was managed by Windows and was too small - only around 2 GB. I set it manually to 24 GB and voilà, the model loaded with no issues (16 GB did not work). My 8 GB of system RAM + 2 GB of pagefile was not enough to split the model and load it onto the 2 GPUs.

I think the malloc uses system RAM (plus pagefile, if not enough system RAM is available) to load things into GPU VRAM. Maybe this could be done differently to prevent this kind of issue, since in this case the memory that is ultimately populated is GPU VRAM. It should not be necessary to use so much system RAM + pagefile just for that.
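
A rough back-of-the-envelope check of that theory (a sketch in Python; the 4 GB baseline commit figure is an assumption): on Windows/WDDM, cudaMalloc allocations count against the system commit limit (RAM + pagefile), so GPU buffers can fail to allocate when the commit limit is exhausted even while VRAM itself is free.

# Rough commit-charge arithmetic (a sketch, not Ollama internals).
ram_gb = 8            # system RAM in this report
required_gb = 23.4    # "required" VRAM figure from the scheduler log
baseline_gb = 4       # assumed commit already used by Windows itself

for pagefile_gb in (2, 16, 24):
    headroom = ram_gb + pagefile_gb - baseline_gb
    verdict = "loads" if headroom >= required_gb else "cudaMalloc fails"
    print(f"pagefile {pagefile_gb:>2} GB -> ~{headroom} GB headroom: {verdict}")

With those numbers, a 2 GB or 16 GB pagefile leaves less headroom than the 23.4 GiB the scheduler reported as required, while 24 GB clears it - consistent with what was observed here.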

After the model is loaded and working, my total system RAM usage is only 4.4 GB (for the whole Windows 10 machine, not just Ollama), and ollama ps returns:

NAME               ID              SIZE     PROCESSOR    UNTIL
deepseek-r1:32b    38056bbcbb2d    25 GB    100% GPU     Forever

Reference: github-starred/ollama#5722