[GH-ISSUE #6558] Multiple GPUs Nvidia 56GB VRAM gemma2:27b #4127

Closed
opened 2026-04-12 15:01:28 -05:00 by GiteaMirror · 15 comments

Originally created by @paulopais on GitHub (Aug 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6558

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi,
Error: cudaMalloc failed: out of memory

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.3.8

GiteaMirror added the bug label 2026-04-12 15:01:28 -05:00

@rick-github commented on GitHub (Aug 29, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.


@paulopais commented on GitHub (Aug 29, 2024):

There: latest Nvidia drivers.
[server.log](https://github.com/user-attachments/files/16802113/server.log)


@rick-github commented on GitHub (Aug 29, 2024):

gpu.go didn't report one of the GPUs but it showed up later, which seems odd. Please [add](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-windows) `OLLAMA_DEBUG=1` to the server environment, try reloading, and then post the log.
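
For reference, one way to set this from a Command Prompt is sketched below; the FAQ linked above also describes doing it through the Windows environment-variable settings, and Ollama has to be quit and restarted for the change to take effect:

setx OLLAMA_DEBUG 1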


@paulopais commented on GitHub (Aug 29, 2024):

Debug enabled - [server.log](https://github.com/user-attachments/files/16802963/server.log)
Some GPUs disabled - [server_GPUdisable.log](https://github.com/user-attachments/files/16802964/server_GPUdisable.log)


@rick-github commented on GitHub (Aug 29, 2024):

time=2024-08-29T18:17:32.866+01:00 level=INFO source=gpu.go:292 msg="detected OS VRAM overhead" id=GPU-4b2ad278-bccb-6196-270c-82a4fdd95065 library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" overhead="867.3 MiB"
time=2024-08-29T18:17:33.036+01:00 level=INFO source=gpu.go:292 msg="detected OS VRAM overhead" id=GPU-ab3f1087-a352-d62c-5af9-f6037c3b3117 library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" overhead="811.9 MiB"
time=2024-08-29T18:17:33.178+01:00 level=INFO source=gpu.go:292 msg="detected OS VRAM overhead" id=GPU-68b31f94-a055-1bd7-76ba-ef3538fec9c3 library=cuda compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" overhead="602.9 MiB"
time=2024-08-29T18:17:33.572+01:00 level=INFO source=gpu.go:292 msg="detected OS VRAM overhead" id=GPU-0eb3a589-a697-56c2-6d04-0e35193a44ff library=cuda compute=7.5 driver=12.6 name="NVIDIA GeForce RTX 2070 SUPER" overhead="4.9 GiB"
time=2024-08-29T18:17:33.574+01:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-4b2ad278-bccb-6196-270c-82a4fdd95065 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
time=2024-08-29T18:17:33.574+01:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-ab3f1087-a352-d62c-5af9-f6037c3b3117 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
time=2024-08-29T18:17:33.574+01:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-68b31f94-a055-1bd7-76ba-ef3538fec9c3 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
time=2024-08-29T18:17:33.574+01:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-90919ffc-17c6-1bb8-338c-20346ee76233 library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
time=2024-08-29T18:17:33.574+01:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-0eb3a589-a697-56c2-6d04-0e35193a44ff library=cuda variant=v12 compute=7.5 driver=12.6 name="NVIDIA GeForce RTX 2070 SUPER" total="8.0 GiB" available="7.0 GiB"

time=2024-08-29T18:17:34.479+01:00 level=DEBUG source=server.go:101 msg="system memory" total="3.9 GiB" free="2.0 GiB" free_swap="8.0 GiB"
time=2024-08-29T18:17:34.479+01:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=5 available="[11.0 GiB 11.0 GiB 11.0 GiB 7.8 GiB 7.0 GiB]"
time=2024-08-29T18:17:34.480+01:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=47 layers.offload=47 layers.split=10,10,9,9,9 memory.available="[11.0 GiB 11.0 GiB 11.0 GiB 7.8 GiB 7.0 GiB]" memory.required.full="28.4 GiB" memory.required.partial="28.4 GiB" memory.required.kv="2.9 GiB" memory.required.allocations="[5.8 GiB 6.3 GiB 5.4 GiB 5.4 GiB 5.4 GiB]" memory.weights.total="16.5 GiB" memory.weights.repeating="15.6 GiB" memory.weights.nonrepeating="922.9 MiB" memory.graph.full="1.4 GiB" memory.graph.partial="1.4 GiB"

time=2024-08-29T18:17:34.490+01:00 level=INFO source=server.go:391 msg="starting llama server" cmd="C:\\Users\\PP\\AppData\\Local\\Programs\\Ollama\\lib\\ollama\\runners\\cuda_v12\\ollama_llama_server.exe --model C:\\Users\\PP\\.ollama\\models\\blobs\\sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 47 --verbose --no-mmap --parallel 4 --tensor-split 10,10,9,9,9 --port 50268"

ggml_cuda_init: found 5 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
  Device 3: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
  Device 4: NVIDIA GeForce RTX 2070 SUPER, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size =    1.36 MiB
ggml_cuda_host_malloc: failed to allocate 922.85 MiB of pinned memory: out of memory
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2734.38 MiB on device 3: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate backend buffer

Somebody with more familiarity with Windows will have better insight. The `ggml_cuda_host_malloc` error would seem to indicate that your system RAM is tight, but the logs show 2G free. The runner is using `--no-mmap`, which may be a factor: what happens if you run `ollama run gemma2:27b` and then, when you get a `>>>` prompt, enter `/set parameter use_mmap true` and try asking a question?
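
Spelled out, that sequence looks roughly like this (model output omitted):

ollama run gemma2:27b
>>> /set parameter use_mmap true
>>> hello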

The `ggml_backend_cuda_buffer_type_alloc_buffer` error indicates OOM on the GPU, which again from the logs looks like it should have plenty. It may be a cascading failure from the earlier `ggml_cuda_host_malloc` error, though.

If it's memory pressure, you can try loading fewer layers on the GPU (set `num_gpu` in an API call or through `/set parameter`) or reducing the KV cache size by [setting](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-windows) `OLLAMA_NUM_PARALLEL=1` in the server environment.
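
As with OLLAMA_DEBUG earlier, a sketch of setting that server variable from a Command Prompt (restart Ollama afterwards so the new value is picked up):

setx OLLAMA_NUM_PARALLEL 1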


@paulopais commented on GitHub (Aug 29, 2024):

set parameter use_mmap true -> the error occurs at start; it never reaches the prompt

cascading failure -> for me this is the problem

OLLAMA_NUM_PARALLEL=1 -> same error


@rick-github commented on GitHub (Aug 29, 2024):

> set parameter use_mmap true -> the error occurs at start; it never reaches the prompt

OK, I thought you would be able to skip past the initial failure and then set the parameter. Since that doesn't work, try calling the API directly:

curl http://localhost:11434/api/generate -d "{\"model\":\"gemma2:27b\",\"options\":{\"use_mmap\":true},\"prompt\":\"hello\",\"stream\":false}"

@paulopais commented on GitHub (Aug 29, 2024):

Same error.


@rick-github commented on GitHub (Aug 29, 2024):

What happens if you reduce the number of layers offloaded to the GPU:

curl http://localhost:11434/api/generate -d "{\"model\":\"gemma2:27b\",\"options\":{\"num_gpu\":40},\"prompt\":\"hello\",\"stream\":false}"

@paulopais commented on GitHub (Aug 29, 2024):

{"error":"llama runner process has terminated: CUDA error"}


@rick-github commented on GitHub (Aug 29, 2024):

Can you post the logs from the `num_gpu` failure?


@paulopais commented on GitHub (Aug 29, 2024):

[server.log](https://github.com/user-attachments/files/16804401/server.log)
This error only occurs if I call over the network and not on localhost:
curl http://192.168.5.12:11434/api/generate -d "{\"model\":\"gemma2:27b\",\"options\":{\"use_mmap\":true},\"prompt\":\"hello\",\"stream\":false}"


@rick-github commented on GitHub (Aug 29, 2024):

The log is from the `use_mmap` attempt; do you have the logs from the `num_gpu` attempt?


@paulopais commented on GitHub (Aug 29, 2024):

[server.log](https://github.com/user-attachments/files/16805577/server.log)


@paulopais commented on GitHub (Aug 29, 2024):

Problem solved. You need to have virtual memory larger than the model.
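
In other words, the Windows pagefile has to be big enough to back the model: the logs above show only 3.9 GiB of system RAM while roughly 16.5 GiB of weights are staged with --no-mmap, so the extra commit has to come from virtual memory. A quick way to check the current limits from a Command Prompt (a sketch; the exact output format varies by Windows version and locale):

systeminfo | findstr /C:"Virtual Memory"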

Reference: github-starred/ollama#4127