[GH-ISSUE #5843] How to offload all layers to GPU? #65681

Closed
opened 2026-05-03 22:13:11 -05:00 by GiteaMirror · 19 comments

Originally created by @RakshitAralimatti on GitHub (Jul 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5843

Currently, when I am running gemma2 (using ollama serve) on my device, only 27 layers are offloaded to the GPU by default, but I want to offload all 43 layers to the GPU.
Does anyone know how I can do that?

GiteaMirror added the question label 2026-05-03 22:13:11 -05:00

@rick-github commented on GitHub (Jul 22, 2024):

ollama offloads as many layers as it thinks will fit in GPU VRAM. If you think ollama is incorrect, provide server logs and the output of nvidia-smi.
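
If you want to override the estimate anyway, the num_gpu request option sets the number of layers to offload. A minimal sketch, assuming the gemma2 model from the question (if the layers don't actually fit, expect an out-of-memory error):

$ curl -s localhost:11434/api/generate -d '{"model":"gemma2", "prompt":"hi", "options":{"num_gpu":43}, "stream":false}'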

@RakshitAralimatti commented on GitHub (Jul 22, 2024):

@rick-github I think Ollama is correct!!
Is there any way to check the tokens-per-second speed of models, to compare the performance of different LLMs?

@rick-github commented on GitHub (Jul 22, 2024):

The JSON structure returned by ollama has two fields that can be used to calculate tokens per second.

$ curl -s localhost:11434/api/generate -d '{"model":"gemma2:9b-instruct-q4_0", "prompt":"count from 1 to 50", "stream":false}' | jq 'del(.context)'
{
  "model": "gemma2:9b-instruct-q4_0",
  "created_at": "2024-07-22T10:06:43.685128983Z",
  "response": "1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50  \n",
  "done": true,
  "done_reason": "stop",
  "total_duration": 3305842756,
  "load_duration": 23975557,
  "prompt_eval_count": 18,
  "prompt_eval_duration": 25726000,
  "eval_count": 192,
  "eval_duration": 3209729000
}

eval_count is the number of tokens generated, and eval_duration is the time in nanoseconds that it took to generate the tokens. So tps can be derived:

$ curl -s localhost:11434/api/generate -d '{"model":"gemma2:9b-instruct-q4_0", "prompt":"count from 1 to 50", "stream":false}' | jq '{"tps":(.eval_count/(.eval_duration/1000000000))}'
{
  "tps": 59.817761451129044
}
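
The CLI can also report this directly; a minimal sketch (the --verbose flag prints timing statistics, including an eval rate in tokens per second, after each response):

$ ollama run gemma2:9b-instruct-q4_0 --verbose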

@RakshitAralimatti commented on GitHub (Jul 22, 2024):

@rick-github Thanks !!

@AeneasZhu commented on GitHub (Jul 22, 2024):

I encountered the same problem recently: https://github.com/ollama/ollama/issues/5821
After I reinstalled, the problem seemed to be fixed for a while, but when I reloaded gemma2 today, ollama didn't offload all layers again. How can I customize the number of layers offloaded, if ollama doesn't offload enough of them?

@rick-github commented on GitHub (Jul 22, 2024):

ollama offloads as many layers as it thinks will fit in GPU VRAM. If you think ollama is incorrect, provide server logs and the output of nvidia-smi.

@slapglif commented on GitHub (Jul 22, 2024):

Effectively, when you see the layer count lower than your available VRAM should allow, some other application is using some % of your GPU. I've had a lot of ghost apps using mine in the past, holding back just enough RAM to keep all the layers from fitting, which pushed some inference onto the CPU... gah. My suggestion is nvidia-smi -> catch all the PIDs -> kill them all -> retry.
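
A minimal sketch of that sequence, assuming Linux and that every listed compute process is safe to kill (review the PID list before piping it to kill):

$ nvidia-smi --query-compute-apps=pid --format=csv,noheader | xargs -r kill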

@AeneasZhu commented on GitHub (Jul 22, 2024):

@rick-github

Mon Jul 22 22:04:18 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.68                 Driver Version: 531.68       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3070 T...  WDDM | 00000000:01:00.0 Off |                  N/A |
| N/A   51C    P3               34W /  N/A|   6734MiB /  8192MiB |     39%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      6420    C+G   ...Brave-Browser\Application\brave.exe    N/A      |
|    0   N/A  N/A     12864      C   ...\cuda_v11.3\ollama_llama_server.exe    N/A      |
+---------------------------------------------------------------------------------------+

@rick-github commented on GitHub (Jul 22, 2024):

And server logs?

@RakshitAralimatti commented on GitHub (Jul 23, 2024):

@rick-github
Is there any way to get the maximum possible tokens per second of a model?
The tps differs across tasks, and I want to know the maximum possible.
For example, for gemma2 I sometimes get 30, sometimes 36, etc...

@rick-github commented on GitHub (Jul 23, 2024):

Token generation speed depends on the model, prompt, configuration, and hardware. There's no single "maximum tokens per second" for a model; it depends on resources and workload.
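
One way to approximate a best-case figure is to repeat the same measurement and keep the fastest run; a minimal sketch building on the earlier curl/jq pipeline:

$ for i in $(seq 5); do curl -s localhost:11434/api/generate -d '{"model":"gemma2:9b-instruct-q4_0", "prompt":"count from 1 to 50", "stream":false}' | jq '.eval_count/(.eval_duration/1000000000)'; done | sort -n | tail -1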

@dhiltgen commented on GitHub (Jul 24, 2024):

@RakshitAralimatti it looks like your questions have been answered. If you still have any further questions, let us know, and also check out the docs here: https://github.com/ollama/ollama/tree/main/docs

@RangerMauve commented on GitHub (Sep 18, 2024):

Is there a similar command for debugging this with ROCm? I'm seeing only 33% GPU VRAM usage, and only 5/27 layers of gemma2 being offloaded in the logs.
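
For AMD GPUs, rocm-smi is the rough counterpart to nvidia-smi; a minimal sketch, assuming the ROCm tools are installed (this shows VRAM usage per device):

$ rocm-smi --showmeminfo vram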

@rick-github commented on GitHub (Sep 19, 2024):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

@ramayer commented on GitHub (Nov 17, 2024):

Errors are common here too, especially when using models that support larger context lengths.

root@989c54d844b8:/# ollama show llama3.2:latest
  Model
    architecture        llama     
    parameters          3.2B      
    context length      131072    
    embedding length    3072      
    quantization        Q4_K_M    
... 

And calling it with

resp = ollama.chat(model='llama3.2:latest', options= {"num_ctx": 100000}, messages=[{'role': 'user', 'content': msgtxt}])

Server logs look like

  Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size =    0.24 MiB
llm_load_tensors: offloading 11 repeating layers to GPU
llm_load_tensors: offloaded 11/29 layers to GPU
llm_load_tensors:        CPU buffer size =  1918.35 MiB
llm_load_tensors:      CUDA0 buffer size =   642.21 MiB
llama_new_context_with_model: n_ctx      = 40096
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
time=2024-11-17T19:38:21.425Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
time=2024-11-17T19:38:22.290Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:  CUDA_Host KV buffer size =  2662.62 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =  1722.88 MiB
llama_new_context_with_model: KV self size  = 4385.50 MiB, K (f16): 2192.75 MiB, V (f16): 2192.75 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.50 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  2204.75 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    88.32 MiB
llama_new_context_with_model: graph nodes  = 902
llama_new_context_with_model: graph splits = 225
INFO [main] model loaded | tid="139891060371456" timestamp=1731872303
time=2024-11-17T19:38:23.243Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
time=2024-11-17T19:38:23.494Z level=INFO source=server.go:626 msg="llama runner started in 3.77 seconds"
CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:376
  cuMemCreate(&handle, reserve_size, &prop, 0)
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error
[GIN] 2024/11/17 - 19:39:35 | 500 |         1m16s |      172.17.0.1 | POST     "/api/chat"

@rick-github commented on GitHub (Nov 17, 2024):

You left out useful bits of the logs. Long story short: reduce the memory footprint by reducing num_gpu.
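
A minimal sketch of that advice against the request above (the same option works in the Python client's options dict; the value 8 is an arbitrary starting point, not a recommendation; lower it until the OOM stops):

$ curl -s localhost:11434/api/chat -d '{"model":"llama3.2:latest", "options":{"num_ctx":100000,"num_gpu":8}, "messages":[{"role":"user","content":"hello"}], "stream":false}'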

@ramayer commented on GitHub (Nov 17, 2024):

Thx!!!

Another workaround that seems to be working for me is setting the GGML_CUDA_ENABLE_UNIFIED_MEMORY environment variable like this:

docker run -d -e GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 --restart always --gpus=all -v $(pwd)/volumes/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Seems it may be picking something close to the edge of what my GPU can handle?

@rick-github commented on GitHub (Nov 17, 2024):

Yes, GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 adds a buffer that prevents OOMing. However, it can adversely affect performance if more than a few layers use unified memory: https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900

@cyberluke commented on GitHub (Feb 18, 2025):

Ollama offloads only half the layers to GPU and half to CPU on 4x L4 (4x 24GB).

export OLLAMA_MODELS="$HOME/.ollama/models"
export CMAKE_BIN="$HOME/cmake-3.31.5-linux-x86_64/bin"
export CMAKE_ROOT="$HOME/cmake-3.31.5-linux-x86_64/share/cmake-3.31"

export WEBUI_NAME="NANOTRIK.AI ALPHA"
export OLLAMA_ACCELERATE=1
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_NOPRUNE=1
export OLLAMA_MAX_LOADED_MODELS=5
export OLLAMA_LOAD_TIMEOUT=0
export OLLAMA_NOHISTORY=0
export OLLAMA_KEEP_ALIVE=-1
#export OLLAMA_DISABLE_CPU=1
export OLLAMA_DEBUG=0
export OLLAMA_ORIGINS="*"
export OLLAMA_HOST="http://0.0.0.0:11434"

# Full GPU allocation

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 # All 8 GPUs
export OLLAMA_NUM_GPU_LAYERS=9999 # Force full offload
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_BATCH_SIZE=8192
export OLLAMA_GPUMEMORY=24000MB # 22GB per GPU (adjust for your VRAM)
export OLLAMA_GPUSPLIT="24,24,24,24,24,24,24,24"

~ nvidia-smi
Tue Feb 18 02:22:57 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.230.02             Driver Version: 535.230.02   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA L4                      Off | 00000000:38:00.0 Off |                    0 |
| N/A   53C    P0              50W /  72W |  13853MiB / 23034MiB |     96%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA L4                      Off | 00000000:3A:00.0 Off |                    0 |
| N/A   43C    P0              27W /  72W |  12727MiB / 23034MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA L4                      Off | 00000000:3C:00.0 Off |                    0 |
| N/A   41C    P0              26W /  72W |  12727MiB / 23034MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA L4                      Off | 00000000:3E:00.0 Off |                    0 |
| N/A   42C    P0              27W /  72W |  12727MiB / 23034MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
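
As far as I know, OLLAMA_NUM_GPU_LAYERS, OLLAMA_BATCH_SIZE, OLLAMA_GPUMEMORY, and OLLAMA_GPUSPLIT are not variables the server reads; the supported knob for layer count is the num_gpu parameter. A minimal sketch of forcing full offload through a Modelfile (the model and tag names are illustrative, and 9999 is just an arbitrarily large layer count):

$ cat > Modelfile <<'EOF'
FROM gemma2
PARAMETER num_gpu 9999
EOF
$ ollama create gemma2-full -f Modelfile
$ ollama run gemma2-full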
