[GH-ISSUE #6405] Implement layer-by-layer paging from CPU RAM into GPU for large models. #66060

Closed
opened 2026-05-03 23:49:20 -05:00 by GiteaMirror · 13 comments

Originally created by @Speedway1 on GitHub (Aug 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6405

While the GPU makers want us to believe that the main crunch point is not enough GPU power, the real issue with self-hosted LLMs is lack of memory, especially when we're inferencing at large context windows (which is where the magic starts to happen).

At the moment Ollama loads all of the model's layers and does a very good job of trying to fit everything into the GPU and then spilling over to the CPU. But the CPU is super slow.

A better way to handle large models might be:

  1. Load the entire model into RAM and set aside the KV storage, etc. for large contexts in the GPU's VRAM.
  2. Work out how much VRAM is available and translate that into how many layers can be loaded into VRAM. Let's call it _n_ layers.
  3. Load the first _n_ layers, then when inference needs to move to layer _n_+1, replace the current _n_ layers in VRAM with the next _n_ layers, and so on until the last layer is reached.

Some people have implemented single-layer paging, but by paging as many layers as possible at once there should, in theory at least, be some efficiency gains. Similarly, keeping the layers in RAM rather than reading them off disk gives maximum access and transfer speeds.
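To sketch what that loop might look like (purely illustrative pseudocode; `load_to_vram` and the layer callables are hypothetical stand-ins, not real ollama or llama.cpp APIs):

```python
# Purely illustrative: page n layers at a time from RAM into VRAM.
# `load_to_vram` is a hypothetical helper that copies a group of layers
# into VRAM, overwriting whatever group was resident before.
def paged_forward(layers, n, load_to_vram, activations):
    for start in range(0, len(layers), n):
        group = layers[start:start + n]       # next n layers, read from RAM
        vram_group = load_to_vram(group)      # evicts the previous group
        for layer in vram_group:
            activations = layer(activations)  # compute stays on the GPU
    return activations
```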

Where more than one GPU card is in the machine this might make for an even more efficient algorithm, as layers can be loaded into the dormant GPU while processing continues on the active GPU.
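To make that overlap concrete, a hedged double-buffering sketch, where `upload` and `compute` are again hypothetical callables standing in for the device copy and the per-group forward pass (no such API exists in ollama today):

```python
from concurrent.futures import ThreadPoolExecutor

# Purely illustrative: overlap uploading the next layer group with compute
# on the current one, e.g. across two GPUs or two VRAM buffers.
def run_with_prefetch(layer_groups, upload, compute, activations):
    with ThreadPoolExecutor(max_workers=1) as copier:
        pending = copier.submit(upload, layer_groups[0])
        for i in range(len(layer_groups)):
            resident = pending.result()        # wait for this group's copy
            if i + 1 < len(layer_groups):      # start copying the next group
                pending = copier.submit(upload, layer_groups[i + 1])
            activations = compute(resident, activations)
    return activations
```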

Is this possible?

GiteaMirror added the feature request label 2026-05-03 23:49:20 -05:00

@rick-github commented on GitHub (Aug 18, 2024):

If it's possible, it would be a feature request for [llama.cpp](https://github.com/ggerganov/llama.cpp/issues).


@Speedway1 commented on GitHub (Aug 18, 2024):

Thanks @rick-github. I'll raise the request there.


@igorschlum commented on GitHub (Aug 18, 2024):

@Speedway1 on macOS, RAM and VRAM are shared. I understand the need for Linux and Windows, and yes, it should be implemented in llama.cpp first. If you raise the request there, it's preferable to close the issue here.


@Speedway1 commented on GitHub (Aug 18, 2024):

> @Speedway1 on macOS, RAM and VRAM are shared. I understand the need for Linux and Windows, and yes, it should be implemented in llama.cpp first. If you raise the request there, it's preferable to close the issue here.

I have raised it there so will close this. But thanks for the info on macOS. What an absolute win that is! So macOS sounds like it is a much better platform for running LLMs locally then?


@Speedway1 commented on GitHub (Aug 18, 2024):

Thank you to all involved. Really interesting information.


@igorschlum commented on GitHub (Aug 18, 2024):

@Speedway1 It depends on the size of the LLM you want to run. I bought a Mac workstation with 192 GB of RAM—it's expensive, but I know I'll be able to sell it if needed. I can run all the LLMs locally. It's great for testing and learning.


@Speedway1 commented on GitHub (Aug 18, 2024):

> @Speedway1 It depends on the size of the LLM you want to run. I bought a Mac workstation with 192 GB of RAM—it's expensive, but I know I'll be able to sell it if needed. I can run all the LLMs locally. It's great for testing and learning.

Thank you for your reply. I am particularly interested in running Mistral Large (with a reasonable generation speed) and Llama 3.1 405B at the 131k/128k context lengths. Will your Mac handle that? And what sort of generation speed would you likely get?

I have an Nvidia 4090 in one box and two Radeon 7900 XTX (2 × 24 GB) in another. I've just worked out a spec that would allow me to have 3 × Radeon 7900 XTX, but I can't get a motherboard that supports more than that. So the most VRAM I could have on Linux would be 3 × 24 GB (72 GB), not really enough for Mistral Large and definitely not enough for Llama 405B.

If I run Mistral Large on the twin-AMD box, the CPU offloading is so slow it's not really useful, and the GPUs kick in only once every 20 seconds or so as they briefly get processing that fits onto their layers.

So if the Mac has higher-bandwidth RAM directly accessible by the GPU, that sounds like a winner.


@igorschlum commented on GitHub (Aug 18, 2024):

@Speedway1 Yes, I can run 405b-instruct-q3_K_S, but you have to increase the memory that can be used as VRAM. Usually it's not more than 66% of the RAM, but with a command you can increase it to 180 GB, leaving 12 GB for macOS and the terminal:

```bash
sudo sysctl iogpu.wired_limit_mb=184320
```

I prefer the 70B with Q8_0 quantization; I get better answers.
I will also try Mistral Large next week.
If you want me to run a test script or an app, I can do it for you. I'm happy to learn and to discover use cases.
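For reference, 184320 MB is 180 × 1024 MB, i.e. 180 GB of the 192 GB total, leaving 12 GB for the system. A tiny helper for deriving the value on other machine sizes (`wired_limit_mb` is a made-up name for illustration; only the `iogpu.wired_limit_mb` sysctl key itself is real):

```python
# The sysctl value is in MB: (192 GB total - 12 GB reserved) * 1024 = 184320.
def wired_limit_mb(total_ram_gb: int, reserve_gb: int = 12) -> int:
    return (total_ram_gb - reserve_gb) * 1024

print(f"sudo sysctl iogpu.wired_limit_mb={wired_limit_mb(192)}")
# -> sudo sysctl iogpu.wired_limit_mb=184320
```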


@Speedway1 commented on GitHub (Aug 18, 2024):

> @Speedway1 Yes, I can run 405b-instruct-q3_K_S, but you have to increase the memory that can be used as VRAM. Usually it's not more than 66% of the RAM, but with a command you can increase it to 180 GB, leaving 12 GB for macOS and the terminal:
>
> ```bash
> sudo sysctl iogpu.wired_limit_mb=184320
> ```
>
> I prefer the 70B with Q8_0 quantization; I get better answers. I will also try Mistral Large next week. If you want me to run a test script or an app, I can do it for you. I'm happy to learn and to discover use cases.

Thank you very much! I am looking forward to hearing the results of your test with Mistral Large. Also, thank you for the command to increase the allocated RAM. We're testing MOA here, with the various boxes running specific LLMs and then taking the best-of-breed results, but it's very early days: we're seeing quite a significant drop-off in attention to the original instructions, though we can see how powerful the MOA technology could be if we got it right.


@rick-github commented on GitHub (Aug 18, 2024):

FYI, recent Nvidia GPUs implement [unified memory](https://developer.nvidia.com/blog/unified-memory-cuda-beginners/), where the GPU can reach across the PCI bus and access system RAM, thereby theoretically allowing the GPU to directly access models as large as system RAM. This would do away with the need to physically swap layers out of VRAM. In practice, the bottleneck is the PCI bus, which is much slower than the internal GPU bus. Support for unified memory was [recently merged](https://github.com/ggerganov/llama.cpp/pull/8035) into llama.cpp; unfortunately, the performance gains were minor and only for models that were slightly larger than would fit in VRAM.


@rick-github commented on GitHub (Aug 18, 2024):

This is actually available in ollama 0.3.6. I set the environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1` in the server environment and then overrode the layer calculations that ollama does with `options:{"num_gpu":xx}`. I tried it with codeup:13b-llama2-chat-q4_0, a 41-layer model that normally loads only 18 layers into the GPU. With the override, llama.cpp loaded all 41 layers into memory managed by the GPU. Inference is slow, though, at 793 seconds compared to 78 seconds for the normal 18-layer split load.
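For anyone wanting to reproduce this, a sketch of the request side, assuming a local ollama server started with `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1` and listening on the default port (the `/api/generate` endpoint and the `num_gpu` option are standard ollama API features; the model and prompt are just examples from the comment above):

```python
# Sketch: ask ollama to force all 41 layers onto the GPU via num_gpu.
# Assumes the server was started with GGML_CUDA_ENABLE_UNIFIED_MEMORY=1.
import json
import urllib.request

payload = {
    "model": "codeup:13b-llama2-chat-q4_0",
    "prompt": "Write a haiku about VRAM.",
    "options": {"num_gpu": 41},  # override ollama's own layer calculation
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```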


@JamesTryand commented on GitHub (Jan 31, 2025):

> Thank you to all involved. Really interesting information.

Have you got a link to the issue on llama.cpp?


@KuSh commented on GitHub (Feb 16, 2025):

Should be [that discussion](https://github.com/ggml-org/llama.cpp/discussions/9083).


Reference: github-starred/ollama#66060