[GH-ISSUE #4643] Llama.cpp now supports distributed inference across multiple machines. #64954

Open
opened 2026-05-03 19:23:06 -05:00 by GiteaMirror · 45 comments

Originally created by @AncientMystic on GitHub (May 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4643

Llama.cpp now supports distributing inference across multiple devices to boost speeds; this would be a great addition to Ollama.

https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc

https://www.reddit.com/r/LocalLLaMA/comments/1cyzi9e/llamacpp_now_supports_distributed_inference/
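
For context, the llama.cpp RPC workflow this issue asks Ollama to wrap looks roughly like the following (a minimal sketch based on the linked example; the build flag and binary names have shifted between llama.cpp versions, and the worker IPs are placeholders):

```
# Build llama.cpp with the RPC backend enabled (flag name may differ by version)
cmake -B build -DGGML_RPC=ON && cmake --build build --config Release

# On each worker machine, expose the local GPU/CPU over RPC (default port 50052)
./build/bin/rpc-server -H 0.0.0.0 -p 50052

# On the main machine, list the workers so layers are offloaded across them
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello" \
    --rpc 192.168.1.10:50052,192.168.1.11:50052
```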

GiteaMirror added the feature request label 2026-05-03 19:23:06 -05:00

@easp commented on GitHub (May 27, 2024):

I don't think this offers any speedup, yet. It just increases the size of the models you can run. Still useful, though.


@SamuraiBarbi commented on GitHub (May 30, 2024):

> I don't think this offers any speedup, yet. It just increases the size of the models you can run. Still useful, though.

Incredibly useful. There are plenty of us who have multiple computers, each with their own GPU, but for various reasons can't run a machine with multiple GPUs. Adding this support would allow those in the community to run more competent models such as Command-R-Plus or WizardLM-2-8x22B, for example. My workstation has a 3090 (24GB) and a 1080 Ti (11GB), but I also have an HTPC with a 1080 Ti (11GB) and another machine with a 1080 Ti (11GB). With distributed inference I'd have roughly 57GB of VRAM to work with, which would make a huge difference in the quality of responses I could get if I were able to run larger, more capable models.


@AncientMystic commented on GitHub (May 31, 2024):

I agree. It wouldn't offer a performance boost beyond the capabilities of the card(s), but compare running a 40GB model on a device with 11GB of VRAM and getting 0.2 t/s versus running distributed with, as waywardspooky said, 57GB: I'm sure 57GB of VRAM would do better than trying to run it in RAM on the CPU, so it would be a performance boost in a sense, by not suffering a massive drop when running larger models.

I personally have an SFF PC as a server with a Tesla P4 8GB and a laptop with a 1050 Ti 4GB. Not great, really... but 12GB of VRAM would at least offer the ability to run better quantizations of 7B models and some 13B models without the drop to 2 t/s or less.

Most people just don't have a bunch of high-VRAM cards in the same PC and can't afford to, so anything at all that can help improve the user experience on lower-end devices in any way is a godsend.


@danividalg commented on GitHub (Jun 30, 2024):

Hope this feature can be added to ollama soon :)


@AncientMystic commented on GitHub (Jun 30, 2024):

> Hope this feature can be added to ollama soon :)

Me too. It would really help with using larger models, and it would be a lot easier to string together random devices with a few GB of VRAM each than to try to afford top-of-the-line high-end GPUs.


@0x77dev commented on GitHub (Jul 16, 2024):

This would be cool in combination with something like Redis to monitor and balance requests. An orchestration layer could ensure models are pulled on specific nodes. Additionally, the ability to specify a custom registry would make Ollama a fully-fledged, enterprise-ready, open-source solution for hosting large language models, but this is out of scope for this issue 🙃


@saul-jb commented on GitHub (Jul 24, 2024):

This would be hugely helpful to get Llama 3.1 405B running on consumer hardware.


@ahmetegesel commented on GitHub (Jul 24, 2024):

I have two 32GB Apple Silicon MacBook Pros, and it would be extremely helpful if Ollama implemented this feature. Now that Llama 3.1 is out and I've tried the 70B's capabilities, I'm dying to be able to run its better quants locally.


@gkpln3 commented on GitHub (Jul 25, 2024):

I would love to see such a feature in Ollama; self-hosting llama3.1:405b could be amazing.


@CesarPetrescu commented on GitHub (Jul 25, 2024):

I am looking forward to this. I have 4 PCs, each with a 4090, yet I can't use multiple GPUs in one PC. It would help a lot with llama3.1:70b for me.


@marabgol commented on GitHub (Jul 27, 2024):

Following. I believe this is in very high demand! How can we get involved and help?


@Qualzz commented on GitHub (Jul 29, 2024):

That's probably the feature I'm waiting for the most from Ollama.


@Readon commented on GitHub (Aug 15, 2024):

There is also an auto-scheduling algorithm that [exo](https://github.com/exo-explore/exo) uses to mix CPU and GPU resources together.
However, exo is a little bit buggy on Intel CPU-based machines.


@ecyht2 commented on GitHub (Sep 10, 2024):

I created a PR adding distributed inference; check it out and test whether there are any bugs.


@EvilFreelancer commented on GitHub (Sep 15, 2024):

Hi! I've packaged `rpc-server` into Docker images, with multiple CPU architectures supported by Docker.

https://hub.docker.com/r/evilfreelancer/llama.cpp-rpc
https://github.com/EvilFreelancer/docker-llama.cpp-rpc/blob/main/README.en.md
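
A rough usage sketch (assuming the image's entrypoint runs `rpc-server` listening on the default RPC port 50052; see the README above for the actual supported options):

```
# Hypothetical invocation; consult the image's README for real options
docker run --rm -p 50052:50052 evilfreelancer/llama.cpp-rpc:latest
```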


@hidden1nin commented on GitHub (Sep 24, 2024):

Is there a pull request or ongoing open development for this?


@ecyht2 commented on GitHub (Sep 25, 2024):

> Is there a pull request or ongoing open development for this?

Yup, I am working on it. See this PR: #6729.


@hidden1nin commented on GitHub (Sep 27, 2024):

Made a pull request to your pull request, probably didn't do it right but I wanna help!


@lipere123 commented on GitHub (Jan 1, 2025):

Hello.
I have a supercomputer and I am heavily using llama.cpp RPC.
It is working on 6 nodes, each with one RTX A4000 with 20GB of VRAM.
How can I help integrate this with Ollama, please?
Thanks in advance.
Benjamin.


@SamuraiBarbi commented on GitHub (Jan 13, 2025):

> > Is there a pull request or ongoing open development for this?
>
> Yup, I am working on it. See this PR: #6729.

Any movement on this?


@mianderson2469 commented on GitHub (Feb 5, 2025):

I have two Dell PowerEdge servers, each with two NVIDIA Tesla P40 GPUs. I would be interested in being able to use the combined resources of these machines, even if there is a penalty, as long as overall it is a net positive.


@Kreijstal commented on GitHub (Feb 24, 2025):

> I don't think this offers any speedup, yet. It just increases the size of the models you can run. Still useful, though.

How are you running DeepSeek R1 if it isn't distributed? No quants.


@FayeSpica commented on GitHub (Mar 20, 2025):

Waiting for this


@jli113 commented on GitHub (Mar 25, 2025):

+1


@misterjice commented on GitHub (Apr 5, 2025):

+1


@chboishabba commented on GitHub (Apr 28, 2025):

+1


@iAdanos commented on GitHub (May 4, 2025):

+1


@ecyht2 commented on GitHub (May 4, 2025):

Sorry for the wait, my PR should be good for testing now.


@zhangddjs commented on GitHub (May 15, 2025):

Wow, thank you for the awesome work! And by the way, may I ask: does the PR support utilizing multiple distributed CPUs? @ecyht2


@Kreijstal commented on GitHub (May 15, 2025):

merge when


@ecyht2 commented on GitHub (May 15, 2025):

> Wow, thank you for the awesome work! And by the way, may I ask: does the PR support utilizing multiple distributed CPUs? @ecyht2

I've never tested it, but having 2 different RPC servers running on CPU should work.


@gkpln3 commented on GitHub (May 16, 2025):

Building on what @ecyht2 started, I'm pretty sure I managed to get it to work. The usage I currently have is:

On slave devices, run:

```
ollama rpc
```

On the master device:

```
OLLAMA_RPC_SERVERS="[ip1]:50052,[ip2]:50052" ollama serve
```

I will test it tomorrow and update here if that works.


@gkpln3 commented on GitHub (May 17, 2025):

I managed to get an initial version to work. You can find it here: https://github.com/gkpln3/ollama/tree/feat/rpc


@aquarat commented on GitHub (May 18, 2025):

@gkpln3

> I managed to get an initial version to work. You can find it here: https://github.com/gkpln3/ollama/tree/feat/rpc

It unfortunately doesn't build via Docker for me :( but otherwise the changes look good 👍

```
 > [build 6/6] RUN --mount=type=cache,target=/root/.cache/go-build go build -trimpath -buildmode=pie -o /bin/ollama .:
15.36 # github.com/ollama/ollama/discover
15.36 discover/gpu.go:51:18: undefined: RPCServerInfo
15.36 discover/gpu.go:256:16: undefined: CheckRPCServers
15.36 discover/gpu.go:416:16: undefined: CheckRPCServers
```

@gkpln3 commented on GitHub (May 18, 2025):

@aquarat Thanks, I'll fix it.


@gkpln3 commented on GitHub (May 24, 2025):

@aquarat Fixed :) should work now


@robertgro commented on GitHub (Jul 16, 2025):

This is the new link: https://github.com/ggml-org/llama.cpp/tree/master/tools/rpc


@misterjice commented on GitHub (Aug 15, 2025):

So, does this work now for ollama? olol is not working at all....


@aquarat commented on GitHub (Aug 21, 2025):

I'm using llama.cpp directly.


@robertschulze commented on GitHub (Aug 31, 2025):

> I'm using llama.cpp directly.

But how can you integrate this into the LLM workflow, e.g. make it usable via a REST API?


@aquarat commented on GitHub (Aug 31, 2025):

llama.cpp has a server component called llama-server, which exposes the model on an OpenAI-compatible endpoint. There is another tool that can dynamically switch models using llama-server as needed (llama-swap), but I don't use that. I just run one local model currently.
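
For example, the head node might be launched and queried roughly like this (a sketch; the model path, addresses, and RPC worker list are placeholders):

```
# Start an OpenAI-compatible HTTP server, offloading layers to the RPC workers
./llama-server -m model.gguf -ngl 99 --host 0.0.0.0 --port 8080 \
    --rpc 192.168.1.10:50052,192.168.1.11:50052

# Query it like any OpenAI-style endpoint
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```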


@overcuriousity commented on GitHub (Jan 23, 2026):

Is this feature still being considered for implementation, any status updates?


@henri9813 commented on GitHub (Apr 20, 2026):

Hello,

Any news?


@nomore1007 commented on GitHub (May 1, 2026):

Been waiting 2 years for this haha


@regulad commented on GitHub (May 2, 2026):

Llama.cpp's server binary now supports a router mode.

Reference: github-starred/ollama#64954