[PR #13087] Add Support for Distributed Inferencing (continued on AMD Strix Halo) #45317

Open
opened 2026-04-25 01:02:22 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13087
Author: @Gulianrdgd
Created: 11/14/2025
Status: 🔄 Open

Base: main ← Head: rpc_clean


📝 Commits (10+)

  • 7015fce feat: Added support for llama.cpp RPC
  • c234eea doc: Added documentation for distributed inferencing
  • bf80325 feat: Added Memory Check for RPC Servers
  • ed78baa feat: Added option to change RPC servers in HTTP options
  • 4f76c4b doc: Added docs for new API options
  • 056bd69 doc: Updated request for changing RPC server to be generate instead of chat
  • b177dcf Merge remote-tracking branch 'upstream/main' into feat/rpc
  • 8068cd1 Merge remote-tracking branch 'upstream/main' into feat/rpc
  • ca5c567 server/sched.go: Fixed missing legacy gpu module
  • f645eec dicover/gpu.go: Updated RPC communication to support new protocol

📊 Changes

28 files changed (+3160 additions, -17 deletions)

View changed files

📝 CMakeLists.txt (+4 -0)
📝 Dockerfile (+77 -10)
📝 api/types.go (+2 -0)
📝 cmd/cmd.go (+14 -0)
➕ cmd/rpc_server.go (+48 -0)
➕ discover/gpu_rpc.go (+285 -0)
📝 discover/types.go (+9 -0)
📝 docs/api.md (+2 -1)
➕ docs/distributed_inferencing.md (+37 -0)
📝 envconfig/config.go (+3 -0)
📝 llama/llama.go (+11 -0)
📝 llm/server.go (+51 -2)
📝 ml/backend.go (+3 -0)
📝 ml/backend/ggml/ggml.go (+103 -2)
📝 ml/backend/ggml/ggml/.rsync-filter (+3 -0)
➕ ml/backend/ggml/ggml/include/rpc-server.h (+10 -0)
➕ ml/backend/ggml/ggml/src/ggml-rpc/CMakeLists.txt (+9 -0)
➕ ml/backend/ggml/ggml/src/ggml-rpc/ggml-rpc.cpp (+2057 -0)
➕ ml/backend/ggml/ggml/src/ggml-rpc/rpc-server.cpp (+336 -0)
➕ ml/backend/ggml/ggml/src/ggml-rpc/rpc.go (+6 -0)

...and 8 more files

📄 Description

This PR is mostly based on PR #10844; @gkpln3 did some amazing work! We wanted to make this work on our Framework desktops, so we connected them via Thunderbolt and added ROCm 7 support.

Using this PR we successfully ran qwen3:235b split across both servers.

It works, and it's pretty fast! The model distributes across both machines, and inference runs smoothly.

I do not have any performance numbers yet. I will update this PR in the coming days.
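
For context, commits ed78baa and 056bd69 above add an option to select RPC servers per request through the generate endpoint. Below is a minimal sketch of how that might look from the standard ollama Go client; the `rpc_servers` option key, its comma-separated address format, and the port numbers are assumptions inferred from the commit messages, not confirmed by this mirror (the authoritative usage is in the PR's docs/api.md and docs/distributed_inferencing.md changes).

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	// Standard ollama Go client, configured from OLLAMA_HOST.
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	req := &api.GenerateRequest{
		Model:  "qwen3:235b",
		Prompt: "Why is the sky blue?",
		// "rpc_servers" is a hypothetical option key inferred from the
		// commit messages; the addresses below are placeholders for the
		// RPC workers running on the other machines.
		Options: map[string]any{
			"rpc_servers": "192.0.2.10:50052,192.0.2.11:50052",
		},
	}

	// Stream the response tokens as they arrive.
	err = client.Generate(context.Background(), req, func(resp api.GenerateResponse) error {
		fmt.Print(resp.Response)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

The new cmd/rpc_server.go in the file list suggests the RPC worker itself is started through an ollama command on the remote machine; its exact invocation is not shown in this mirror, so the worker addresses above are illustrative only.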


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-25 01:02:22 -05:00

Reference: github-starred/ollama#45317