[PR #13165] [CLOSED] Support overriding tensor-split from config #39978

Closed
opened 2026-04-23 00:59:12 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/13165
Author: @seulement55
Created: 11/20/2025
Status: Closed

Base: main ← Head: support-overriding-tensor-split-from-config


📝 Commits (2)

  • 90029da Support overriding tensor-split from config
  • e860834 docs: document tensor-split overrides

📊 Changes

9 files changed (+583 additions, -11 deletions)

View changed files

📝 docs/gpu.mdx (+31 -1)
📝 envconfig/config.go (+4 -0)
➕ envconfig/override.go (+116 -0)
➕ envconfig/override_test.go (+140 -0)
📝 llm/server.go (+157 -2)
📝 llm/server_test.go (+123 -0)
📝 server/routes_generate_test.go (+3 -2)
📝 server/sched.go (+4 -2)
📝 server/sched_test.go (+5 -4)

📄 Description

Resolves #12010
Resolves #10172
Resolves #7047


Ollama does a rather poor job of automatically determining how to optimally split a model that doesn't fit into the VRAM of a single GPU. Up until 0.11.4, the binary could be wrapped with a shell script and the layer-split parameters overridden to manually optimize the split across GPUs, but this went away with 0.11.5.

How bad is the problem? For example, for llama 3.x 70b models and variants thereof (e.g. hermes 4), ollama offloads a total of 19/81 layers to my 4x22GB GPUs with the context set to the maximum of 128K. With the manual override configured to 18,21,21,21, all 81/81 layers are offloaded with the context still maxed out, and no OOMs occur even at the full 128K of context.

While evolving better default heuristics is appreciated, it seems very unrealistic to expect that they will ever be perfect for every model, and when they are not, there is currently no way to override the split to optimize it. hermes4:70b with only 19/81 layers offloaded is completely unusable, but with 81/81 in VRAM across the 4 GPUs it is very quick.

This patch introduces a clean feature to optionally and explicitly override the layer split on a model-by-model basis, using overrides stored in an .ini file (defaulting to ~/.ollama.ini). If the file is present, it is used; the config file path/name can be changed via OLLAMA_OVERRIDE_CONFIG if anyone deems the default unsuitable. If the file is absent, or the requested model has no override section in it, the default heuristic is used. A sketch of the lookup follows the example below.
Example config block in the .ini file:

[hermes4:70b]
tensor-split=18,21,21,21
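
The PR's real implementation lives in envconfig/override.go and is not reproduced here. As a rough, dependency-free sketch of the lookup it describes (the function name `LookupTensorSplit` and all parsing details are assumptions, not the PR's code), assuming overrides live in per-model sections keyed by `tensor-split`:

```go
// Hypothetical sketch, not the PR's actual code: look up a per-model
// tensor-split override from an .ini-style file, returning "" when no
// override applies so the caller can fall back to the default heuristic.
package override

import (
	"bufio"
	"os"
	"path/filepath"
	"strings"
)

// configPath honors OLLAMA_OVERRIDE_CONFIG, defaulting to ~/.ollama.ini.
func configPath() string {
	if p := os.Getenv("OLLAMA_OVERRIDE_CONFIG"); p != "" {
		return p
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return ""
	}
	return filepath.Join(home, ".ollama.ini")
}

// LookupTensorSplit returns the tensor-split value for model, or "" if the
// config file is missing or has no matching [model] section.
func LookupTensorSplit(model string) string {
	f, err := os.Open(configPath())
	if err != nil {
		return "" // no config file: use the default heuristic
	}
	defer f.Close()

	var inSection bool
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, ";"):
			continue // skip blank lines and comments
		case strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]"):
			// entering a new section; match it against the requested model
			inSection = strings.TrimSpace(line[1:len(line)-1]) == model
		case inSection:
			if k, v, ok := strings.Cut(line, "="); ok && strings.TrimSpace(k) == "tensor-split" {
				return strings.TrimSpace(v)
			}
		}
	}
	return ""
}
```

With a lookup like this, a non-empty result would be handed to the runner in place of the scheduler's computed split, while an empty string falls through to today's heuristic.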

Test cases are also included for regression testing against future updates.
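
The PR's tests are in envconfig/override_test.go and llm/server_test.go. A hypothetical test for the sketch above, again an illustration rather than the PR's code, might look like:

```go
package override

import (
	"os"
	"path/filepath"
	"testing"
)

func TestLookupTensorSplit(t *testing.T) {
	// Point the lookup at a throwaway config file for the duration of the test.
	cfg := filepath.Join(t.TempDir(), "ollama.ini")
	if err := os.WriteFile(cfg, []byte("[hermes4:70b]\ntensor-split=18,21,21,21\n"), 0o644); err != nil {
		t.Fatal(err)
	}
	t.Setenv("OLLAMA_OVERRIDE_CONFIG", cfg)

	if got := LookupTensorSplit("hermes4:70b"); got != "18,21,21,21" {
		t.Errorf("override: got %q, want %q", got, "18,21,21,21")
	}
	if got := LookupTensorSplit("llama3:8b"); got != "" {
		t.Errorf("missing section: got %q, want empty fallback", got)
	}
}
```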


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-23 00:59:12 -05:00
Reference: github-starred/ollama#39978