[GH-ISSUE #9286] Ability to specify GPU priority for model splitting, and don't split model unless needed #6054

Closed
opened 2026-04-12 17:22:56 -05:00 by GiteaMirror · 5 comments

Originally created by @Kamryx on GitHub (Feb 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9286

This may already be a feature (it really seems like it should be, assuming it's possible), but I can't find how I would do it.

My system has 2 RTX 3060s and a Titan XP. The 3060s perform way better for models under 24GB due to their tensor cores. By default, Ollama seems to split all models across all cards. I'd like the ability to say:

Don't split models at all unless you need to, and when you do need to split, split in this order of cards: 3060 1, 3060 2, Titan XP

That way I get to keep smaller models contained to my faster cards and, if needed, extend onto my slower cards, rather than splitting across all my cards from the get-go and being bottlenecked by my slowest.

I've played around with `CUDA_VISIBLE_DEVICES`, thinking specifying an order there would produce this behavior, but it doesn't for me. If I exclude the Titan XP it gets me halfway there, but then I lose the Titan completely, which I don't actually want to do.
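For reference, `CUDA_VISIBLE_DEVICES` only controls which devices the process sees and in what order; it doesn't change how ollama decides to split a model. A minimal sketch of pinning the enumeration order when launching the server, assuming GPU UUIDs taken from `nvidia-smi -L` (the UUIDs below are placeholders):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Launch `ollama serve` with an explicit CUDA device ordering.
	// CUDA_VISIBLE_DEVICES accepts indices or GPU UUIDs; the order
	// given is the order the process enumerates the devices in.
	cmd := exec.Command("ollama", "serve")
	cmd.Env = append(os.Environ(),
		"CUDA_DEVICE_ORDER=PCI_BUS_ID",
		// Placeholder UUIDs (3060 #1, 3060 #2, Titan XP);
		// substitute the real ones from `nvidia-smi -L`.
		"CUDA_VISIBLE_DEVICES=GPU-aaaa,GPU-bbbb,GPU-cccc",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```

Even with the order pinned, the scheduler still makes its own placement decision, which would explain why reordering alone doesn't change the split.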

GiteaMirror added the feature request label 2026-04-12 17:22:56 -05:00

@rick-github commented on GitHub (Feb 22, 2025):

> Don't split models at all unless you need to,

ollama already does this.

> and when you do need to split, split in this order of cards: 3060 1, 3060 2, Titan XP

There's a mechanism, [`tensor-split`](https://github.com/ollama/ollama/blob/68bac1e0a646e00a215b6bffb6f294f895c32238/runner/ollamarunner/runner.go#L858), that allows control of layer allocation to the GPU devices. Unfortunately this is not currently configurable at the user level; if you want to control it, you need to shim the runner to modify the layout.
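For illustration, here is a rough sketch of how llama.cpp-style runners turn tensor-split proportions into per-GPU layer counts. This is not ollama's actual code; the function name and the cumulative-rounding scheme are illustrative:

```go
package main

import "fmt"

// splitLayers distributes nLayers across GPUs in proportion to the
// given weights, using cumulative rounding so that low-weight GPUs
// are not handed stray remainder layers.
func splitLayers(nLayers int, weights []float64) []int {
	var total float64
	for _, w := range weights {
		total += w
	}
	counts := make([]int, len(weights))
	prev, cum := 0, 0.0
	for i, w := range weights {
		cum += w
		boundary := int(float64(nLayers)*cum/total + 0.5)
		counts[i] = boundary - prev
		prev = boundary
	}
	return counts
}

func main() {
	// 61 layers with a split of 1,1,0: the two fast cards share all
	// the layers and the third card gets none.
	fmt.Println(splitLayers(61, []float64{1, 1, 0})) // [31 30 0]
}
```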


@Kamryx commented on GitHub (Feb 23, 2025):

> > Don't split models at all unless you need to,
>
> ollama already does this.

How do I get it to behave this way? The default behavior splits all models, all the time, at least in my testing.


@rick-github commented on GitHub (Feb 23, 2025):

If the model is being distributed across multiple devices, ollama thinks it doesn't fit in one GPU. Look at the logs for lines with `source=sched.go`; they will show the decisions ollama is making in scheduling the model.


@rick-github commented on GitHub (Feb 23, 2025):

Note if you have set `OLLAMA_SCHED_SPREAD=1` then ollama will always try to spread the model.


@Mikec78660 commented on GitHub (Nov 7, 2025):

I have been trying to figure this out as well. If anyone has any insight, it would be greatly appreciated.

I have a 24GB 3090 and a 24GB P40. I have a model that is 30GB. I would want it to fit all the layers it can on the 3090 and what is left on the P40, as the 3090 will offer faster inference. But I can only find the option to use only one GPU or split evenly over 2 GPUs. If I have it set up so that I have a P40 and 2 P4s available, it seems to use the P40 for everything it can, and only if something doesn't fit does it put some on one or both of the P4s. So I can't tell what the logic is exactly.

Would love to know how to "shim the runner".
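For what it's worth, one conceivable shim is a wrapper binary, placed where the runner executable lives, that injects a fixed `--tensor-split` before invoking the real one. This is only a sketch under assumptions: the flag name follows the llama.cpp convention linked above, the path to the renamed original runner is hypothetical, and the exact runner invocation varies across ollama versions:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Drop-in wrapper: rename the real runner binary and put this
	// in its place. It injects a fixed --tensor-split (verify the
	// flag against your ollama build) and forwards everything else.
	args := append([]string{"--tensor-split", "1,1,0"}, os.Args[1:]...)

	// Hypothetical path to the renamed original runner.
	cmd := exec.Command("/usr/local/bin/ollama-runner.real", args...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			os.Exit(ee.ExitCode())
		}
		os.Exit(1)
	}
}
```

If ollama already passes its own `--tensor-split`, the injected flag could conflict, so a safer variant would strip any existing one from the arguments first.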

Reference: github-starred/ollama#6054