[GH-ISSUE #14062] 2 gpu's? #9187

Closed
opened 2026-04-12 22:02:13 -05:00 by GiteaMirror · 3 comments

Originally created by @jekv2 on GitHub (Feb 4, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14062

I have a 3060TIFE and an integrated AMD GPU in the AM5 9950x.

Am I able to use the Granite Ridge iGPU as well to help run models?

<img width="387" height="563" alt="Image" src="https://github.com/user-attachments/assets/fd720388-46e6-478d-bf51-ee0a4ad1d74a" />

@rick-github commented on GitHub (Feb 4, 2026):

Two GPUs of different brands can't be used for a single model, so at best you could run one model per GPU. However, if I read the screenshot correctly, the AMD has only 512MB, which is only big enough for the tiniest models, e.g. [smollm2:135m](https://ollama.com/library/smollm2:135m). That's assuming that the AMD GPU is even supported - you likely would have to try the [Vulkan](https://docs.ollama.com/gpu#vulkan-gpu-support) driver.
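As a rough sanity check on the "512MB is only big enough for the tiniest models" point, you can estimate whether a quantized model's weights plus working buffers fit in a GPU's free VRAM. The byte-per-weight and overhead numbers below are illustrative assumptions (roughly 4-bit quantization plus ~30% for KV cache and buffers), not ollama's exact accounting:

```python
def est_vram_bytes(n_params, bytes_per_weight=0.5, overhead=1.3):
    """Very rough VRAM estimate: quantized weights (~4-bit => 0.5 B/weight)
    plus ~30% for KV cache and runtime buffers (illustrative figures)."""
    return n_params * bytes_per_weight * overhead

MiB = 1024 ** 2

# smollm2 has ~135M parameters -> estimate lands well under 512 MiB
print(est_vram_bytes(135e6) < 512 * MiB)   # True

# a typical 7B model -> estimate is several GiB, far over 512 MiB
print(est_vram_bytes(7e9) < 512 * MiB)     # False
```

This is only a back-of-the-envelope check; actual usage depends on quantization, context length, and runtime overhead.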


@jekv2 commented on GitHub (Feb 4, 2026):

> Two GPUs of different brands can't be used for a single model, so at best you could run one model per GPU. However, if I read the screenshot correctly, the AMD has only 512MB, which is only big enough for the tiniest models, e.g. [smollm2:135m](https://ollama.com/library/smollm2:135m). That's assuming that the AMD GPU is even supported - you likely would have to try the [Vulkan](https://docs.ollama.com/gpu#vulkan-gpu-support) driver.

Gotcha, not worth trying.

I have a GTX-960-2GD5 lying around, with 2GB. If I slap that in the Windows PC, can the RTX and GTX run together on one model?
https://www.techpowerup.com/gpu-specs/msi-gtx-960.b3178

Thanks.


@rick-github commented on GitHub (Feb 4, 2026):

The GTX-960 is [supported](https://github.com/ollama/ollama/blob/main/docs/gpu.mdx#nvidia:~:text=980%20GTX%20970-,GTX%20960,-GTX%20950) and ollama should use both Nvidia GPUs when loading a model.
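When ollama loads a model across two GPUs, it offloads layers to each card according to available memory. Its real scheduler accounts for KV cache, buffers, and per-layer sizes, but the basic idea of spreading layers proportionally to free VRAM can be sketched like this (the 8GB/2GB figures match a 3060 Ti plus GTX 960; the 32-layer count is an arbitrary example):

```python
def split_layers(n_layers, free_vram):
    """Split n_layers across GPUs proportionally to free VRAM,
    using largest-remainder rounding so the counts sum exactly."""
    total = sum(free_vram)
    raw = [n_layers * v / total for v in free_vram]
    alloc = [int(r) for r in raw]
    leftover = n_layers - sum(alloc)
    # hand leftover layers to the GPUs with the largest fractional parts
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

# 3060 Ti (8 GB) + GTX 960 (2 GB), hypothetical 32-layer model
print(split_layers(32, [8, 2]))  # [26, 6]
```

In practice the smaller card still helps, but the PCIe hops and the GTX 960's slower memory mean throughput will be limited by the weakest link.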


Reference: github-starred/ollama#9187