[GH-ISSUE #5846] [Very minor] When eGPU gets disconnected, Ollama falls back to using CPU and doesn't detect when eGPU is connected again. #65684

Closed
opened 2026-05-03 22:14:05 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @kosmallafilip on GitHub (Jul 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5846

What is the issue?

This is quite a big issue, but affects probably less than 0.01% of users, so I am tagging this as minor.

I am using Ollama locally on an Asus ROG Flow Z13 with the XG Mobile eGPU. This is Asus's proprietary eGPU technology, quite different from connecting over Thunderbolt, so I can't really tell whether other types of eGPUs are affected as well.

If, for any reason, the eGPU disconnects, Ollama falls back to using the CPU for generating messages, which is great, but when I reconnect it I'd expect Ollama to detect the newly connected GPU and use it instead. This doesn't happen, and I was left wondering why Ollama was using so much CPU and lagging the whole system.

Due to how I use Ollama, I am still on 0.1.45, but let me know if you want me to test this on the latest release.
I don't expect this to get fixed anytime soon, but I wanted to let you know it happens.

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.45

GiteaMirror added the bug label 2026-05-03 22:14:05 -05:00
Author
Owner

@kosmallafilip commented on GitHub (Jul 22, 2024):

I've just found #5411, so I assume this will get closed as not planned, but since it's a slightly different case, I am leaving this open so you're aware that it's happening.

Author
Owner

@dhiltgen commented on GitHub (Jul 23, 2024):

Hotplug/external GPU support is tracked via #5411
