[GH-ISSUE #9568] Can't Ollama use both GPU and CPU for inference and computation at the same time? #32001

Closed
opened 2026-04-22 12:52:30 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @bouyeijiang on GitHub (Mar 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9568

When I loaded the 7B-parameter model, the GPU worked, but when I loaded the 32B model, it didn't run on the GPU.

GiteaMirror added the needs more info and bug labels 2026-04-22 12:52:30 -05:00

@rick-github commented on GitHub (Mar 7, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.

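As a hint of what to look for once logs are collected: when a model is too large for VRAM, Ollama's loader reports how many layers were offloaded to the GPU, with the remainder running on CPU. A minimal sketch of pulling that figure out of a log; the sample log line below is hypothetical, and the exact wording varies by Ollama version (real logs come from `journalctl -u ollama` on Linux or `~/.ollama/logs/server.log` on macOS, per the troubleshooting doc linked above):

```shell
# Hypothetical sample line from an Ollama server log; on a real system,
# pipe the actual log through the same grep instead.
log='llm_server: offloaded 28/65 layers to GPU'

# Extract the "offloaded X/Y layers" ratio: X layers fit in VRAM (GPU),
# the remaining Y-X layers fall back to CPU.
ratio=$(echo "$log" | grep -o '[0-9]*/[0-9]* layers')
echo "$ratio"
```

If the ratio shows far fewer layers on the GPU than the model has, the 32B model is simply exceeding available VRAM, and Ollama is splitting the work between GPU and CPU rather than failing outright.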

Reference: github-starred/ollama#32001