[PR #11845] [CLOSED] add cuda 13.x support #39506

Closed
opened 2026-04-23 00:21:58 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/11845
Author: @nope8
Created: 8/10/2025
Status: Closed

Base: main ← Head: main


📝 Commits (1)

845c2ce add cuda 13.x support (https://github.com/ollama/ollama/commit/845c2ce1ec6397843f999f1716b02c825718b653)

📊 Changes

1 file changed (+4 additions, -0 deletions)

📝 discover/cuda_common.go (+4 -0)

📄 Description

Fix: Ollama cannot use an NVIDIA GPU when run with CUDA 13.0; it fails with the following error log:

Error: 500 Internal Server Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 8891928576

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-23 00:21:58 -05:00

Reference: github-starred/ollama#39506