[PR #1028] [CLOSED] WIP: Apply a patch for building with CUDA on Linux #15711

Closed
opened 2026-04-16 05:05:33 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/1028
Author: @xyproto
Created: 2023-11-07
Status: Closed

Base: main ← Head: llama-f16-mul-mat-fix


📝 Commits (1)

  • 5d93389 Apply a patch for building with CUDA on Linux

📊 Changes

2 files changed (+45 additions, -0 deletions)

View changed files

📝 llm/llama.cpp/generate_linux.go (+1 -0)
llm/llama.cpp/patches/0006-fix-f16-mul-mat.patch (+44 -0)

📄 Description

Might fix #1024, maybe.

The patch is from llama.cpp commit 2833a6f63c: https://github.com/ggerganov/llama.cpp/commit/2833a6f63c1b87c7f4ac574bcf7a15a2f3bf3ede
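
For context, the change to `generate_linux.go` adds the new patch file to the set applied to the vendored llama.cpp tree before building. A minimal, self-contained sketch of that apply-before-build step (using stand-in file and patch names, not the actual PR contents) might look like:

```shell
# Hypothetical sketch of applying a vendored patch before compiling,
# in the spirit of this PR's generate_linux.go change. File names and
# contents below are stand-ins, not the real 0006-fix-f16-mul-mat.patch.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# A stand-in source file playing the role of llama.cpp's CUDA code.
printf 'line one\nline two\n' > ggml-cuda.cu

# A stand-in patch playing the role of the vendored fix.
cat > fix.patch <<'EOF'
--- ggml-cuda.cu
+++ ggml-cuda.cu
@@ -1,2 +1,2 @@
 line one
-line two
+line two fixed
EOF

# Apply the patch only if it has not been applied yet (idempotent).
if patch -p0 --dry-run --silent < fix.patch >/dev/null 2>&1; then
    patch -p0 < fix.patch
fi

grep 'line two fixed' ggml-cuda.cu
```

In ollama's actual build, the equivalent logic lives in the `go generate` scripts, which apply each file under `llm/llama.cpp/patches/` to the submodule before invoking the CUDA build.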


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 05:05:33 -05:00

Reference: github-starred/ollama#15711