[PR #8059] [CLOSED] tmp #43855

opened 2026-04-24 23:25:42 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/8059
Author: @mxyng
Created: 12/12/2024
Status: Closed

Base: mxyng/next-ggml ← Head: mxyng/next-gpu


📝 Commits (2)

- eaa1602 build: recursive make ggml-cuda
- 5eb0947 tmp

📊 Changes

18 files changed (+447 additions, -201 deletions)


➕ Makefile2 (+112 -0)
📝 fs/ggml/ggml.go (+26 -5)
📝 fs/ggml/gguf.go (+1 -1)
📝 llama/README.md (+1 -2)
📝 make/common-defs.make (+0 -1)
📝 make/gpu.make (+9 -52)
📝 ml/backend/ggml/ggml.go (+224 -101)
📝 ml/backend/ggml/ggml/ggml-cpu/cpu.go (+1 -2)
➕ ml/backend/ggml/ggml/ggml-cuda/.gitignore (+1 -0)
➕ ml/backend/ggml/ggml/ggml-cuda/Makefile (+65 -0)
📝 ml/backend/ggml/ggml/ggml-cuda/cuda.go (+4 -0)
📝 ml/backend/ggml/ggml/ggml.go (+1 -2)
📝 ml/backend/ggml/ggml/ggml_cuda.go (+1 -0)
➖ ml/backend/ggml/ggml_darwin_amd64.go (+0 -8)
➖ ml/backend/ggml/ggml_darwin_arm64.go (+0 -8)
➖ ml/backend/ggml/ggml_linux.go (+0 -8)
➖ ml/backend/ggml/ggml_windows.go (+0 -8)
📝 scripts/build_darwin.sh (+1 -3)

📄 Description

No description provided


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 23:25:42 -05:00

Reference: github-starred/ollama#43855