[PR #12739] feat(ggml): sync implementations for other CPU architectures with ggml #45183

Open
opened 2026-04-25 00:52:50 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12739
Author: @wszqkzqk
Created: 10/22/2025
Status: 🔄 Open

Base: main ← Head: other-archs


📝 Commits (1)

  • aad800b feat(ggml): sync implementations for other CPU architectures with ggml

📊 Changes

7 files changed (+9475 additions, -0 deletions)

View changed files

ml/backend/ggml/ggml/src/ggml-cpu/arch/loongarch/quants.c (+2160 -0)
ml/backend/ggml/ggml/src/ggml-cpu/arch/powerpc/cpu-feats.cpp (+82 -0)
ml/backend/ggml/ggml/src/ggml-cpu/arch/powerpc/quants.c (+2305 -0)
ml/backend/ggml/ggml/src/ggml-cpu/arch/riscv/quants.c (+1897 -0)
ml/backend/ggml/ggml/src/ggml-cpu/arch/riscv/repack.cpp (+342 -0)
ml/backend/ggml/ggml/src/ggml-cpu/arch/s390/quants.c (+1468 -0)
ml/backend/ggml/ggml/src/ggml-cpu/arch/wasm/quants.c (+1221 -0)

📄 Description

Add the other CPU architectures supported in upstream ggml to fix the build on those platforms.

Tested on Arch Linux for Loong64 with this PR and #12737 applied. Here is the log:
ollama-0.12.6-1-loong64-build.log

  • Failed during final linking: the build still fetches the Go module from the unpatched source (`# github.com/ollama/ollama`)
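If the linking failure comes from `go build` re-downloading the unpatched module instead of using the locally patched sources, the usual mechanism is a `replace` directive in `go.mod`. The path below is hypothetical and not part of this PR; it is a sketch of the workaround, assuming the packaging script keeps a patched checkout next to the build directory:

```go
// go.mod fragment (hypothetical path): redirect the module to a
// locally patched checkout so `go build` resolves sources there
// instead of fetching the unpatched copy from the module proxy.
replace github.com/ollama/ollama => ../ollama-patched
```

Setting `GOFLAGS=-mod=mod` and `GOPROXY=off` in the build environment is another way to keep the toolchain from reaching out to the proxy during the final link step.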

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-25 00:52:50 -05:00

Reference: github-starred/ollama#45183