[PR #10322] Add support for Intel GPUs via oneAPI/SYCL #39084

Open
opened 2026-04-22 23:44:11 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/10322
Author: @chnxq
Created: 2025-04-17
Status: 🔄 Open

Base: main ← Head: chnxq/add-oneapi


📝 Commits (10+)

  • 6c4f99c Add support Intel OneApi GPU.--draft
  • d5ecf05 Add Readme
  • 88e9e75 Merge branch 'main' into chnxq/add-oneapi
  • 81f41cd sync llama.cpp/ggml/sycl lib
  • 669d872 Merge branch 'main' into chnxq/add-oneapi
  • 553586a Merge branch 'main' into chnxq/add-oneapi
  • 69b6690 add readme & merge main branch
  • 71382f8 merge main
  • 5be3ff5 I don't know why after adding the AVX-VNNI instruction set, the Intel compiler cannot correctly recognize it. Temporarily roll back.
  • 66d2809 Merge branch 'main' into chnxq/add-oneapi
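
Commit 5be3ff5 above rolls back an AVX-VNNI enablement that the Intel compiler would not accept. For context only: AVX-VNNI's VPDPBUSD instruction fuses an unsigned-8 by signed-8 multiply with a 32-bit accumulate, which is why ggml's quantized int8 paths try to enable it. The snippet below is a minimal standalone illustration, not code from this PR, and assumes a GCC 11+ or recent Clang toolchain built with `-mavxvnni`:

```cpp
// Hypothetical illustration of the AVX-VNNI instruction set mentioned in
// commit 5be3ff5 (NOT code from the PR). Compile with: g++ -mavxvnni vnni.cpp
#include <immintrin.h>
#include <cstdint>
#include <cstdio>

int main() {
    __m256i acc = _mm256_setzero_si256();
    __m256i a   = _mm256_set1_epi8(2);   // activations, treated as unsigned 8-bit
    __m256i b   = _mm256_set1_epi8(-3);  // weights, treated as signed 8-bit

    // Each 32-bit lane accumulates four u8*s8 products in one instruction.
    acc = _mm256_dpbusd_avx_epi32(acc, a, b);

    alignas(32) int32_t out[8];
    _mm256_store_si256(reinterpret_cast<__m256i *>(out), acc);
    std::printf("lane0 = %d\n", out[0]);  // 4 * (2 * -3) = -24
    return 0;
}
```

Replacing a multiply-widen-add sequence with one fused instruction is presumably the speedup the rolled-back change was chasing.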

📊 Changes

60 files changed (+21653 additions, -26 deletions)

View changed files

📝 CMakeLists.txt (+1 -0)
📝 discover/gpu.go (+123 -24)
📝 discover/gpu_info.h (+1 -0)
➕ discover/gpu_info_sycl.c (+97 -0) (see the discovery sketch below)
➕ discover/gpu_info_sycl.h (+29 -0)
📝 discover/gpu_windows.go (+7 -0)
➕ llama/README-Intel-OneApi.md (+75 -0)
📝 llm/server.go (+32 -0)
📝 ml/backend/ggml/ggml/.rsync-filter (+1 -0)
📝 ml/backend/ggml/ggml/src/CMakeLists.txt (+2 -2)
➕ ml/backend/ggml/ggml/src/ggml-sycl/CMakeLists.txt (+183 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/backend.hpp (+36 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/binbcast.cpp (+350 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/binbcast.hpp (+39 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/common.cpp (+83 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/common.hpp (+501 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/concat.cpp (+197 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/concat.hpp (+20 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/conv.cpp (+100 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/conv.hpp (+20 -0)

...and 40 more files
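
The new discover/gpu_info_sycl.c in the list above is what lets ollama's scheduler see Intel GPUs at all. The PR's actual code is not reproduced here; as a rough guide to what that kind of probe does, here is a minimal standalone SYCL 2020 enumeration sketch, assuming the oneAPI DPC++ compiler (build with `icpx -fsycl`):

```cpp
// Hypothetical sketch of SYCL GPU discovery, similar in spirit to what a
// gpu_info_sycl probe would report back to ollama's discovery layer.
// NOT the PR's code; minimal SYCL 2020, built with: icpx -fsycl discover.cpp
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    // Query only GPU devices across all installed SYCL backends.
    auto gpus = sycl::device::get_devices(sycl::info::device_type::gpu);
    std::printf("found %zu SYCL GPU(s)\n", gpus.size());
    for (const auto &dev : gpus) {
        std::printf("name:   %s\n",
            dev.get_info<sycl::info::device::name>().c_str());
        std::printf("vendor: %s\n",
            dev.get_info<sycl::info::device::vendor>().c_str());
        // Total device memory in bytes; discovery layers typically use this
        // to decide how many model layers fit on the GPU.
        std::printf("vram:   %llu bytes\n",
            (unsigned long long)
                dev.get_info<sycl::info::device::global_mem_size>());
    }
    return 0;
}
```

The name, vendor, and global_mem_size queries plausibly map onto the fields a discovery layer reports so the server (llm/server.go above) can decide how many layers to offload.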

📄 Description

As described in #10244.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-22 23:44:11 -05:00

Reference: github-starred/ollama#39084