[PR #15872] [CLOSED] Intel level zero #62035

Closed
opened 2026-04-29 16:59:27 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15872
Author: @piyushmakhija28
Created: 4/29/2026
Status: Closed

Base: main ← Head: intel-level-zero


📝 Commits (10+)

  • f0eccd3 cmake: add Intel Level Zero SDK find module and backend build presets
  • 88b14b7 discover: add Intel Level Zero GPU enumeration and device capabilities
  • 83d6661 ml/backend/ggml/ggml/src/ggml-level-zero: add Level Zero GGML compute backend with buffer-layer fix
  • 4920190 ml/backend: add Level Zero Go backend bindings and device detection
  • ab19a74 llm: wire Level Zero backend into server model loading path
  • 2b70d13 ci: add Intel Level Zero build workflow, Dockerfile support, and release artifacts
  • 76f8212 integration: add Level Zero backend integration test suite
  • 8e5e796 docs: add Level Zero backend setup and architecture documentation
  • c9d3af4 Merge branch 'ollama:main' into intel-level-zero
  • 31a936c docs: add Claude Code project guidance file

📊 Changes

50 files changed (+14778 additions, -3 deletions)


➕ .github/workflows/ci-intel.yaml (+47 -0)
📝 .github/workflows/release.yaml (+41 -0)
📝 .github/workflows/test.yaml (+18 -1)
📝 .gitignore (+16 -0)
➕ CHANGELOG.md (+27 -0)
➕ CLAUDE.md (+116 -0)
📝 CMakeLists.txt (+40 -0)
📝 CMakePresets.json (+56 -0)
📝 Dockerfile (+65 -0)
➕ INTEL_L0_BUILD_FIX_STATE.md (+784 -0)
➕ INTEL_L0_EXECUTION_STATE.md (+4185 -0)
📝 README.md (+2 -0)
➕ cmake/modules/FindLevelZero.cmake (+90 -0)
➕ discover/gpu_level_zero.go (+132 -0)
➕ discover/level_zero_info.c (+170 -0)
➕ discover/level_zero_info.h (+94 -0)
➕ docs/level-zero.mdx (+151 -0)
➕ envconfig/level_zero.go (+38 -0)
➕ integration/level_zero_npu_test.go (+287 -0)
➕ integration/level_zero_test.go (+1026 -0)

...and 30 more files

📄 Description

No description provided


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 16:59:27 -05:00
Reference: github-starred/ollama#62035