[PR #12654] vulkan: Add memory detection for Intel GPU using Level Zero Sysman #12639

Open
opened 2025-11-12 16:41:43 -06:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12654
Author: @rillomas
Created: 10/16/2025
Status: 🔄 Open

Base: `main` ← Head: `level-zero-memory-detection`


📝 Commits (10+)

  • 2471396 initial commit for l0 sysman support
  • a072035 adding APIs
  • a69f05a Connecting L0 API to ggml API
  • addab3b WIP
  • ee6877a Connected memory size detection with zesMemoryGetState
  • 5559199 Experimenting Linux build
  • 26eaf7e minor fix
  • c171d4a Changed to use zes_memstate_t::size
  • a4bf226 Fixed logging on Windows
  • 1c2d391 Updated docker file to include L0 libs

📊 Changes

6 files changed (+892 additions, -9 deletions)

View changed files

📝 Dockerfile (+12 -1)
➕ llama/patches/0032-Add-memory-detection-for-Intel-GPU-using-Level-Zero.patch (+478 -0)
📝 ml/backend/ggml/ggml/src/CMakeLists.txt (+1 -0)
📝 ml/backend/ggml/ggml/src/ggml-impl.h (+3 -0)
📝 ml/backend/ggml/ggml/src/ggml-vulkan/ggml-vulkan.cpp (+26 -8)
➕ ml/backend/ggml/ggml/src/mem_l0_sysman.cpp (+372 -0)

📄 Description

- [x] Fix docker build
- [x] ~~Fix documentation~~ Addressed in https://github.com/ollama/ollama/pull/12711
- [x] Remove redundant logging
- [x] Add patch files for ggml

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
GiteaMirror added the pull-request label 2025-11-12 16:41:43 -06:00

Reference: github-starred/ollama-ollama#12639