[PR #11160] Enable Intel GPU support with SYCL backend #44706

Open
opened 2026-04-25 00:20:32 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/11160
Author: @desmondsow
Created: 6/22/2025
Status: 🔄 Open

Base: `main` ← Head: `main`


📝 Commits (10+)

📊 Changes

75 files changed (+22691 additions, -34 deletions)


📝 .github/workflows/release.yaml (+50 -4)
📝 .github/workflows/test.yaml (+36 -2)
📝 CMakeLists.txt (+135 -7)
📝 CMakePresets.json (+13 -0)
📝 Dockerfile (+35 -3)
📝 app/ollama.iss (+12 -1)
📝 discover/gpu.go (+152 -10)
📝 discover/gpu_info.h (+1 -0)
➕ discover/gpu_info_sycl.c (+97 -0)
➕ discover/gpu_info_sycl.h (+29 -0)
📝 discover/gpu_linux.go (+9 -0)
➕ discover/gpu_sycl.go (+21 -0)
📝 discover/gpu_windows.go (+8 -0)
📝 discover/types.go (+6 -0)
📝 docs/development.md (+19 -0)
📝 docs/docker.md (+8 -0)
📝 envconfig/config.go (+3 -3)
➕ llama/patches/0021-catch-ggml-sycl-init-exception-to-prevent-crashing-w.patch (+41 -0)
📝 ml/backend/ggml/ggml/.rsync-filter (+1 -0)
➕ ml/backend/ggml/ggml/src/ggml-sycl/CMakeLists.txt (+185 -0)

...and 55 more files

📄 Description

This PR enables Intel GPU support with the SYCL backend. It has been tested on both Ubuntu and Windows, with both integrated (iGPU) and discrete (dGPU) Intel GPUs. The project can be built using the `scripts/build_linux.sh` and `scripts/build_windows.ps1` scripts; for build environment setup, refer to `docs/development.md`.
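A minimal sketch of a Linux build, based on the scripts named in the description. The oneAPI path and the `setvars.sh` step are assumptions (the SYCL backend requires Intel's DPC++ compilers on the PATH); the actual prerequisites are documented in the PR's `docs/development.md` changes.

```shell
# Hypothetical build sketch; the oneAPI install path below is an assumption,
# not something specified by this PR.
if [ -f /opt/intel/oneapi/setvars.sh ]; then
    # Puts the Intel icx/icpx SYCL compilers and runtime libraries on PATH.
    . /opt/intel/oneapi/setvars.sh
fi

# Build entry point named in the PR description.
./scripts/build_linux.sh
```

On Windows the equivalent entry point is `scripts/build_windows.ps1`, run from an environment where the oneAPI tools are available.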


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

📝 Commits (10)

- [`b6826bf`](https://github.com/ollama/ollama/commit/b6826bfe19fc4d356877298da39b1351008e5872) sync ggml-sycl from llama.cpp
- [`741b91e`](https://github.com/ollama/ollama/commit/741b91e51af4cbe2183fe17c087a3d376516587c) add sycl support
- [`166a864`](https://github.com/ollama/ollama/commit/166a864f4d4ceafff92f235d03ab1e34494562ea) add discover by sycl
- [`f473711`](https://github.com/ollama/ollama/commit/f473711866380e52374bbd1f9782185e49941b33) compile with dpct
- [`afa0777`](https://github.com/ollama/ollama/commit/afa0777f494c0f0a2dbec6d114fa22bfcbeedbbf) improve sycl discovery
- [`d2fd672`](https://github.com/ollama/ollama/commit/d2fd672c2e7bdc678832f1bcd5765076b3b271f0) set UnreliableFreeMemory
- [`92599f5`](https://github.com/ollama/ollama/commit/92599f5c53e39abd646a88bfa944cc531e0d8aa1) use sycl api by default
- [`0fd78eb`](https://github.com/ollama/ollama/commit/0fd78eb32a750b8e77336ec5b72b08afccf5f45b) remove sycl_wrapper
- [`387fb06`](https://github.com/ollama/ollama/commit/387fb06120779ddd62396f36b8ee5bbeefe825a1) add ci changes
- [`28320d5`](https://github.com/ollama/ollama/commit/28320d5618b869d8d91ddae9d72afe62a50a79f1) add workflow_dispatch
GiteaMirror added the pull-request label 2026-04-25 00:20:32 -05:00
Reference: github-starred/ollama#44706