[PR #3278] [MERGED] Enabling ollama to run on Intel GPUs with SYCL backend #11106

Closed
opened 2026-04-12 23:21:25 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/3278
Author: @zhewang1-intc
Created: 3/21/2024
Status: Merged
Merged: 5/28/2024
Merged by: @dhiltgen

Base: main ← Head: rebase_ollama_main


📝 Commits (1)

  • fd5971b support ollama run on Intel GPUs

📊 Changes

7 files changed (+614 additions, -31 deletions)

View changed files

📝 gpu/gpu.go (+93 -31)
📝 gpu/gpu_info.h (+1 -0)
➕ gpu/gpu_info_oneapi.c (+214 -0)
➕ gpu/gpu_info_oneapi.h (+211 -0)
➕ gpu/gpu_oneapi.go (+21 -0)
📝 llm/generate/gen_linux.sh (+30 -0)
📝 llm/generate/gen_windows.ps1 (+44 -0)
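
The changes to gpu/gpu.go extend GPU discovery so that, alongside the existing NVIDIA and AMD probes, an Intel oneAPI runtime can be detected and reported. As a rough illustration only, the fallback order can be sketched in Go; the names below are illustrative, not the PR's actual identifiers, and the boolean flags stand in for the real shared-library probes:

```go
package main

import "fmt"

// backend names an acceleration target for the llama.cpp runner.
type backend string

const (
	backendCUDA   backend = "cuda"
	backendROCm   backend = "rocm"
	backendOneAPI backend = "oneapi" // Intel GPUs via SYCL, added by this PR
	backendCPU    backend = "cpu"
)

// pickBackend mimics a vendor fallback chain: NVIDIA first, then AMD,
// then Intel oneAPI, finally plain CPU. In the real code each flag
// corresponds to successfully loading a vendor management library.
func pickBackend(hasCUDA, hasROCm, hasOneAPI bool) backend {
	switch {
	case hasCUDA:
		return backendCUDA
	case hasROCm:
		return backendROCm
	case hasOneAPI:
		return backendOneAPI
	default:
		return backendCPU
	}
}

func main() {
	// A machine with only an Intel GPU selects the SYCL/oneAPI path.
	fmt.Println(pickBackend(false, false, true))
}
```
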

📄 Description

Hi, I am submitting this PR to enable ollama to run on Intel GPUs with SYCL as the backend. This PR was originally started (https://github.com/ollama/ollama/pull/2458) by @felipeagc, who is currently unable to participate actively due to relocation.
The original PR had fallen behind the main branch, making it inconvenient for the maintainers @mxyng @jmorganca @dhiltgen to review, so I rebased onto the latest main branch and opened this new pull request. I have verified that it works correctly on Ubuntu 22.04 with an Arc A770 GPU.
I am not very familiar with this project, so I welcome any guidance and assistance from the community. Let's work together to make ollama support Intel GPU platforms. cc: @hshen14 @kevinintel @airmeng

UPDATE: works well on Windows 10 + Arc A770
UPDATE: works well in the oneAPI Docker image (oneapi-basekit, Ubuntu 22.04) + Arc A770
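
For anyone wanting to try the branch, the additions to llm/generate/gen_linux.sh build a SYCL variant of the runner when an oneAPI environment is active. A minimal sketch of the build steps on Linux, assuming the Intel oneAPI Base Toolkit is installed in its default location (the path and env handling follow standard oneAPI conventions, not this PR's exact script):

```shell
# Hedged sketch: build ollama with the SYCL backend enabled.
# ONEAPI_ROOT and the setvars.sh path are oneAPI conventions.
ONEAPI_ROOT="${ONEAPI_ROOT:-/opt/intel/oneapi}"
if [ -f "$ONEAPI_ROOT/setvars.sh" ]; then
    # Load the icpx/SYCL compilers and runtime libraries into the env.
    . "$ONEAPI_ROOT/setvars.sh"
    go generate ./...   # runs llm/generate/gen_linux.sh
    go build .
else
    echo "oneAPI Base Toolkit not found under $ONEAPI_ROOT; skipping SYCL build" >&2
fi
```

On a machine without the toolkit the script prints a notice and does nothing, so it is safe to run as a probe before a full build.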


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-12 23:21:25 -05:00

Reference: github-starred/ollama#11106