[GH-ISSUE #2631] OpenLlama on Intel graphics card? #1553

Closed
opened 2026-04-12 11:27:49 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @tambetvali on GitHub (Feb 21, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2631

Hello!

I'm using CodeLlama-7b on Ubuntu 22.04, through either Visual Studio Code 1.86 or the Ollama command-line tool, on an HP ProBook 440 G6 with an Intel® Core™ i3-8145U CPU @ 2.10GHz × 4, 16 GB of memory, and Mesa Intel® UHD Graphics 620 (WHL GT2) graphics, also identified as Intel Corporation WhiskeyLake-U GT2 [UHD Graphics 620].

I installed IPEX (Intel Extension for PyTorch) and, in the process, something like CUDA drivers from Intel's site. It's hard to get anything accelerated with this, though.
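Here is the kind of minimal check I would expect to confirm whether IPEX sees the GPU at all (a sketch, assuming the intel_extension_for_pytorch package installed correctly; on this laptop it may well report no device):

```python
# Minimal sanity check that IPEX actually sees an Intel GPU.
# If torch.xpu is missing or reports no devices, the driver/oneAPI
# stack is not set up and everything will run on CPU.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 (registers the xpu backend)

if hasattr(torch, "xpu") and torch.xpu.is_available():
    print("XPU devices:", torch.xpu.device_count())
    print("Device 0:", torch.xpu.get_device_name(0))
else:
    print("No XPU device visible; falling back to CPU.")
```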

With this setup, CodeLlama from Visual Studio Code is extremely slow, and CodeLlama from the command line is also quite slow, and I'm only using the 7b version.

The instructions for updating CodeLlama do not seem very easy - I am asked to install numerous things from HuggingFace, but there seems to be no link between the HuggingFace model hub, the Ollama tool, and Visual Studio Code - none of them lets me directly use such a HuggingFace model. I don't actually understand what is missing, or why I simply don't have an acceleration option in those other tools. It also feels more like building an application than installing one - so it won't update automatically, for example (unless perhaps I want to do this on Gentoo).

I have the following questions:

  • Should the 7b model be slow or fast on my computer (right now I wait several minutes in VSC for simple questions, and some time in Ollama), and is it possible to get the 13b or 70b models going on this computer? (A rough memory sketch follows this list.)
  • What do I have to do to get Intel acceleration in the VSC and Ollama applications/plugins - not as a separate Python application - and to integrate them into my infrastructure with automatic updates, like all other programs have? After this, should the Ollama application be fast or slow on my computer? I don't know what to expect from my machine, or whether searching for acceleration methods is worth the time. Maybe it should alert in red when processor use is unacceptable and it cannot coexist with other programs, telling me "CodeLlama is slowed down by 17% by Firefox and a PDF reader", or that I should replace VSC with another IDE to get better results. At this speed, I don't even expect to use real-time code completion.
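For the 13b/70b question above, a back-of-the-envelope memory estimate may help, assuming the 4-bit quantized GGUF variants that Ollama serves by default (ballpark figures, not official Ollama numbers):

```python
# Rough RAM needed to hold a 4-bit quantized model plus runtime overhead.
def est_gb(params_billion, bits=4, overhead=1.2):
    return params_billion * 1e9 * bits / 8 / 1e9 * overhead

for size in (7, 13, 70):
    print(f"{size}b: ~{est_gb(size):.0f} GB")
# 7b: ~4 GB and 13b: ~8 GB fit in 16 GB RAM (13b leaves little room);
# 70b: ~42 GB does not fit and would swap heavily.
```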

Is there an Ollama speed benchmark somewhere for different generations of processors, memory amounts, graphics cards, drivers, other relevant parameters, and installation options? Why not have anonymous collection of those parameters, answer speeds together with information on the complexity of each answer (if there are varying parameters per question), and resource load (whether other apps are actively using the processor, memory, etc.)?

From this I could learn how relevant my usage pattern, settings, computer type, OS, and the tool from which I use CodeLlama are to its overall speed, and what to expect if I move into a higher rank of users - those who, for example, have installed HuggingFace's Ollama builds or otherwise updated their acceleration (this way, it would also be synchronized across different builds using the same model). I don't know what you should do about hackers who report false information; maybe there should also be trust networks :)

I think a performance database shared between users is somewhat critical for getting AI applications installed. For example, I don't really know whether the 70b model would get swapped from memory to the hard drive instantly on my computer, or what happens if another program is actively using half the memory - I should be able to see those relations for my machine, and how other users have boosted the same model, the same training, the same algorithm, etc., and got it going with different clients to the Ollama server and others. Maybe also the possibility to test with a standard test (questions, checking of answers) and explain how it was boosted, or to select checkboxes for acceleration, moderation of computer use, etc.
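In the absence of such a shared database, here is the kind of per-machine measurement I mean (a minimal sketch against Ollama's local REST API; the eval_count and eval_duration fields come from the /api/generate response, with duration in nanoseconds):

```python
# Measure local generation speed through Ollama's REST API.
# Assumes an Ollama server on the default port with codellama:7b pulled.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "codellama:7b",
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    stats = json.load(resp)

tokens_per_s = stats["eval_count"] / (stats["eval_duration"] / 1e9)
print(f"{stats['eval_count']} tokens at {tokens_per_s:.1f} tok/s")
```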

GiteaMirror added the question label 2026-04-12 11:27:49 -05:00
Author
Owner

@dhiltgen commented on GitHub (Jul 24, 2024):

Intel GPU support is tracked in #1590

Without GPU support, you'll be running on CPU, and I believe that's a 2-core CPU, so I would expect fairly slow token rates.
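(The "× 4" in the hardware string above is hyper-threads, not cores; the i3-8145U is 2 cores / 4 threads. A quick, hypothetical way to see what the OS reports:)

```python
# os.cpu_count() reports logical CPUs (hyper-threads), so a
# 2-core/4-thread i3-8145U prints 4 here, not 2.
import os
print("logical CPUs:", os.cpu_count())
```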
