[GH-ISSUE #533] GPU Support for Ollama on Microsoft Windows #62282

Closed
opened 2026-05-03 08:05:46 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @dcasota on GitHub (Sep 15, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/533

Originally assigned to: @BruceMacD on GitHub.

Hi,

To run Ollama from source with an NVIDIA GPU on Microsoft Windows, there is currently no setup description, and the Ollama source code still contains some TODOs for it. Is that right?

Here are some thoughts.

Setup

  1. NVidia drivers

    1A. Software drivers: https://www.nvidia.com/download/index.aspx

    1B. Nvidia CUDA Toolkit https://developer.nvidia.com/cuda-downloads
    Check GPU support with nvidia-smi.exe, and check nvcc.exe for the CUDA compilation tools ../11/12.

    1C. NVidia Omniverse > PhysX > Blast seems to be necessary for NVIDIA GPU support as well (struck through in the original post, so probably not required after all).
    git clone https://github.com/NVIDIA-Omniverse/PhysX
    call .\PhysX\blast\build.bat

  2. Git https://git-scm.com/download/win

  3. Python https://www.python.org/downloads/windows/

  4. Go https://go.dev/doc/install

  5. Gcc https://sourceforge.net/projects/mingw-w64/files/mingw-w64/mingw-w64-release/

  6. Cmake https://cmake.org/download/

  7. Winlibs https://winlibs.com/

  8. Bazel https://github.com/bazelbuild/bazel/releases
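Once steps 1-8 are done, a quick sanity check that every tool is reachable on PATH can save a failed build later. A minimal sketch for a Git Bash/MSYS shell (the check_tools helper is our own, not part of any installer above):

```shell
# Hedged sketch (Git Bash / MSYS shell): verify the tools from steps 1-8
# are reachable on PATH before starting the build. check_tools is our own
# helper name, not part of any of the tools listed.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found:   $tool"
    else
      echo "missing: $tool"
    fi
  done
}

check_tools git python go gcc cmake bazel nvcc nvidia-smi
```

Any line reported as missing usually means the installer did not update PATH (see the CMake discussion further down in this thread).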

edited:
Regarding the content in .\examples: a few additional tools are necessary to get requirements.txt working on Microsoft Windows. Some of the dependencies have to be installed separately (steps 6-8); most can be added simply via pip install.
The following snippet still produces warnings, but it helps to get .\examples\langchain-document\main.py started.

```
pip install unstructured
pip install pdf2image
pip install pdfminer
pip install pdfminer.six
pip install pyproject.toml
pip install pysqlite3
pip install gpt4all
pip install chromadb
pip install tensorflow
pip install opencv-python
pip install bazel-runfiles
pip install -r .\examples\langchain-document\requirements.txt
pip install langchain
```

After that, build and install Ollama:

```
git clone https://github.com/jmorganca/ollama
cd .\ollama
mkdir ..\.ollama
go generate .\...
go build -ldflags '-linkmode external -extldflags "-static"' .
```

Check that the executable ollama.exe has been created.

Foreseen source code modifications

llm\llama.go, function chooseRunner, function NumGPU
docs\development.md
generate_darwin_amd64.go (compare with generate_linux.go for cuda)
...


@yc1ggsddu commented on GitHub (Sep 15, 2023):

I have no idea why this error happens; could you please help me?
`llm\llama.go:29:12: pattern llama.cpp/*/build/*/bin/*: no matching files found`


@dcasota commented on GitHub (Sep 15, 2023):

Hi @yc1ggsddu, regarding my question about what to do, I think I found a promising direction by studying https://github.com/ggerganov/llama.cpp/blob/master/.github/workflows/build.yml.

Regarding your question about line 29 in llama.go: in the current source (https://github.com/jmorganca/ollama/commit/2540c9181c986825652fa5f2ea5379b6e7662fd4), that line is a comment, so it won't be processed.

[screenshot omitted]

I'm not sure I've understood your question correctly.
The directory structure changed from version v.0.14 to the latest, so in my lab I simply recreated everything from the GitHub source. Have you tried that?


@yc1ggsddu commented on GitHub (Sep 15, 2023):

Hi @dcasota, I re-cloned the code and, as you said, that solved my problem (though I haven't finished the whole process yet). Thanks for your patient answer!

Another question for you: do I have to write a CMakeLists.txt file in the ollama folder before I run go generate ./...? I'm asking because I encounter this problem:

```
CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
llm\llama.cpp\generate.go:12: running "cmake": exit status 1
```

Since I don't know how CMake works, I searched the internet for the error; the most common answer is this one: https://stackoverflow.com/questions/70524164/cmake-c-compiler-not-set-after-enablelanguage. Is that answer correct? If it is, what should I do to solve my problem?

Sincerely looking forward to your response!


@yc1ggsddu commented on GitHub (Sep 15, 2023):

PS @dcasota: line 12 in llm\llama.cpp\generate.go (https://github.com/jmorganca/ollama/blob/main/llm/llama.cpp/generate.go) is actually just a comment as well. But I don't know why that line has to be run.
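As a side note not from the original thread: the reason a "comment line" can be executed at all is Go's generate mechanism. Lines beginning with //go:generate are ordinary comments to the compiler, but the go generate tool scans for them and runs the command they contain. A minimal illustration (file name and command are made up, not from the Ollama repo):

```go
// generate_example.go -- illustrative only, not a file from the Ollama repo.
// The directive below is an ordinary Go comment, yet `go generate ./...`
// parses it and executes the command after "go:generate". When that command
// fails, go reports the file and line of the comment, which is why an error
// message can point at a "comment line" such as generate.go:12.
package example

//go:generate cmake -S llama.cpp -B build
```

So the errors above come from the cmake command embedded in the directive, not from Go compiling the comment.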


@dcasota commented on GitHub (Sep 15, 2023):

@yc1ggsddu I'm assuming that cmake is not correctly configured.

  • When installing e.g. cmake-3.27.5-windows-x86_64.msi (from cmake.org), the installer asks at the end about modifying the PATH variable; the options are no modification, 'all users', or 'current user only'. Select 'all users', which adds the CMake directory to the PATH variable. You can verify this under Control Panel > System and Security > System > Advanced system settings > Environment Variables.

  • [screenshot omitted]

Then execute go generate ./... in the ollama directory.
And yes: the corresponding generate_darwin_amd64.go only contains a command switch for a CPU build, not for a GPU build.
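One hedged way to rule out the PATH issue entirely is to point CMake at the MinGW-w64 compilers explicitly. The paths below are illustrative and depend on where MinGW-w64 was installed:

```shell
# Illustrative sketch (Git Bash / MSYS); substitute your MinGW-w64 path.
# Naming the compilers explicitly avoids the
# "CMAKE_C_COMPILER not set, after EnableLanguage" error when PATH is wrong.
export CC=/c/mingw64/bin/gcc.exe
export CXX=/c/mingw64/bin/g++.exe
cmake -G "MinGW Makefiles" \
      -DCMAKE_C_COMPILER="$CC" \
      -DCMAKE_CXX_COMPILER="$CXX" \
      -S . -B build
```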


@yc1ggsddu commented on GitHub (Sep 16, 2023):

@dcasota Thanks for your answer! However, I've already configured cmake correctly as shown.

  • [screenshots omitted]

Anyway, thank you very much for your answer! I'll try to figure it out.


@yc1ggsddu commented on GitHub (Sep 16, 2023):

@dcasota I get the result after running your command lines!

I finally solved my problems by re-commenting the "comment lines" shown in the error messages. Really ridiculous...


@dcasota commented on GitHub (Sep 18, 2023):

Findings, FYI:

I tried the langchain-document example with a large PDF. In a fresh lab with the latest Ollama source compiled on Windows 11, during the first phase the built-in GPU was quite active, the CPU load was comparatively low, and the NVIDIA GPU wasn't used at all.

[screenshot of GPU/CPU utilization omitted]

The process stopped with the following error.

```
PS C:\Users\dcaso\ollama> python .\examples\langchain-document\main.py
Traceback (most recent call last):
  File "C:\Users\dcaso\ollama\examples\langchain-document\main.py", line 34, in <module>
    vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\dcaso\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\vectorstores\chroma.py", line 637, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "C:\Users\dcaso\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\vectorstores\chroma.py", line 601, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "C:\Users\dcaso\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\vectorstores\chroma.py", line 224, in add_texts
    raise e
  File "C:\Users\dcaso\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\vectorstores\chroma.py", line 210, in add_texts
    self._collection.upsert(
  File "C:\Users\dcaso\AppData\Local\Programs\Python\Python311\Lib\site-packages\chromadb\api\models\Collection.py", line 298, in upsert
    self._client._upsert(
  File "C:\Users\dcaso\AppData\Local\Programs\Python\Python311\Lib\site-packages\chromadb\api\segment.py", line 290, in _upsert
    self._producer.submit_embeddings(coll["topic"], records_to_submit)
  File "C:\Users\dcaso\AppData\Local\Programs\Python\Python311\Lib\site-packages\chromadb\db\mixins\embeddings_queue.py", line 127, in submit_embeddings
    raise ValueError(
ValueError:
                Cannot submit more than 5,461 embeddings at once.
                Please submit your embeddings in batches of size
                5,461 or less.
```

The Chroma issue seems to correlate with https://github.com/chroma-core/chroma/issues/1049.
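Until that upstream issue is resolved, one workaround is to upsert the documents in batches no larger than the limit from the traceback. A minimal sketch (the chunked helper and MAX_BATCH constant are our own, not part of langchain or chromadb):

```python
# Batch helper for working around Chroma's per-call embedding limit.
# MAX_BATCH matches the 5,461 limit reported in the traceback above.
MAX_BATCH = 5461

def chunked(items, size=MAX_BATCH):
    """Yield successive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical usage in main.py, replacing the single from_documents call:
# vectorstore = Chroma(embedding_function=GPT4AllEmbeddings())
# for batch in chunked(all_splits):
#     vectorstore.add_documents(batch)
```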


@BruceMacD commented on GitHub (Sep 28, 2023):

@yc1ggsddu your issue should be resolved once #637 gets in


@NeoPrint3D commented on GitHub (Oct 17, 2023):

Ollama is also, for some reason, running on the CPU when it should be running on the GPU under WSL2.


@jmorganca commented on GitHub (Oct 26, 2023):

@NeoPrint3D do you have the logs available (`journalctl -u ollama`)? GPU support should definitely be enabled on WSL2 with NVIDIA GPUs.

For this issue, merging it with #403
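Alongside the service log mentioned above, a hedged quick checklist for GPU visibility inside WSL2 (the paths are the standard WSL2 driver mount; nothing here is Ollama-specific):

```shell
# Inside the WSL2 distro: the Windows NVIDIA driver exposes its CUDA
# libraries under /usr/lib/wsl/lib; if these are absent, no application
# will see the GPU. nvidia-smi should list the card, and journalctl
# shows what the Ollama service logged at startup.
ls /usr/lib/wsl/lib | grep -i -E 'cuda|nvidia'
nvidia-smi
journalctl -u ollama --no-pager | tail -n 50
```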


@NeoPrint3D commented on GitHub (Oct 26, 2023):

sure thing

https://drive.google.com/file/d/13eWwgRA07L6-bHGhX82Bnl14IMb1cm6L/view?usp=drive_link


@maogeigei commented on GitHub (Mar 17, 2024):

[screenshot omitted]
I get `BUILD ERROR - YOUR DEVELOPMENT ENVIRONMENT IS NOT SET UP CORRECTLY` and have no idea what is wrong. Also, there is no .\examples\langchain-document\requirements.txt.
[screenshot omitted]


Reference: github-starred/ollama#62282