[GH-ISSUE #403] Ollama Windows version #25943

Closed
opened 2026-04-22 01:48:04 -05:00 by GiteaMirror · 36 comments

Originally created by @deadcoder0904 on GitHub (Aug 24, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/403

Originally assigned to: @dhiltgen on GitHub.

I saw that a Windows version is coming, but there was no mention of when. It would be great if you pinned this issue, as more people use Windows and Ollama has such a great DX.

The project looks absolutely brilliant. Would love to use text (GPT-4) and code (Copilot) locally.

GiteaMirror added the feature request and windows labels 2026-04-22 01:48:05 -05:00

@LogicalPizza commented on GitHub (Aug 25, 2023):

Let me piggyback on this: I managed to build and run Ollama on Windows; however, I'm only able to leverage the CPU.

It takes 2 minutes to answer "Hi". Does anyone have any hint on how to enable processing on the GPU? Thanks a lot!


@technovangelist commented on GitHub (Aug 25, 2023):

There are a few steps we need to tackle. We are close on the runner, but we still need the rest of the app that surrounds it. This is probably a week or two out, at least. But it is coming. Thanks for finding the project and being as excited as we are about it.


@jmorganca commented on GitHub (Aug 26, 2023):

What @technovangelist said :) Although I'll re-open this so we can track it.


@darkacorn commented on GitHub (Sep 10, 2023):

The generator files for the OS need `-DLLAMA_CUBLAS=ON` for the llama.cpp builds. There should be no other breaking changes at all; at least I did not see any on Linux yet.
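For reference, a minimal sketch of a llama.cpp build with cuBLAS enabled, as it looked at the time (this assumes a CUDA toolkit is installed; the flag has since been renamed upstream):

```bash
# Build llama.cpp with cuBLAS offload enabled (circa late 2023).
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
mkdir build && cd build
cmake .. -DLLAMA_CUBLAS=ON
cmake --build . --config Release
```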


@JimmiHfan101 commented on GitHub (Oct 2, 2023):

@technovangelist @BruceMacD Any update on the Windows release, by chance?


@clebio commented on GitHub (Oct 8, 2023):

I'm able to compile it on Windows and run it, but I'm not getting GPU support. I don't know CMake well, but I can handle the submodule updates and basic bits like `cmake .. -DLLAMA_CUBLAS=ON`. Anything I can do to help out?
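One way to check whether a cuBLAS build is actually using the GPU is to offload layers explicitly and read the startup log; a rough sketch (the model path is a placeholder, and binary names reflect the llama.cpp layout of the time):

```bash
# Run with -ngl to request GPU layer offload; 33 covers a 7B model's layers.
./build/bin/main -m ./models/llama-2-7b.Q4_0.gguf -ngl 33 -p "Hi"

# In the startup output, look for a line like:
#   llm_load_tensors: offloaded 33/33 layers to GPU
# If it reports 0 layers offloaded, the build is falling back to CPU.
```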


@Xasomoeru commented on GitHub (Nov 17, 2023):

Is it possible to run this in a Linux VM on Windows? I haven't tried; just wondering, because it's been 3 weeks.


@lukethacoder commented on GitHub (Nov 17, 2023):

Managed to get this running on Windows via WSL 2 without any issues. 99% sure it ran with my RTX 3070 and not the CPU, out of the box, without the need to adjust any config.
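For anyone wanting to reproduce that WSL 2 setup, a minimal sketch (this assumes an Ubuntu distro under WSL 2 and a recent NVIDIA driver installed on the Windows side; the install script is the one documented for Linux):

```bash
# Inside the WSL 2 distro: install Ollama with the official Linux script.
curl -fsSL https://ollama.com/install.sh | sh

# The GPU should already be visible; the driver is passed through from Windows.
nvidia-smi

# Pull and run a model; GPU usage can be watched from Windows Task Manager.
ollama run llama2
```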


@Alexandre-Fernandez commented on GitHub (Nov 24, 2023):

Native Windows support would be great.


@deadcoder0904 commented on GitHub (Dec 7, 2023):

winget support would be cool as well, as I use that to install anything on Windows nowadays. It's officially supported by Microsoft itself.


@cooleydw494 commented on GitHub (Dec 21, 2023):

> Managed to get this running on Windows via WSL 2 without any issues. 99% sure it ran with my RTX 3070 and not the CPU, out of the box, without the need to adjust any config.

That's amazing, but I tried with no success. Every time I serve it, I get a message saying the nvidia-smi command failed. I installed drivers on WSL 2 and on my Windows machine, to no avail.
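A few generic checks that usually narrow down a failing nvidia-smi under WSL 2 (this is standard NVIDIA-on-WSL guidance rather than anything Ollama-specific):

```bash
# nvidia-smi inside WSL 2 comes from the Windows driver passthrough.
nvidia-smi

# The WSL CUDA stub libraries are mounted in from the Windows side.
ls -l /usr/lib/wsl/lib/libcuda.so*

# Important: do not install a Linux display driver inside WSL 2; the Windows
# driver provides the GPU, and installing one can break the passthrough.
```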


@prabirshrestha commented on GitHub (Jan 5, 2024):

In case anyone is looking to manually compile Ollama as a native Windows app, here is what I did.

Install [scoop](https://scoop.sh/). This is similar to apt-get for Linux and Homebrew for Mac.

Then run the following commands to build `ollama.exe`.

```bash
scoop install go cmake gcc
set CGO_ENABLED="1"
git clone https://github.com/jmorganca/ollama.git
go generate ./...
go build .
```

* Update 1: Install Visual Studio Community 2022 with C++ Profiling Tools.
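If the build succeeds, a quick smoke test of the resulting binary might look like this (the model name is just an example):

```bash
# Start the server in one terminal...
./ollama.exe serve

# ...then, in a second terminal, pull and chat with a model.
./ollama.exe run llama2
```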

@mesopa commented on GitHub (Jan 9, 2024):

> In case anyone is looking to manually compile Ollama as a native Windows app, here is what I did.
>
> Install [scoop](https://scoop.sh/). This is similar to apt-get for Linux and Homebrew for Mac.
>
> Then run the following commands to build `ollama.exe`.
>
> ```bash
> scoop install go cmake gcc
> set CGO_ENABLED="1"
> git clone https://github.com/jmorganca/ollama.git
> go generate ./...
> go build .
> ```

I tried on Windows 11 but it always ends with the same error:

```
dumpbin : The term 'dumpbin' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
```

@BruceMacD commented on GitHub (Jan 9, 2024):

@mesopa that looks like a C++ dependency error. If you have Visual Studio installed on your system, try using the `Developer Command Prompt for Visual Studio` instead of the regular command prompt to build the project.
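Equivalently, you can load the MSVC environment into an existing shell so `dumpbin` ends up on `PATH`; a sketch assuming a default Visual Studio 2022 Community install (adjust the edition and path to match your machine):

```bat
:: cmd.exe: load the x64 MSVC environment (default install path shown).
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat"

:: dumpbin should now resolve.
where dumpbin
```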


@prabirshrestha commented on GitHub (Jan 10, 2024):

@mesopa I updated the comment. You need to install Visual Studio Community 2022 with C++ Profiling Tools for `dumpbin`. I noticed this change when I was trying to compile the latest master.


@LiteSoul commented on GitHub (Jan 10, 2024):

Any update on a Windows version? Six months have passed since it was a week away. Thanks.


@zcfrank1st commented on GitHub (Jan 17, 2024):

when windows...


@yanghuantian commented on GitHub (Jan 18, 2024):

when windows...


@dhiltgen commented on GitHub (Jan 18, 2024):

Sorry about the `dumpbin` hard dependency. I've made a number of improvements for the Windows build in #2007, which should improve the situation. It also should be better now at detecting CUDA and skipping that part of the build if it isn't detected, like we do on Linux.

While https://github.com/jmorganca/ollama/blob/main/docs/development.md#windows could be enhanced, please let us know if you're unable to build locally with those instructions (either doc improvements or fixing corner cases in the build scripts for Windows).

As far as "when windows" goes: we're working to get the main Ollama runtime in good shape on Windows, and then package it up with an installable app, much like we do on macOS. Hopefully folks who are comfortable building from source can start leveraging their GPUs in a native `ollama.exe` from `main` now, and the installable app is coming soon.


@Don0001 commented on GitHub (Jan 18, 2024):

I futzed around and was in over my head. Success came when I followed these instructions:
https://m.youtube.com/watch?v=C7rFk-GbdCg
Basic steps all the way through, and now I'm running Ollama in WSL 2 and Ubuntu.


@BradKML commented on GitHub (Jan 26, 2024):

@dhiltgen thanks for the update; once there is an installable, a Chocolatey package will surely be a good addition. There are fewer dependencies on Linux-exclusive tools now, right?


@SpanishHearts commented on GitHub (Feb 3, 2024):

Yeah, I'm really, really missing a proper Windows 11 executable.


@brettforbes commented on GitHub (Feb 9, 2024):

Hi,
We have an open-source Electron app that works on Windows, Mac, and Linux, and we would like to use Ollama on all platforms, but we are not in control of the laptop and cannot install WSL or Docker Desktop. Unfortunately, we cannot yet use Ollama without native Windows support.

Is there any status update on the native Windows version, so we can start to use Ollama?
Thanks


@BruceMacD commented on GitHub (Feb 9, 2024):

@brettforbes you can build Ollama from source for Windows. That's how I packaged it into the Windows version of the chatd Electron app (source available on my GitHub).


@deependhulla commented on GitHub (Feb 9, 2024):

You can also try Llamafile: a single executable that can bundle a model, or ship as just the program so you can use a different model. It might be useful; check out the repository. I have tried it on Linux and Windows on an i3 with 8 GB RAM, and it performed well. It might be a good fit for your app integration.

Visit the GitHub repository for more information.
https://github.com/Mozilla-Ocho/llamafile


@BradKML commented on GitHub (Feb 10, 2024):

@BruceMacD could you roll yours into a Chocolatey installer of the compiled software before the official Ollama app gets released? Or should we all wait a while?


@dhiltgen commented on GitHub (Feb 15, 2024):

The Windows Preview is now available

https://ollama.com/download/windows


@Don0001 commented on GitHub (Feb 15, 2024):

Super easy install and ran perfectly. You all did a great job with this one.


@dhiltgen commented on GitHub (Feb 16, 2024):

I think we can mark this one resolved now. If folks run into any problems with the preview, please let us know on Discord or file new issues.


@jpablo-ortiz commented on GitHub (Feb 16, 2024):

I'd like to know whether this release fixed GPU usage, because I already installed the Windows version, but when I open Task Manager and run a model, I see that my GPU doesn't even reach 2% utilization. Do you know what might be causing this?


@dhiltgen commented on GitHub (Feb 17, 2024):

@jpablo-ortiz if your GPU has a small amount of VRAM, larger models will not fit, which can result in low GPU utilization. Look in the server.log file for a line that looks like this:

```
llm_load_tensors: offloaded 33/33 layers to GPU
```

The more layers on the GPU, the better.
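On the Windows preview, a quick way to watch that log as it is written is PowerShell's Get-Content; a sketch assuming the default per-user log location described in Ollama's troubleshooting docs:

```powershell
# Tail the Ollama server log (default location on Windows).
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 50 -Wait
```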


@jpablo-ortiz commented on GitHub (Feb 17, 2024):

I understand, but in my case I previously used Ollama from Ubuntu on Windows with WSL, and it was perfect and used the GPU.


@dhiltgen commented on GitHub (Feb 17, 2024):

@jpablo-ortiz are you running an AMD Radeon card, perhaps? We only support NVIDIA in the native Windows build right now.

If that's not it, please open a new issue and attach a server.log (ideally both from within WSL 2 and from the native Windows version, so we can see the differences).


@jpablo-ortiz commented on GitHub (Feb 17, 2024):

I was able to fix it by reinstalling ollama, and now it works perfectly. Thank you very much for your help.


@IvanSoregashi commented on GitHub (May 6, 2024):

Hello,
I have noticed that Ollama can be installed with winget:
`winget install --id=Ollama.Ollama -e`
https://winstall.app/apps/Ollama.Ollama

Is this official?


@BruceMacD commented on GitHub (May 6, 2024):

Hi @IvanSoregashi, the core team doesn't maintain the winget installation, but there are a few different third-party installation methods (like brew) that are maintained by community members.


Reference: github-starred/ollama#25943