[GH-ISSUE #1265] Correctly set up WSL environment to enable CUDA in building Ollama #62684

Closed
opened 2026-05-03 09:57:57 -05:00 by GiteaMirror · 2 comments

Originally created by @taweili on GitHub (Nov 24, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1265

After probing around the environment setup and the source code for a few days, I finally figured out how to correctly build Ollama with CUDA support under WSL.

  1. WSL, by default, includes Windows's PATH, so an `nvcc` from a Windows CUDA installation is visible inside the Linux environment.
  2. The path to the Linux CUDA toolkit probably isn't set in the environment.
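The shadowing described above can be demonstrated with throwaway stub scripts standing in for the real `nvcc` (the stub directories below are invented for illustration only):

```
#!/bin/sh
# Sketch: the first matching directory on PATH wins, so when the Linux CUDA
# bin directory is missing from PATH, a Windows nvcc reached via an appended
# Windows path gets picked up instead.
tmp=$(mktemp -d)
mkdir -p "$tmp/linux-cuda/bin" "$tmp/windows-cuda/bin"
printf '#!/bin/sh\necho linux-nvcc\n'   > "$tmp/linux-cuda/bin/nvcc"
printf '#!/bin/sh\necho windows-nvcc\n' > "$tmp/windows-cuda/bin/nvcc"
chmod +x "$tmp/linux-cuda/bin/nvcc" "$tmp/windows-cuda/bin/nvcc"

# Default WSL layout: Linux CUDA dir not on PATH, Windows path appended
out_default=$(PATH="/usr/bin:$tmp/windows-cuda/bin"; export PATH; nvcc)

# After the fix: Windows path dropped, the (stubbed) Linux CUDA dir prepended
out_fixed=$(PATH="$tmp/linux-cuda/bin:/usr/bin"; export PATH; nvcc)

echo "default PATH picks: $out_default"
echo "fixed PATH picks:   $out_fixed"
rm -rf "$tmp"
```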

To fix this:

  1. Take out the Windows path inclusion in `/etc/wsl.conf`:

```
[interop]
appendWindowsPath = false
```

  2. Set up Linux for CUDA development in your `~/.bashrc`:

```
# set up cuda
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
```

Then follow the build instructions to generate and build Ollama.
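After restarting WSL (e.g. `wsl --shutdown` from Windows) so the `/etc/wsl.conf` change takes effect, a quick sanity check can confirm which `nvcc` the build will see. This is a sketch that assumes the default `/usr/local/cuda` install location:

```
#!/bin/sh
# Sketch: classify which nvcc, if any, wins on the current PATH.
check_nvcc() {
    p=$(command -v nvcc || true)
    case "$p" in
        /usr/local/cuda/bin/*) echo "linux-cuda: $p" ;;
        /mnt/c/*)              echo "windows-nvcc-still-shadowing: $p" ;;
        "")                    echo "nvcc-not-found: check PATH in ~/.bashrc" ;;
        *)                     echo "other: $p" ;;
    esac
}
check_nvcc
```

Seeing a `/mnt/c/...` path here means the Windows `nvcc` is still shadowing the Linux one, i.e. the `appendWindowsPath = false` setting has not taken effect yet.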

Wow, it's totally awesome to be able to use the GPU with Ollama. 60+ tokens/minute on a Titan RTX!


@BruceMacD commented on GitHub (Nov 24, 2023):

Thanks for documenting this @taweili. For future reference, the compiled releases will also work with CUDA from WSL:
https://www.ollama.ai/download/linux

Closing this for now as there don't seem to be any pending actions, but it can be referenced by people searching in the future.


@taweili commented on GitHub (Nov 25, 2023):

@BruceMacD Thanks. I posted this mostly for the record, in case it's useful. I had been searching for a solution to Ollama not using the GPU in WSL since 0.1.10, and updating to 0.1.11 didn't help. I decided to compile the code myself and found that WSL's default path setup could be the problem.

I tried both releases and, looking at the issues posted here, I couldn't find a consistent answer on whether they're supposed to work with CUDA under WSL: some said they weren't, and some had it working.

I also noticed that my compiled binary is about 20% bigger than the released version. I haven't yet had a chance to investigate the difference.

Reference: github-starred/ollama#62684