[GH-ISSUE #8043] Running in WSL2 seems to be a little bit slow. #51653

Closed
opened 2026-04-28 20:42:21 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @cycleuser on GitHub (Dec 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8043

Sharing models downloaded under Windows to WSL2

Since I already have Ollama installed under Windows, I went ahead and installed Ollama inside WSL2 as well, and then symlinked the model directory from Windows into the Ollama directory under WSL.

sudo ln -s /mnt/c/Users/USERNAME/.ollama/ /usr/share/ollama/.ollama/

Then I can indeed use the models downloaded under Windows directly with Ollama inside the WSL2 Ubuntu.
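The linking step above can be sanity-checked in a scratch directory before touching the real paths. This is a minimal sketch: the temp directories stand in for the Windows `.ollama` directory and `/usr/share/ollama`, and `models.marker` is a made-up file standing in for the model data.

```shell
src="$(mktemp -d)"            # stands in for /mnt/c/Users/USERNAME/.ollama
dest="$(mktemp -d)"           # stands in for /usr/share/ollama
touch "$src/models.marker"    # pretend model data lives here
ln -s "$src" "$dest/.ollama"  # same shape as the sudo ln -s above
ls -L "$dest/.ollama"         # follows the link and lists the model store
```

If `ls -L` shows the expected contents, the same `ln -s` with the real paths should let the WSL2 Ollama see the Windows-downloaded models.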


But it seems a little bit slower than running directly under Windows.


Sorry, this doesn't seem to be a bug at all, but I couldn't figure out how to apply the WSL2 label to it.

So, is it caused by the I/O speed limitations of WSL2, or by something else?

Just curious.

OS

WSL2

GPU

Nvidia

CPU

AMD

Ollama version

0.5.1

GiteaMirror added the wsl, bug labels 2026-04-28 20:42:21 -05:00
Author
Owner

@rick-github commented on GitHub (Dec 11, 2024):

If you run ollama with --verbose, you will get quantifiable speed measurements.
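For context, the `--verbose` flag goes on `ollama run`, and after each reply it prints timing statistics. The snippet below mimics that output with made-up numbers (the real values come from `ollama run <model> --verbose "prompt"`) and pulls out the generation speed, which is the figure worth comparing between Windows and WSL2 runs.

```shell
# Fake sample in the shape of ollama's --verbose stats block
# (numbers are invented for illustration only).
sample='total duration:       4.2s
load duration:        1.1s
eval count:           128 token(s)
eval duration:        3.1s
eval rate:            41.29 tokens/s'

# Extract the generation speed line: split on "colon + spaces",
# keep the value of the line starting with "eval rate".
printf '%s\n' "$sample" | awk -F': *' '/^eval rate/ {print $2}'
```

Comparing the eval rate from the same model and prompt on both sides gives a concrete number instead of "seems a little bit slower".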

Author
Owner

@Salpingopharyngeus commented on GitHub (Dec 12, 2024):

Bit late to the party, but you might also consider running it via the native Windows app rather than within WSL.
While not exactly the same setup, I was running into huge speed bottlenecks running Ollama out of Docker through WSL2, and switching to the Windows app made life substantially easier: file reads from WSL go through the Plan 9 file share, whereas the Windows app reads files natively, caches them in RAM (unless disabled), and handles GPU usage automatically out of the box.
You can try this while leaving the models where they are, just by adding OLLAMA_MODELS as a Windows environment variable.
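A minimal sketch of that suggestion, assuming the models stay in the default Windows location (the path below is a placeholder; substitute the real user name):

```shell
# On Windows itself this would be done in PowerShell, persisting for
# new processes:
#   setx OLLAMA_MODELS "C:\Users\USERNAME\.ollama\models"
#
# Portable demonstration of the same idea: set the variable, then
# confirm a fresh child process sees it before restarting the app.
export OLLAMA_MODELS="/mnt/c/Users/USERNAME/.ollama/models"  # placeholder
sh -c 'echo "$OLLAMA_MODELS"'
```

After setting the variable, the Windows Ollama app needs to be restarted so it picks up the new model directory.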

Reference: github-starred/ollama#51653