[GH-ISSUE #3512] Experimental LLM Library Override does not appear to work on Windows #64202

Closed
opened 2026-05-03 16:35:30 -05:00 by GiteaMirror · 4 comments

Originally created by @lrq3000 on GitHub (Apr 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3512

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I tried the Experimental LLM Library Override on Windows via two means:

  • Temporary environment variable definition: SET OLLAMA_LLM_LIBRARY="cpu_avx2" & ollama run deepseek-coder
  • Permanent environment variable definition in the Windows System dialog.
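For context on why the first command fails: Windows cmd's `SET NAME=value` takes everything after the first `=` literally, including quotation marks and any space before `&`. A small Go sketch (a hypothetical illustrative helper, not ollama code) of what value that command actually sets:

```go
package main

import (
	"fmt"
	"strings"
)

// cmdSetValue mimics how Windows cmd's plain `SET NAME=value` parses the
// value: everything after the first '=' up to the command separator '&'
// is taken literally, quotes and trailing spaces included.
// Hypothetical helper for illustration only.
func cmdSetValue(command string) string {
	rest := command[strings.Index(command, "=")+1:]
	if i := strings.Index(rest, "&"); i >= 0 {
		rest = rest[:i]
	}
	return rest
}

func main() {
	// The quoted form sets the value WITH the quotes and a trailing space:
	fmt.Printf("%q\n", cmdSetValue(`SET OLLAMA_LLM_LIBRARY="cpu_avx2" & ollama run deepseek-coder`))
}
```

The printed value is `"cpu_avx2" ` (quotes and trailing space included), which does not match any library name.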

Both failed; in server.log I get the following error:

time=2024-04-06T10:11:46.333+02:00 level=INFO source=llm.go:147 msg="Invalid OLLAMA_LLM_LIBRARY \"cpu_avx2\" - not found"

And ollama proceeds to use my GPU.

See the full server.log attached:

server.log

What did you expect to see?

Ollama should be using cpu_avx2 instead of the GPU.

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Windows

Architecture

x86

Platform

No response

Ollama version

0.1.30

GPU

Nvidia

GPU info

Nvidia GeForce 3060 Laptop

CPU

Intel

Other software

Intel i7-12700H

GiteaMirror added the windows label 2026-05-03 16:35:30 -05:00

@dhiltgen commented on GitHub (Apr 12, 2024):

It looks like you included the quotation marks in the value, which is causing problems. If you remove the quotes it should work.


@lrq3000 commented on GitHub (Apr 17, 2024):

Thank you for the suggestion, @dhiltgen; good try, I almost facepalmed, but unfortunately the issue persists.

I tried:

SET OLLAMA_LLM_LIBRARY=cpu_avx2 & ollama run deepseek-coder:6.7b-instruct-q8_0

I got (ensuring the log was clean beforehand):
time=2024-04-17T02:26:58.335+02:00 level=INFO source=server.go:150 msg="Invalid OLLAMA_LLM_LIBRARY cpu_avx2 - not found"

What is strange is that the DLLs are correctly loaded beforehand:

time=2024-04-17T02:26:57.746+02:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v5.7 cpu cpu_avx]"

And this specific one is detected on my system:
time=2024-04-17T02:26:58.335+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"

Here is the whole log:

server.log


@dhiltgen commented on GitHub (Apr 23, 2024):

"Invalid OLLAMA_LLM_LIBRARY cpu_avx2 - not found"

It's subtle, but there's an extra space at the end of cpu_avx2 .

I'll add some hardening code to strip whitespace (and quotes), but if you set it to exactly the string cpu_avx2, without spaces or quotes, it should work.
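The hardening described above could be sketched as follows (an illustrative Go snippet under the assumption that trimming covers surrounding spaces, tabs, and single/double quotes; not the actual ollama implementation, and the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeLLMLibrary strips surrounding whitespace and quote characters
// from the OLLAMA_LLM_LIBRARY value, so that `"cpu_avx2"` or `cpu_avx2 `
// both resolve to the library name cpu_avx2.
func sanitizeLLMLibrary(raw string) string {
	return strings.Trim(raw, " \t\"'")
}

func main() {
	fmt.Println(sanitizeLLMLibrary(`"cpu_avx2"`)) // quotes from SET VAR="..."
	fmt.Println(sanitizeLLMLibrary("cpu_avx2 "))  // trailing space before &
}
```

Both calls print `cpu_avx2`, matching one of the dynamic libraries listed in the log.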


@lrq3000 commented on GitHub (Apr 23, 2024):

You are correct @dhiltgen , you've got very good eyes!

So it does indeed work with v0.1.32 using the following command (the whitespace between cpu_avx2 and & must be removed):

SET OLLAMA_LLM_LIBRARY=cpu_avx2& ollama run deepseek-coder:6.7b-instruct-q8_0

Thank you very much for implementing the fix to autotrim, it will make life easier!

Reference: github-starred/ollama#64202