[GH-ISSUE #4095] Is there a problem with the document? #64581

Closed
opened 2026-05-03 18:15:49 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @ggjk616 on GitHub (May 2, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4095

What is the issue?

Can you help me? In the documentation, I noticed the following statement: "You can set OLLAMA_LLM_LIBRARY to any of the available LLM libraries to bypass autodetection, so for example, if you have a CUDA card, but want to force the CPU LLM library with AVX2 vector support, use:
OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve"
But after setting OLLAMA_LLM_LIBRARY="cpu_avx2", the program still detects my GPU when loading the model, resulting in an error: Error: Post "https://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:56915->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
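One note that may be relevant here: the OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve form quoted from the docs is Unix shell syntax for setting a one-off environment variable, and Windows shells do not interpret that prefix, so the variable may never have reached the server in this setup. A minimal sketch of the Windows equivalents, reusing the variable name and value from the documentation quoted above:

PowerShell:
$env:OLLAMA_LLM_LIBRARY = "cpu_avx2"   # applies to the current session only
ollama serve

cmd.exe:
set OLLAMA_LLM_LIBRARY=cpu_avx2
ollama serve

If the Ollama tray app is already running on Windows, it holds port 11434, so it likely needs to be quit first for the freshly started ollama serve to pick up the variable.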

OS

Windows

GPU

AMD

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-05-03 18:15:49 -05:00
