[GH-ISSUE #9057] Ollama for Win 10 build 19044? #5896

Closed
opened 2026-04-12 17:13:57 -05:00 by GiteaMirror · 6 comments

Originally created by @sixt00 on GitHub (Feb 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9057

Originally assigned to: @dhiltgen on GitHub.

Hello, I have Win 10 Pro build 19042, and the Ollama requirement is build 19044. Is there any way to work around this? It doesn't start for me in cmd.
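As a quick sanity check on the reported requirement (the poster says Ollama wants build 19044; they are on 19042), the exact build number can be read programmatically. A minimal sketch using `golang.org/x/sys/windows` — `winver` or `cmd /c ver` report the same number:

```go
// buildcheck.go — print the Windows version and build number.
// Requires (on Windows): go get golang.org/x/sys/windows
package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

func main() {
	major, minor, build := windows.RtlGetNtVersionNumbers()
	fmt.Printf("Windows %d.%d build %d\n", major, minor, build)
	if build < 19044 {
		// 19044 is the minimum this issue reports, not a number verified here
		fmt.Println("below build 19044, the minimum reported in this issue")
	}
}
```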

GiteaMirror added the needs more info, bug, windows labels 2026-04-12 17:13:57 -05:00

@awlawl commented on GitHub (Feb 13, 2025):

Is it just hanging at the prompt? Ollama 0.5.7? I am having the same issue. According to Windows Update and winver I have the latest version of Windows 10: 19045.5487.

There isn't anything interesting in the logs; the server logs don't even show that it attempts to run anything.

I noticed that it still works when I use tooling like the VS Code Continue extension. That means the server portion is OK; this seems to be limited to the command-line tool. The same version also works fine on Mac.
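One way to separate the CLI from the server when the prompt hangs is to query the HTTP API directly. A minimal Go probe against the documented `/api/version` endpoint (just a sketch; `curl http://127.0.0.1:11434/api/version` does the same thing):

```go
// versionprobe.go — ask the local Ollama server for its version over HTTP,
// bypassing the CLI entirely. If this succeeds while `ollama run` hangs,
// the problem is in the client, not the server.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:11434/api/version")
	if err != nil {
		fmt.Println("server unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d: %s\n", resp.StatusCode, body)
}
```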


@sixt00 commented on GitHub (Feb 13, 2025):

I also tried turning off the Windows firewall and Windows Defender, but that didn't work. I could try disabling the antivirus too.


@mchiang0610 commented on GitHub (Feb 14, 2025):

Hey @sixt00, sorry about this. Could you help us track down this issue? Do you see anything in the logs? How did you install Ollama?

We don't do anything that would make Ollama stop working on that build, so it's definitely a bug.


@awlawl commented on GitHub (Feb 14, 2025):

I just installed 0.5.11. Here are my logs.

app.log:

time=2025-02-14T09:29:26.034-05:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2025-02-14T09:29:26.034-05:00 level=INFO source=lifecycle.go:19 msg="app config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\llm\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-14T09:29:26.059-05:00 level=INFO source=server.go:182 msg="unable to connect to server"
time=2025-02-14T09:29:26.059-05:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-02-14T09:29:26.095-05:00 level=INFO source=server.go:127 msg="started ollama server with pid 27836"
time=2025-02-14T09:29:26.095-05:00 level=INFO source=server.go:129 msg="ollama server logs C:\\Users\\allen\\AppData\\Local\\Ollama\\server.log"

After the fresh 0.5.11 install I ran these commands:
ollama ps
ollama --version
ollama list
ollama run llama3.2:1b (this just hung)
server.log:

2025/02/14 09:29:26 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\llm\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-14T09:29:26.168-05:00 level=INFO source=images.go:432 msg="total blobs: 43"
time=2025-02-14T09:29:26.170-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-14T09:29:26.172-05:00 level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.11)"
time=2025-02-14T09:29:26.172-05:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-14T09:29:26.172-05:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-14T09:29:26.172-05:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=0 threads=24
time=2025-02-14T09:29:26.365-05:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-47378615-cdca-1821-2865-e3c5c9a2ce4d library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3070 Ti" total="8.0 GiB" available="6.9 GiB"
[GIN] 2025/02/14 - 09:30:03 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/14 - 09:30:03 | 200 |            0s |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/02/14 - 09:30:08 | 200 |       534.1µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/02/14 - 09:30:14 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/14 - 09:30:14 | 200 |     36.0808ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/02/14 - 09:30:24 | 200 |            0s |       127.0.0.1 | HEAD     "/"

You can see the corresponding API HTTP calls in the logs for each command. The last HEAD call is from when I ran the run command.

I can still use tools like Continue, and an `ollama ps` shows that it is able to run local models just fine.
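Consistent with that, a model can be exercised without the interactive CLI at all via the documented `/api/generate` endpoint. A short Go sketch (model name taken from the report above; `"stream":false` returns the whole answer as one JSON object):

```go
// generateprobe.go — run a prompt against a local model via the REST API,
// sidestepping the interactive CLI that hangs.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload := []byte(`{"model":"llama3.2:1b","prompt":"Say hello.","stream":false}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d: %s\n", resp.StatusCode, body)
}
```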

OK, I just discovered something that is probably very relevant. This works fine in CMD, PowerShell, and Bash within VS Code. It only fails for me in Git Bash for Windows.

@sixt00 Can you confirm you are just using CMD and not bash?
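The Git Bash detail fits a known pattern: mintty (Git Bash's default terminal) connects programs to pipes rather than a real Windows console, so native binaries that switch into interactive terminal mode can block at exactly this point. Whether that is the cause here is an assumption, but the detection step is easy to illustrate; a minimal Go sketch (`golang.org/x/term` is a stand-in here, not necessarily what Ollama uses):

```go
// ttycheck.go — how a Go program typically detects an interactive terminal.
// Under plain Git Bash (mintty) stdin/stdout are pipes, so this prints false;
// from cmd or PowerShell it prints true.
// Requires: go get golang.org/x/term
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

func main() {
	fmt.Println("stdin is a terminal: ", term.IsTerminal(int(os.Stdin.Fd())))
	fmt.Println("stdout is a terminal:", term.IsTerminal(int(os.Stdout.Fd())))
}
```

If that is what is happening, prefixing the command with winpty (e.g. `winpty ollama run llama3.2:1b`) is the usual Git Bash workaround, since winpty gives the program a real console.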

Author
Owner

@sixt00 commented on GitHub (Feb 15, 2025):

Hi, before I noticed this problem I was getting an error when running Ollama. At first I thought it was because I only had 7 GB of free disk space, so I freed it up to 11.7 GB available. I open cmd and type `ollama start` and it does nothing; then I type `ollama serve` and it gives me this message:

Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

If I type `ollama run` I get Error: requires at least 1 arg(s), only received 0
Available Commands:
serve Start ollama
create Create a model from a Modelfile
show Show information for a model
run Run a model
stop Stop a running model
pull Pull a model from a registry
push Push a model to a registry
list List models
ps List running models
cp Copy a model
rm Remove a model
help Help about any command

When I execute `ollama start` it doesn't work directly.
I don't know what else to do.
Yes, I only use cmd.
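For what it's worth, the `bind: Only one usage of each socket address` error is not a second failure: it means something (most likely the Ollama tray app started at login) is already listening on 127.0.0.1:11434, so a manual `ollama serve` is refused. A quick check, sketched in Go (`netstat -ano | findstr 11434` in cmd shows the same thing):

```go
// portcheck.go — check whether something is already listening on Ollama's
// default port. If Listen fails, an instance is already running and a
// second `ollama serve` will hit the same bind error.
package main

import (
	"fmt"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:11434")
	if err != nil {
		fmt.Println("port busy (a server is likely already running):", err)
		return
	}
	ln.Close()
	fmt.Println("port 11434 is free; no server is listening there")
}
```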

Author
Owner

@dhiltgen commented on GitHub (Jul 4, 2025):

Are you still seeing this on the latest Ollama version (0.9.5)?

Reference: github-starred/ollama#5896