[GH-ISSUE #2596] Unable to launch on Windows 10 #63567

Closed
opened 2026-05-03 14:12:53 -05:00 by GiteaMirror · 7 comments

Originally created by @CaptainCursor on GitHub (Feb 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2596

Originally assigned to: @dhiltgen on GitHub.

Attachments: [app.log](https://github.com/ollama/ollama/files/14334822/app.log), [server.log](https://github.com/ollama/ollama/files/14334823/server.log)

I have downloaded ollama and it starts and downloads manifests fine.

When I go to run the server I get:

```
Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:49855->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
```

I have disabled all firewalls I can and tried setting environment variables (probably incorrectly), but this does not appear to make a difference.

I have asked multiple times for help on Discord but have not even been acknowledged.


@dhiltgen commented on GitHub (Feb 19, 2024):

From the logs, it looks like you hit #2527 - your CPU only supports AVX, but we mistakenly built the GPU libraries with AVX2. We'll get this fixed in the next release.

```
time=2024-02-19T13:59:58.880Z level=INFO source=cpu_common.go:15 msg="CPU has AVX"
...
[1708351199] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | 
[1708351199] Performing pre-initialization of GPU
Exception 0xc000001d 0x0 0x0 0x7ffdd3ded257
PC=0x7ffdd3ded257
signal arrived during external code execution
```
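The exception code in the crash dump above identifies the failure mode directly. A minimal sketch that decodes it (the lookup table is a hand-picked subset of documented Windows NTSTATUS codes, not an exhaustive mapping):

```python
# Decode the NTSTATUS exception code from a Windows crash log.
# 0xC000001D is STATUS_ILLEGAL_INSTRUCTION: a CPU without AVX2 raises it
# the moment it executes an AVX2 instruction, which matches the log above.
NTSTATUS_NAMES = {
    0xC000001D: "STATUS_ILLEGAL_INSTRUCTION",
    0xC0000005: "STATUS_ACCESS_VIOLATION",
    0xC00000FD: "STATUS_STACK_OVERFLOW",
}

def decode_exception(code: int) -> str:
    """Return the symbolic name for a known NTSTATUS code."""
    return NTSTATUS_NAMES.get(code, f"unrecognized (0x{code:08X})")

print(decode_exception(0xC000001D))  # STATUS_ILLEGAL_INSTRUCTION
```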

@CaptainCursor commented on GitHub (Feb 19, 2024):

Bless you sir.

Thank you for taking the time to look and reply.

My apologies for my rubbish PC and its lack of AVX2 support. My 2019 MacBook Pro is working wonderfully!

Regards

Simon



@ASHISH-1793 commented on GitHub (Dec 10, 2024):

Hello @dhiltgen, greetings of the day. I am also facing the same issue. Has any release since been made to support this? I don't have a dedicated GPU.

```
time=2024-12-10T18:13:28.825+05:30 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-12-10T18:13:28.825+05:30 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-10T18:13:28.826+05:30 level=INFO source=routes.go:1246 msg="Listening on 127.0.0.1:11434 (version 0.5.1)"
time=2024-12-10T18:13:28.827+05:30 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 rocm cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-12-10T18:13:28.827+05:30 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-12-10T18:13:28.827+05:30 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-12-10T18:13:28.827+05:30 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=2 efficiency=0 threads=4
time=2024-12-10T18:13:28.838+05:30 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-12-10T18:13:28.839+05:30 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="11.9 GiB" available="6.9 GiB"
```


@dhiltgen commented on GitHub (Dec 10, 2024):

@ASHISH-1793 I don't see a problem reported in your log, and it is unrelated to this issue, where we built the GPU runners with AVX2 accidentally on release v0.1.25. If you don't have a discrete GPU, then `"inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="11.9 GiB" available="6.9 GiB"` sounds like it's likely correct.
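For readers trying to tell whether their CPU is affected by the AVX2 build issue discussed above, here is a hedged sketch of a capability check. It uses the documented Windows API `kernel32.IsProcessorFeaturePresent` with `PF_AVX2_INSTRUCTIONS_AVAILABLE` (constant 40 in winnt.h); note that older Windows builds do not recognize this constant, so a `False` result there is not conclusive, and on non-Windows platforms the sketch simply returns `None`.

```python
import ctypes
import sys

# PF_AVX2_INSTRUCTIONS_AVAILABLE = 40 is the winnt.h constant for the
# kernel32 IsProcessorFeaturePresent() query. Older Windows builds do not
# recognize it and will report False even on AVX2-capable CPUs.
PF_AVX2_INSTRUCTIONS_AVAILABLE = 40

def has_avx2():
    """Return True/False on Windows; None where the check is unavailable."""
    if sys.platform != "win32":
        return None
    kernel32 = ctypes.windll.kernel32
    return bool(kernel32.IsProcessorFeaturePresent(PF_AVX2_INSTRUCTIONS_AVAILABLE))

print(has_avx2())
```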


@ASHISH-1793 commented on GitHub (Dec 10, 2024):

Not sure what's going wrong. I have a Windows 10 64-bit machine. I installed Ollama but it doesn't launch. I tried reinstalling and launching with admin access, but nothing seems to be working.
Regards, Ashish Bansal



@dhiltgen commented on GitHub (Dec 10, 2024):

> Ollama but it doesn't launch.

Please clarify what you mean? Do you get an error from the CLI when you try to pull or run models? Does the tray icon not start up?


@ASHISH-1793 commented on GitHub (Dec 10, 2024):

No errors are coming. The tray icon appears and stays, but the GUI interface doesn't come up.
Regards, Ashish Bansal



Reference: github-starred/ollama#63567