[GH-ISSUE #2884] 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it. #48274

Closed
opened 2026-04-28 07:31:57 -05:00 by GiteaMirror · 16 comments

Originally created by @tommcg on GitHub (Mar 2, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2884

Originally assigned to: @dhiltgen on GitHub.

I'm receiving the following error on Windows 10. I've closed all of the command prompts and quit the Ollama app via the icon in the systray.

Error: Post "http://127.0.0.1:11434/api/chat": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.

I can keep launching Ollama over and over, and the icons pile up in the systray.

Can someone help me find the process that is holding the localhost connection, so I can kill it and restart Ollama?

Thanks.
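For anyone else hitting this, a quick sketch of how to find what is still running or holding the port on Windows (assuming PowerShell 5+; the tray app is typically `ollama app.exe` and the server `ollama.exe`, but verify the process names before killing anything):

```
# List lingering Ollama processes (the piled-up tray icons)
Get-Process -Name "ollama*" -ErrorAction SilentlyContinue

# See which process, if any, owns the listener on the default port 11434
Get-NetTCPConnection -LocalPort 11434 -State Listen -ErrorAction SilentlyContinue |
    ForEach-Object { Get-Process -Id $_.OwningProcess }

# Stop the stragglers, then relaunch Ollama
Get-Process -Name "ollama*" -ErrorAction SilentlyContinue | Stop-Process -Force
```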

GiteaMirror added the bug label 2026-04-28 07:31:57 -05:00
GiteaMirror added the windows label 2026-04-28 07:32:58 -05:00

@dhiltgen commented on GitHub (Mar 6, 2024):

What version are you running? We had a bug a few versions back that would allow multiple copies to start. Can you try uninstalling and re-installing the latest version to see if that clears up your problem?
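For reference, a quick way to check the installed version from a terminal:

```
ollama -v
```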

@jmorganca commented on GitHub (Mar 12, 2024):

Hi there, this should be fixed on Ollama 0.1.28 and later – would it be possible to try this version and let us know if the problem persists? You can download it here https://ollama.com/download/windows. Thanks so much!

@HWiwoiiii commented on GitHub (Mar 16, 2024):

Yes, the problem still exists.

![image](https://github.com/ollama/ollama/assets/103039908/3bb836fe-cd5f-4f7c-9d0a-171062f124fd)

@dhiltgen commented on GitHub (Mar 18, 2024):

@HWiwoiiii can you share your server logs?

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
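For anyone unsure where those live: per the troubleshooting doc linked above, on Windows the logs are kept under `%LOCALAPPDATA%\Ollama`. A quick way to grab the tail of the server log (assuming a default install; `-Tail` needs PowerShell 3+):

```
# Open the log folder in Explorer
explorer $env:LOCALAPPDATA\Ollama

# Or print the last 50 lines of the server log
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 50
```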

@s1mple-donk commented on GitHub (Apr 26, 2024):

> What version are you running? We had a bug a few versions back that would allow multiple copies to start. Can you try uninstalling and re-installing the latest version to see if that clears up your problem?

Hello, my Windows version is Windows 10 Professional 22H2. On March 1, I downloaded the Ollama 0.1.27 installation package, and everything ran normally after installation. But today I updated Ollama to version 0.1.32, and both the problem described in this issue and the multiple-copies startup problem you mentioned appeared.

@dhiltgen commented on GitHub (Apr 28, 2024):

@s1mple-donk the pre-release for 0.1.33 is available now, which likely will resolve the problem. Please give it a try.

@s1mple-donk commented on GitHub (Apr 29, 2024):

> @s1mple-donk the pre-release for 0.1.33 is available now, which likely will resolve the problem. Please give it a try.

Hi, it still doesn't work with 0.1.33-rc5:

![image](https://github.com/ollama/ollama/assets/165746537/c10e6de0-be9a-4759-8335-1e9a5afec77b)
![image](https://github.com/ollama/ollama/assets/165746537/7eb4edf3-8832-4dc9-b941-e1d211622d1a)
![image](https://github.com/ollama/ollama/assets/165746537/0fb78a32-7bf0-4450-825d-21c8e8299a13)

@dhiltgen commented on GitHub (May 1, 2024):

@s1mple-donk can you share your server logs?

@s1mple-donk commented on GitHub (May 6, 2024):

@dhiltgen

![image](https://github.com/ollama/ollama/assets/165746537/37ffeb84-d6e5-4a6d-96a8-173d936adacc)

@dhiltgen commented on GitHub (May 7, 2024):

@s1mple-donk it looks like something is going wrong while we're probing the GPU information on the system. What kind of GPU do you have?

We also just shipped 0.1.34, so it might be good to upgrade to that just to make sure we're not chasing a ghost bug that was already fixed.

Assuming it's still not behaving properly, please quit the tray application so the server isn't constantly restarting, and in a PowerShell terminal run:

```
$env:OLLAMA_DEBUG="1"
ollama serve 2>&1 | % ToString | Tee-Object server.log
```

From the log you shared above, I'm expecting that will run for a moment, then crash, and hopefully shed more light on where/why. If it doesn't crash, then in another terminal, try running `ollama run llama3 hello world`.
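If the server does stay up, one low-tech way to confirm it is actually reachable before exercising a model (assuming the default bind address; the root endpoint answers with a plain "Ollama is running" banner):

```
# Should print: Ollama is running
Invoke-RestMethod http://127.0.0.1:11434/
```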

@s1mple-donk commented on GitHub (May 20, 2024):

@dhiltgen Hi, it does crash after running:

```
$env:OLLAMA_DEBUG="1"
ollama serve 2>&1 | % ToString | Tee-Object server.log
```

The output is as follows:

```
failed to get console mode for stdout: The handle is invalid.
failed to get console mode for stderr: The handle is invalid.
2024/05/20 16:18:28 routes.go:1008: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR:C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_TMPDIR:]"
time=2024-05-20T16:18:28.102+08:00 level=INFO source=images.go:704 msg="total blobs: 5"
time=2024-05-20T16:18:28.102+08:00 level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-20T16:18:28.103+08:00 level=INFO source=routes.go:1054 msg="Listening on 127.0.0.1:11434 (version 0.1.38)"
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\ollama_runners\cpu
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\Administrator\AppData\Local\Programs\Ollama\ollama_runners\rocm_v5.7
time=2024-05-20T16:18:28.103+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v5.7 cpu cpu_avx cpu_avx2 cuda_v11.3]"
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=sched.go:90 msg="starting llm scheduler"
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=gpu.go:122 msg="Detecting GPUs"
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=gpu.go:261 msg="Searching for GPU library" name=nvcuda.dll
time=2024-05-20T16:18:28.103+08:00 level=DEBUG source=gpu.go:280 msg="gpu library search" globs="[C:\\windows\\system32\\nvcuda.dll* C:\\windows\\nvcuda.dll* C:\\windows\\System32\\Wbem\\nvcuda.dll* C:\\windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\windows\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvcuda.dll* C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\Administrator\\nvcuda.dll* C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-05-20T16:18:28.105+08:00 level=DEBUG source=gpu.go:285 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-05-20T16:18:28.106+08:00 level=DEBUG source=gpu.go:313 msg="discovered GPU libraries" paths=[C:\windows\system32\nvcuda.dll]
```
@dhiltgen commented on GitHub (May 23, 2024):

@s1mple-donk to clarify, did the server crash or exit at the end of this run? What you pasted looks like a normal startup up to that point. What I would have expected to come next is something like this:

```
...
time=2024-05-23T09:51:53.988-07:00 level=DEBUG source=gpu.go:386 msg="discovered GPU libraries" paths=[C:\windows\system32\nvcuda.dll]
CUDA driver version: 12.3
time=2024-05-23T09:51:54.011-07:00 level=DEBUG source=gpu.go:146 msg="detected GPUs" count=1 library=C:\windows\system32\nvcuda.dll
[GPU-13b3d4ff-808b-ab50-e395-de65e58aa716] CUDA totalMem 24563 mb
[GPU-13b3d4ff-808b-ab50-e395-de65e58aa716] CUDA freeMem 23008 mb
[GPU-13b3d4ff-808b-ab50-e395-de65e58aa716] Compute Capability 8.9
...
```

If that doesn't come, this may imply that nvcuda.dll is crashing during initialization. Are other GPU apps working properly on your setup? What does `nvidia-smi.exe` report?

@s1mple-donk commented on GitHub (May 24, 2024):

@dhiltgen To be precise, the server did not start. My PC's GPU is an AMD Radeon RX 5600, not an NVIDIA GPU.

@dhiltgen commented on GitHub (May 25, 2024):

@s1mple-donk in that case, do other GPU apps that use the Radeon GPU work (`rocminfo`, etc.)?

I would have expected more output, but it sounds like we may be crashing when trying to access amdhip64.dll to query the GPU. This may be a bug in the AMD ROCm libraries with this specific GPU.

That said, I believe your GPU is a gfx1010, and support for that is tracked via #2503

@s1mple-donk commented on GitHub (May 27, 2024):

@dhiltgen thank you! You are right, `rocminfo` does not work.
Ollama v0.1.27 works on my Windows PC; judging from the logs, it uses pure CPU inference and never touches the GPU. Could you please tell me what changed in subsequent versions that causes GPU detection to fail and crash?
Also, could you make the service start normally even if GPU detection fails or the GPU is unsupported, and simply fall back to CPU-only inference, as v0.1.27 does?
Here are the v0.1.27 server logs:

```
time=2024-05-20T16:31:07.510+08:00 level=INFO source=images.go:710 msg="total blobs: 5"
time=2024-05-20T16:31:07.515+08:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-05-20T16:31:07.516+08:00 level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.27)"
time=2024-05-20T16:31:07.516+08:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-05-20T16:31:07.658+08:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v11.3 cpu_avx cpu_avx2 cpu]"
[GIN] 2024/05/20 - 16:34:10 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/05/20 - 16:34:10 | 200 |      1.5311ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/05/20 - 16:52:53 | 200 |       509.9µs |       127.0.0.1 | GET      "/api/tags"
time=2024-05-20T16:53:15.704+08:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-05-20T16:53:15.704+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library nvml.dll"
time=2024-05-20T16:53:15.710+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [c:\\Windows\\System32\\nvml.dll C:\\windows\\system32\\nvml.dll]"
time=2024-05-20T16:53:15.727+08:00 level=INFO source=gpu.go:323 msg="Unable to load CUDA management library c:\\Windows\\System32\\nvml.dll: nvml vram init failure: 9"
time=2024-05-20T16:53:15.727+08:00 level=INFO source=gpu.go:323 msg="Unable to load CUDA management library C:\\windows\\system32\\nvml.dll: nvml vram init failure: 9"
time=2024-05-20T16:53:15.727+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library rocm_smi64.dll"
time=2024-05-20T16:53:15.732+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-05-20T16:53:15.732+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-20T16:53:15.732+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-20T16:53:15.732+08:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-05-20T16:53:15.732+08:00 level=INFO source=dyn_ext_server.go:385 msg="Updating PATH to C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\ollama196047631\\cpu_avx2;C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama;C:\\windows\\system32;C:\\windows;C:\\windows\\System32\\Wbem;C:\\windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\windows\\System32\\OpenSSH\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Users\\Administrator\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Users\\Administrator\\AppData\\Local\\Programs\\Ollama "
time=2024-05-20T16:53:15.734+08:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\ollama196047631\\cpu_avx2\\ext_server.dll"
time=2024-05-20T16:53:15.734+08:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from C:\Users\Administrator\.ollama\models\blobs\sha256-4fed7364ee3e0c7cb4fe0880148bfdfcd1b630981efa0802a6b62ee52e7da97e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32064
llama_model_loader: - kv   3:                       llama.context_length u32              = 4096
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 96
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 15
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,32064]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,32064]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 32000
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens definition check successful ( 323/32064 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32064
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 96
llm_load_print_meta: n_embd_head_k    = 96
llm_load_print_meta: n_embd_head_v    = 96
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3072
llm_load_print_meta: n_embd_v_gqa     = 3072
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.82 B
llm_load_print_meta: model size       = 2.16 GiB (4.85 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 32000 '<|endoftext|>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors:        CPU buffer size =  2210.78 MiB
.................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   768.00 MiB
llama_new_context_with_model: KV self size  =  768.00 MiB, K (f16):  384.00 MiB, V (f16):  384.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    11.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   152.00 MiB
llama_new_context_with_model: graph splits (measure): 1
time=2024-05-20T16:53:20.582+08:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
[GIN] 2024/05/20 - 16:54:58 | 200 |         1m43s |       127.0.0.1 | POST     "/api/generate"
time=2024-05-20T17:00:10.843+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-20T17:00:10.843+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-20T17:00:10.843+08:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-05-20T17:00:10.843+08:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\ollama196047631\\cpu_avx2\\ext_server.dll"
```

@dhiltgen commented on GitHub (May 28, 2024):

@s1mple-donk we changed which library we use to discover the GPUs, and it looks like the ROCm library is crashing on your system. As a workaround until you can fix the ROCm library, you can force the CPU runner as described here: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#llm-libraries
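A minimal sketch of that workaround, using one of the runner names from the startup log above (e.g. `cpu_avx2`; pick whichever your CPU supports):

```
# Skip GPU discovery entirely and force a CPU runner
$env:OLLAMA_LLM_LIBRARY="cpu_avx2"
ollama serve
```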

Reference: github-starred/ollama#48274