[GH-ISSUE #4984] Ollama not using GPU after OS Reboot #65188

Closed
opened 2026-05-03 19:57:38 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @lukasmwerner on GitHub (Jun 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4984

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

After installing Ollama from ollama.com, it is able to use my GPU, but after rebooting it can no longer find the GPU, and it logs the following:

```
CUDA driver version: 12-5
time=2024-06-11T11:46:56.544-07:00 level=DEBUG source=gpu.go:148 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_30_9.dll" count=1
time=2024-06-11T11:46:56.545-07:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
[GPU-ffffffff-0000-0000-00c0-000000000000] CUDA totalMem 4294967295
[GPU-ffffffff-0000-0000-00c0-000000000000] CUDA freeMem 3617587199
[GPU-ffffffff-0000-0000-00c0-000000000000] Compute Capability 1.0
time=2024-06-11T11:46:56.635-07:00 level=INFO source=gpu.go:214 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-06-11T11:46:56.636-07:00 level=DEBUG source=amd_windows.go:31 msg="unable to load amdhip64.dll: The specified module could not be found."
```

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.1.42

GiteaMirror added the bug and windows labels 2026-05-03 19:57:38 -05:00

@AncientMystic commented on GitHub (Jun 12, 2024):

Did your GPU driver update or something? There was an issue with Ollama not working with the latest NVIDIA drivers, which I'm not sure has been resolved yet, so one possibility is that the driver updated on reboot and Ollama isn't seeing the GPU because of the new driver.

You could check your driver version to make sure that didn't happen, and/or reinstall Ollama, and see if either of those fixes your problem.

If not, hopefully someone else can provide more assistance.

<!-- gh-comment-id:2161862100 -->

@lukasmwerner commented on GitHub (Jun 12, 2024):

I am running on the latest driver now; however, the driver recommended in #4563 had the exact same behavior.

The thing that confuses me is that the 555 driver works only when it is freshly installed, but not after a reboot.

<!-- gh-comment-id:2163583240 -->

@dhiltgen commented on GitHub (Jun 13, 2024):

The `PhysX` runtime library is known to cause problems, and we added logic to [skip that library](https://github.com/ollama/ollama/blob/main/gpu/gpu.go#L296-L299) for GPU discovery, so I'm confused why it's showing up again.

Can you try running the server with debug enabled so we can see why this library is getting picked up incorrectly? Quit the tray application, then in a PowerShell terminal run:

```
$env:OLLAMA_DEBUG="1"
ollama serve 2>&1 | % ToString | Tee-Object server.log
```
<!-- gh-comment-id:2166418790 -->

@lukasmwerner commented on GitHub (Jun 13, 2024):

Sure thing!

failed to get console mode for stdout: The handle is invalid.
failed to get console mode for stderr: The handle is invalid.
2024/06/13 13:05:28 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\lukas\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_TMPDIR:]"
time=2024-06-13T13:05:28.939-07:00 level=INFO source=images.go:740 msg="total blobs: 10"
time=2024-06-13T13:05:28.940-07:00 level=INFO source=images.go:747 msg="total unused blobs removed: 0"
time=2024-06-13T13:05:28.941-07:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.43)"
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\lukas\AppData\Local\Programs\Ollama\ollama_runners\cpu
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\lukas\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\lukas\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\lukas\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\lukas\AppData\Local\Programs\Ollama\ollama_runners\rocm_v5.7
time=2024-06-13T13:05:28.941-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v5.7 cpu cpu_avx]"
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=sched.go:90 msg="starting llm scheduler"
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=gpu.go:132 msg="Detecting GPUs"
time=2024-06-13T13:05:28.941-07:00 level=DEBUG source=gpu.go:274 msg="Searching for GPU library" name=nvcuda.dll
time=2024-06-13T13:05:28.942-07:00 level=DEBUG source=gpu.go:293 msg="gpu library search" globs="[C:\\Program Files\\PowerShell\\7\\nvcuda.dll* C:\\Program Files\\Oculus\\Support\\oculus-runtime\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\nvcuda.dll* C:\\Users\\lukas\\.cargo\\bin\\nvcuda.dll* B:\\Program Files\\ARM\\10 2020-q4-major\\bin\\nvcuda.dll* C:\\Program Files\\Java\\jdk-13.0.1\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\lukas\\.dotnet\\tools\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Android\\Sdk\\platform-tools\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\GitHubDesktop\\bin\\nvcuda.dll* C:\\Users\\lukas\\.deno\\bin\\nvcuda.dll* B:\\Program Files\\CMake\\bin\\nvcuda.dll* C:\\Program Files (x86)\\GitHub CLI\\nvcuda.dll* C:\\Users\\lukas\\go\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Roaming\\npm\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\JetBrains\\Toolbox\\scripts\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Program Files\\Neovim\\bin\\nvcuda.dll* C:\\mingw64\\bin\\nvcuda.dll* C:\\Program Files\\Git\\cmd\\nvcuda.dll* C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn\\nvcuda.dll* C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn\\nvcuda.dll* C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\nvcuda.dll* C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn\\nvcuda.dll* C:\\Program Files (x86)\\Windows Kits\\8.1\\Windows Performance Toolkit\\nvcuda.dll* C:\\Program Files\\Tailscale\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\NVIDIA App\\NvDLISR\\nvcuda.dll* C:\\Program Files\\Go\\bin\\nvcuda.dll* C:\\Program Files\\dotnet\\nvcuda.dll* C:\\WINDOWS\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files 
(x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\nvcuda.dll* C:\\Users\\lukas\\.cargo\\bin\\nvcuda.dll* B:\\Program Files\\ARM\\10 2020-q4-major\\bin\\nvcuda.dll* C:\\Program Files\\Java\\jdk-13.0.1\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\lukas\\.dotnet\\tools\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Android\\Sdk\\platform-tools\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\GitHubDesktop\\bin\\nvcuda.dll* C:\\Users\\lukas\\.deno\\bin\\nvcuda.dll* B:\\Program Files\\CMake\\bin\\nvcuda.dll* C:\\Program Files (x86)\\GitHub CLI\\nvcuda.dll* C:\\Users\\lukas\\go\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Roaming\\npm\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\JetBrains\\Toolbox\\scripts\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\mingw64\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WinGet\\Packages\\schollz.croc_Microsoft.Winget.Source_8wekyb3d8bbwe\\nvcuda.dll* C:\\Users\\lukas\\go\\bin\\nvcuda.dll* C:\\Program Files\\Neovim\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WinGet\\Packages\\zig.zig_Microsoft.Winget.Source_8wekyb3d8bbwe\\zig-windows-x86_64-0.12.0\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WinGet\\Packages\\Gyan.FFmpeg_Microsoft.Winget.Source_8wekyb3d8bbwe\\ffmpeg-7.0.1-full_build\\bin\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Ollama\\nvcuda.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-06-13T13:05:28.946-07:00 level=DEBUG source=gpu.go:298 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-06-13T13:05:28.949-07:00 level=DEBUG source=gpu.go:326 msg="discovered GPU libraries" paths=[]
time=2024-06-13T13:05:28.949-07:00 level=DEBUG source=gpu.go:274 msg="Searching for GPU library" name=cudart64_*.dll
time=2024-06-13T13:05:28.949-07:00 level=DEBUG source=gpu.go:293 msg="gpu library search" globs="[C:\\Program Files\\PowerShell\\7\\cudart64_*.dll* C:\\Program Files\\Oculus\\Support\\oculus-runtime\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\cudart64_*.dll* C:\\Users\\lukas\\.cargo\\bin\\cudart64_*.dll* B:\\Program Files\\ARM\\10 2020-q4-major\\bin\\cudart64_*.dll* C:\\Program Files\\Java\\jdk-13.0.1\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\cudart64_*.dll* C:\\Users\\lukas\\.dotnet\\tools\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Android\\Sdk\\platform-tools\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\GitHubDesktop\\bin\\cudart64_*.dll* C:\\Users\\lukas\\.deno\\bin\\cudart64_*.dll* B:\\Program Files\\CMake\\bin\\cudart64_*.dll* C:\\Program Files (x86)\\GitHub CLI\\cudart64_*.dll* C:\\Users\\lukas\\go\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Roaming\\npm\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\JetBrains\\Toolbox\\scripts\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\Program Files\\Neovim\\bin\\cudart64_*.dll* C:\\mingw64\\bin\\cudart64_*.dll* C:\\Program Files\\Git\\cmd\\cudart64_*.dll* C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn\\cudart64_*.dll* C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn\\cudart64_*.dll* C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\cudart64_*.dll* C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn\\cudart64_*.dll* C:\\Program Files (x86)\\Windows Kits\\8.1\\Windows Performance Toolkit\\cudart64_*.dll* C:\\Program Files\\Tailscale\\cudart64_*.dll* C:\\Program Files\\NVIDIA Corporation\\NVIDIA App\\NvDLISR\\cudart64_*.dll* C:\\Program 
Files\\Go\\bin\\cudart64_*.dll* C:\\Program Files\\dotnet\\cudart64_*.dll* C:\\WINDOWS\\System32\\OpenSSH\\cudart64_*.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Python\\Python311\\cudart64_*.dll* C:\\Users\\lukas\\.cargo\\bin\\cudart64_*.dll* B:\\Program Files\\ARM\\10 2020-q4-major\\bin\\cudart64_*.dll* C:\\Program Files\\Java\\jdk-13.0.1\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\cudart64_*.dll* C:\\Users\\lukas\\.dotnet\\tools\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Android\\Sdk\\platform-tools\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\GitHubDesktop\\bin\\cudart64_*.dll* C:\\Users\\lukas\\.deno\\bin\\cudart64_*.dll* B:\\Program Files\\CMake\\bin\\cudart64_*.dll* C:\\Program Files (x86)\\GitHub CLI\\cudart64_*.dll* C:\\Users\\lukas\\go\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Roaming\\npm\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\JetBrains\\Toolbox\\scripts\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\mingw64\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WinGet\\Packages\\schollz.croc_Microsoft.Winget.Source_8wekyb3d8bbwe\\cudart64_*.dll* C:\\Users\\lukas\\go\\bin\\cudart64_*.dll* C:\\Program Files\\Neovim\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WinGet\\Packages\\zig.zig_Microsoft.Winget.Source_8wekyb3d8bbwe\\zig-windows-x86_64-0.12.0\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Microsoft\\WinGet\\Packages\\Gyan.FFmpeg_Microsoft.Winget.Source_8wekyb3d8bbwe\\ffmpeg-7.0.1-full_build\\bin\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Ollama\\cudart64_*.dll* 
C:\\Users\\lukas\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll* C:\\Users\\lukas\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v*\\bin\\cudart64_*.dll]"
time=2024-06-13T13:05:28.954-07:00 level=DEBUG source=gpu.go:298 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll*"
time=2024-06-13T13:05:28.957-07:00 level=DEBUG source=gpu.go:326 msg="discovered GPU libraries" paths="[C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_30_9.dll C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_65.dll C:\\Users\\lukas\\AppData\\Local\\Programs\\Ollama\\cudart64_110.dll]"
time=2024-06-13T13:05:28.979-07:00 level=DEBUG source=gpu.go:148 msg="detected GPUs" library="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_30_9.dll" count=1
time=2024-06-13T13:05:28.979-07:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-06-13T13:05:29.058-07:00 level=INFO source=gpu.go:214 msg="[0] CUDA GPU is too old. Compute Capability detected: 1.0"
time=2024-06-13T13:05:29.059-07:00 level=DEBUG source=amd_windows.go:31 msg="unable to load amdhip64.dll: The specified module could not be found."
time=2024-06-13T13:05:29.061-07:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="15.9 GiB" available="8.7 GiB"

From what I can tell, the `cudart64_*.dll` search is the one that is bugging out.
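The log above shows the "skipping PhysX cuda library path" message firing on the search *pattern*, while the expanded results still include the PhysX DLLs (`cudart64_30_9.dll`, `cudart64_65.dll`). One way to catch this is to filter the expanded paths rather than the glob patterns. This is a minimal sketch with a hypothetical `filterPhysX` helper, not Ollama's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// filterPhysX drops any discovered library path that lives under the
// NVIDIA PhysX redistributable directory, which ships very old cudart64
// builds (e.g. cudart64_30_9.dll) that report Compute Capability 1.0.
// Hypothetical helper for illustration; the comparison is case-insensitive
// because Windows paths are case-insensitive.
func filterPhysX(paths []string) []string {
	kept := make([]string, 0, len(paths))
	for _, p := range paths {
		if strings.Contains(strings.ToLower(p), `\physx\`) {
			continue // skip PhysX copies of the CUDA runtime
		}
		kept = append(kept, p)
	}
	return kept
}

func main() {
	// The three paths reported as "discovered GPU libraries" in the log above.
	discovered := []string{
		`C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_30_9.dll`,
		`C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_65.dll`,
		`C:\Users\lukas\AppData\Local\Programs\Ollama\cudart64_110.dll`,
	}
	for _, p := range filterPhysX(discovered) {
		fmt.Println(p) // only the bundled Ollama cudart64_110.dll survives
	}
}
```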

<!-- gh-comment-id:2166676559 -->

@srchong commented on GitHub (Jun 18, 2024):

I have the same issue. I downloaded Ollama yesterday, but it does not find amdhip64.dll.

It is weird, because I'm on Intel :(

Reading a little in the class

https://github.com/ollama/ollama/blob/main/gpu/gpu_windows.go

the loading files I have are in another location...

D:\Program Files\bin

Looking at this file description...

![image](https://github.com/ollama/ollama/assets/61468749/a0d68849-a68a-4675-bf37-6fd8c6f6c015)

<!-- gh-comment-id:2176646940 -->

@srchong commented on GitHub (Jun 18, 2024):

Well, I tried to reinstall, but I have not had success:

```
time=2024-06-18T12:03:37.654-06:00 level=DEBUG source=gpu.go:293 msg="gpu library search" globs="[C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\bin\\nvcuda.dll* C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.5\\libnvvp\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Program Files\\Common Files\\Oracle\\Java\\javapath\\nvcuda.dll* C:\\Program Files\\Broadcom\\Broadcom 802.11 Network Adapter\\nvcuda.dll* C:\\Windows\\system32\\nvcuda.dll* C:\\Windows\\nvcuda.dll* C:\\Windows\\System32\\Wbem\\nvcuda.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\Windows\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Program Files\\dotnet\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\WINDOWS\\system32\\nvcuda.dll* C:\\WINDOWS\\nvcuda.dll* C:\\WINDOWS\\System32\\Wbem\\nvcuda.dll* C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\WINDOWS\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR\\nvcuda.dll* C:\\Program Files\\Git\\cmd\\nvcuda.dll* C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2024.2.0\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\macki\\.dotnet\\tools\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Users\\macki\\.detaspace\\bin\\nvcuda.dll* C:\\Users\\macki\\AppData\\Roaming\\npm\\nvcuda.dll* D:\\windows\\installations\\nodejs\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\Users\\macki\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-06-18T12:03:37.668-06:00 level=DEBUG source=gpu.go:298 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-06-18T12:03:37.692-06:00 level=DEBUG source=gpu.go:327 msg="discovered GPU libraries" paths="[C:\\Windows\\system32\\nvcuda.dll C:\\WINDOWS\\system32\\nvcuda.dll]"
time=2024-06-18T12:03:37.766-06:00 level=DEBUG source=gpu.go:137 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
time=2024-06-18T12:03:37.766-06:00 level=DEBUG source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-06-18T12:03:37.901-06:00 level=DEBUG source=amd_windows.go:31 msg="unable to load amdhip64.dll: The specified module could not be found."
```

@dhiltgen commented on GitHub (Jun 19, 2024):

@srchong a debug-level log line reporting that we couldn't find amdhip64.dll only means we won't try to run on AMD GPUs. If you don't have an AMD GPU, this is expected behavior. Can you share a more complete server log, including an attempt to load a model, so I can see why it's running on the CPU when it should be running on your CUDA GPU?
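For anyone gathering the logs requested here: verbose GPU-discovery output can be enabled with the documented `OLLAMA_DEBUG` environment variable before starting the server. A minimal sketch for a Linux/macOS shell (on Windows, set the variable in the environment before launching Ollama); the model name is only an example:

```shell
# Enable verbose (debug-level) logging, including GPU discovery details.
export OLLAMA_DEBUG=1
# Then start the server with stderr captured, and load a model from another
# shell to produce the relevant log lines, e.g.:
#   ollama serve 2> server.log
#   ollama run llama3 "hello"
```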


@thaynes43 commented on GitHub (Jun 25, 2024):

@dhiltgen hello, I am also losing the use of my GPUs after rebooting a Pop!_OS Linux VM running on top of Proxmox with two 3090s. If I run the install command again after rebooting, the GPUs are used again. I probably need to open a new issue with logs, but since this one was so fresh I wanted to check first whether there is a known issue like this Windows one. If not, please let me know what commands to run to collect the necessary log files and I will create a new ticket with them.

Thanks!


@dhiltgen commented on GitHub (Jun 25, 2024):

@thaynes43 please take a look at our troubleshooting guide, as it will likely cover your scenario.

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#container-fails-to-run-on-nvidia-gpu
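One host-side step from that guide is worth calling out for this thread: when the GPU vanishes after a reboot or suspend, reloading the `nvidia_uvm` kernel module often restores CUDA initialization without a reinstall. A sketch (requires root, run on the host rather than in a container; the function name is just for illustration):

```shell
# Reload the NVIDIA UVM kernel module; a stale nvidia_uvm state after
# reboot/suspend is a common cause of "GPU not found" until reinstall.
reload_uvm() {
  sudo rmmod nvidia_uvm 2>/dev/null || true  # ignore if not currently loaded
  sudo modprobe nvidia_uvm
}
# Usage (on the host): reload_uvm, then restart the ollama service.
```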


@thaynes43 commented on GitHub (Jun 25, 2024):

@dhiltgen thanks for the link! Right now I am not running in Docker; I used the following command to install:

```
curl -fsSL https://ollama.com/install.sh | sh
```

After a reboot I can still use Ollama, but it only uses the CPU until I rerun `install.sh`. I was actually planning to try running in Docker to see whether that corrects this.
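A quick post-reboot sanity check (a sketch, not part of Ollama): before reinstalling, confirm the NVIDIA driver and the UVM device node are actually present, since a missing module is below Ollama in the stack:

```shell
# Check whether the kernel driver answers and the UVM device node exists.
# If either check fails, the problem is in the driver stack, not Ollama.
if nvidia-smi > /dev/null 2>&1; then driver_state="OK"; else driver_state="not loaded"; fi
if [ -e /dev/nvidia-uvm ]; then uvm_state="OK"; else uvm_state="missing (try: sudo modprobe nvidia_uvm)"; fi
echo "driver: $driver_state / nvidia-uvm node: $uvm_state"
```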


@dhiltgen commented on GitHub (Jun 25, 2024):

Some of the steps in the container troubleshooting happen on the host, due to UVM driver quirks, and those may apply to your use case even without running in a container.


@AncientMystic commented on GitHub (Jun 25, 2024):

> Some of the steps in the container troubleshooting are on the host based on uvm driver quirks, and those may apply to your usecase even without running in a container.

UVM is a fun one. In this Proxmox case, for example: if you're using any kind of virtual environment with vGPU, UVM is not supported on the host or within LXC containers (I managed to merge the consumer and KVM drivers into a Frankenstein's monster of a driver, but it still doesn't work right), and vGPU does NOT support nesting in VMs, which means we can't use Docker or similar nested applications for GPU access inside VMs either, since they rely on virtualization.

I am currently using Docker in LXC to run open-webui and other containers without GPU, and running a Windows 10 VM with vGPU for Ollama. (Ubuntu/Linux VMs have better RAM performance, so they are better for this; I just have many other apps that need VRAM too, so I put it all on Windows for now.)

P.S. You also have to make sure your vGPU unlock, and possibly licensing, is set up correctly in the VM, or it won't work right there either. You also have to make sure the profile override is properly configured so that CUDA functions and enough VRAM is allocated, unless you pass the entire GPU through, which requires another GPU or iGPU for the host. (Last I checked, the 3000/4000 series doesn't work right with the consumer unlock for vGPU, so full passthrough might be the only option via Proxmox.)


@thaynes43 commented on GitHub (Jun 25, 2024):

Ahh, got it! The two 3090s are fully passed through. When I get back to the server I'll reboot and do some troubleshooting as to why they can no longer be used by the service. It works great otherwise.


@AncientMystic commented on GitHub (Jun 25, 2024):

> Ahh got it! The two 3090s are fully passed through, when I get back to the server I'll reboot and do some troubleshooting as to why they can no longer be used by the service. Works great otherwise.

If they are fully passed through to the VM, you should get performance equal to a bare-metal system, and nesting etc. should work fine; it could just be the driver version you are using, or something silly like that. (I have found that on Ubuntu-based distros such as Pop, the main repo is terrible for NVIDIA drivers. I forget which is the good one, but you can find other repos online with better drivers that work great from the moment they are installed; the main-repo drivers seem to be riddled with issues.)


@3DAlgoLab commented on GitHub (Nov 14, 2024):

I also had a similar problem on Ubuntu. I suspect this bug comes from the ollama service starting before GPU initialization has finished, so I made an **ad-hoc** workaround: instead of the service, I use a script that delays `ollama serve`.

```bash
# ollama_run
echo "Delayed Ollama Runner Start, it delays 10 sec."
sleep 10
ollama serve
```

Then I have this script called from *Ubuntu Startup Application Preferences*. The delay may not even be needed, since the script is called after GPU initialization has finished anyway.
![capture 2024-11-15 042959](https://github.com/user-attachments/assets/80ca322d-49ea-4baa-991d-15b1413612ef)
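For systemd installs there is a less manual variant of the same idea: a drop-in override that delays the stock `ollama.service` instead of bypassing it. A sketch, assuming a systemd-based install; the file name and 10-second delay are arbitrary choices:

```
# /etc/systemd/system/ollama.service.d/delay.conf
[Service]
ExecStartPre=/bin/sleep 10
```

After creating the override (for example via `sudo systemctl edit ollama.service`), run `sudo systemctl daemon-reload && sudo systemctl restart ollama`. This keeps the service running as its own user, so the model-storage location does not change.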

**WARNING**: After starting Ollama by running `ollama serve` directly, the model storage directory changes to `~/.ollama/models` (I don't know why), so previously downloaded models are not found. In that case, you can copy or move the whole `models` folder from `/usr/share/ollama/.ollama` to `~/.ollama`.
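On the storage-directory point: rather than copying the models, a user-launched server can be pointed at the service's existing store via the documented `OLLAMA_MODELS` variable. A sketch, assuming the `/usr/share/ollama/.ollama/models` path used by the Linux install script and that your user has read access to it:

```shell
# Point a user-run `ollama serve` at the systemd service's model store
# instead of the per-user default ~/.ollama/models.
export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
# ollama serve   # previously downloaded models are found without copying
```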


Reference: github-starred/ollama#65188