[GH-ISSUE #6311] Error: no suitable llama servers found #3958

Closed
opened 2026-04-12 14:50:00 -05:00 by GiteaMirror · 8 comments

Originally created by @vagitablebirdcode on GitHub (Aug 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6311

What is the issue?

When I run the command `ollama run qwen2:0.5b`, it fails with the error `no suitable llama servers found`. The relevant debug log is as follows:
time=2024-08-12T00:20:56.118+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="27.9 GiB" before.free="11.4 GiB" before.free_swap="33.5 GiB" now.total="27.9 GiB" now.free="11.4 GiB" now.free_swap="33.4 GiB"
time=2024-08-12T00:20:56.134+08:00 level=DEBUG source=gpu.go:407 msg="updating cuda memory data" gpu=GPU-045c0234-9e9a-658c-53f4-ce9c92f98381 name="NVIDIA GeForce RTX 3060 Laptop GPU" overhead="0 B" before.total="6.0 GiB" before.free="4.1 GiB" now.total="6.0 GiB" now.free="4.1 GiB" now.used="1.9 GiB"
releasing nvml library
time=2024-08-12T00:20:56.149+08:00 level=DEBUG source=sched.go:219 msg="loading first model" model=D:\AI-app\ollama_model\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8
time=2024-08-12T00:20:56.150+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[4.1 GiB]"
time=2024-08-12T00:20:56.150+08:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=D:\AI-app\ollama_model\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8 gpu=GPU-045c0234-9e9a-658c-53f4-ce9c92f98381 parallel=4 available=4379136000 required="1.2 GiB"
time=2024-08-12T00:20:56.150+08:00 level=DEBUG source=server.go:101 msg="system memory" total="27.9 GiB" free="11.4 GiB" free_swap="33.4 GiB"
time=2024-08-12T00:20:56.150+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[4.1 GiB]"
time=2024-08-12T00:20:56.150+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[4.1 GiB]" memory.required.full="1.2 GiB" memory.required.partial="1.2 GiB" memory.required.kv="96.0 MiB" memory.required.allocations="[1.2 GiB]" memory.weights.total="288.2 MiB" memory.weights.repeating="150.3 MiB" memory.weights.nonrepeating="137.9 MiB" memory.graph.full="298.5 MiB" memory.graph.partial="405.0 MiB"
time=2024-08-12T00:20:56.152+08:00 level=INFO source=sched.go:424 msg="NewLlamaServer failed" model=D:\AI-app\ollama_model\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8 error="no suitable llama servers found"
[GIN] 2024/08/12 - 00:20:56 | 500 | 47.8946ms | 127.0.0.1 | POST "/api/chat"

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.3.4

GiteaMirror added the bug label 2026-04-12 14:50:00 -05:00

@rick-github commented on GitHub (Aug 11, 2024):

If you include the full log it may show something relevant.


@vagitablebirdcode commented on GitHub (Aug 11, 2024):

I set OLLAMA_DEBUG=1, and the full log is as follows:
2024/08/12 00:39:55 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\AI-app\ollama_model OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\USER\AppData\Local\Programs\Ollama OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-12T00:39:55.674+08:00 level=INFO source=images.go:781 msg="total blobs: 14"
time=2024-08-12T00:39:55.675+08:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-12T00:39:55.675+08:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.3.4)"
time=2024-08-12T00:39:55.677+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries []"
time=2024-08-12T00:39:55.677+08:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-12T00:39:55.677+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-12T00:39:55.677+08:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-12T00:39:55.677+08:00 level=DEBUG source=gpu.go:90 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-08-12T00:39:55.677+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=nvml.dll
time=2024-08-12T00:39:55.677+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[D:\Coding\Anaconda3\nvml.dll* D:\Coding\Anaconda3\Library\mingw-w64\bin\nvml.dll* D:\Coding\Anaconda3\Library\usr\bin\nvml.dll* D:\Coding\Anaconda3\Library\bin\nvml.dll* D:\Coding\Anaconda3\Scripts\nvml.dll* D:\Coding\Anaconda3\bin\nvml.dll* D:\Coding\Anaconda3\condabin\nvml.dll* D:\Coding\MPI\Bin\nvml.dll* D:\VMware\VMware Workstation\bin\nvml.dll* C:\Windows\system32\nvml.dll* C:\Windows\nvml.dll* C:\Windows\System32\Wbem\nvml.dll* C:\Windows\System32\WindowsPowerShell\v1.0\nvml.dll* C:\Windows\System32\OpenSSH\nvml.dll* C:\Program Files\dotnet\nvml.dll* C:\Program Files\Microsoft\jdk-11.0.12.7-hotspot\bin\nvml.dll* C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll* C:\Program Files\NVIDIA Corporation\Nsight Compute 2020.3.1\nvml.dll* C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR\nvml.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin\nvml.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\nvml.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include\nvml.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp\nvml.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\extras\CUPTI\lib64\nvml.dll* C:\Users\USER\.mujoco\mjpro150\bin\nvml.dll* D:\LaTeX\Strawberry\c\bin\nvml.dll* D:\LaTeX\Strawberry\perl\site\bin\nvml.dll* D:\LaTeX\Strawberry\perl\bin\nvml.dll* D:\LaTeX\texlive\2023\bin\windows\nvml.dll* D:\Coding\pandoc\nvml.dll* D:\Coding\mingw64\bin\nvml.dll* D:\MATLAB\R2021b\runtime\win64\nvml.dll* D:\MATLAB\R2021b\bin\nvml.dll* D:\Science\PDFtk\bin\nvml.dll* D:\website\MongoDB\server\7.0\bin\nvml.dll* D:\Coding\Git\cmd\nvml.dll* D:\Coding\nodejs\nvml.dll* C:\Users\USER\AppData\Local\Programs\Ollama\nvml.dll* D:\Coding\GiteeAI\nvml.dll* C:\Users\USER\AppData\Local\Microsoft\WindowsApps\nvml.dll* C:\Users\USER\.dotnet\tools\nvml.dll* D:\Coding\Microsoft VS Code\bin\nvml.dll* D:\Coding\JetBrains\CLion 2023.1.1\bin\nvml.dll* D:\Coding\JetBrains\PyCharm 2023.1\bin\nvml.dll* C:\Users\USER\AppData\Local\Programs\oh-my-posh\bin\nvml.dll* D:\Coding\nodejs\node_global\nvml.dll* D:\LaTeX\texlive\2023\bin\windows\nvml.dll* D:\Coding\JetBrains\PyCharm 2023.1\bin\nvml.dll* D:\Coding\Lens\resources\cli\bin\nvml.dll* C:\Program Files (x86)\Nmap\nvml.dll* C:\Users\USER\AppData\Local\Programs\Ollama\nvml.dll* D:\Coding\JetBrains\PyCharm Community Edition 2024.1.5\bin\nvml.dll* C:\Users\USER\nvml.dll* C:\Users\USER\nvml.dll* c:\Windows\System32\nvml.dll]"
time=2024-08-12T00:39:55.706+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvml.dll*"
time=2024-08-12T00:39:55.713+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths="[C:\Windows\system32\nvml.dll c:\Windows\System32\nvml.dll]"
time=2024-08-12T00:39:55.734+08:00 level=DEBUG source=gpu.go:112 msg="nvidia-ml loaded" library=C:\Windows\system32\nvml.dll
time=2024-08-12T00:39:55.734+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=nvcuda.dll
time=2024-08-12T00:39:55.734+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[D:\Coding\Anaconda3\nvcuda.dll* D:\Coding\Anaconda3\Library\mingw-w64\bin\nvcuda.dll* D:\Coding\Anaconda3\Library\usr\bin\nvcuda.dll* D:\Coding\Anaconda3\Library\bin\nvcuda.dll* D:\Coding\Anaconda3\Scripts\nvcuda.dll* D:\Coding\Anaconda3\bin\nvcuda.dll* D:\Coding\Anaconda3\condabin\nvcuda.dll* D:\Coding\MPI\Bin\nvcuda.dll* D:\VMware\VMware Workstation\bin\nvcuda.dll* C:\Windows\system32\nvcuda.dll* C:\Windows\nvcuda.dll* C:\Windows\System32\Wbem\nvcuda.dll* C:\Windows\System32\WindowsPowerShell\v1.0\nvcuda.dll* C:\Windows\System32\OpenSSH\nvcuda.dll* C:\Program Files\dotnet\nvcuda.dll* C:\Program Files\Microsoft\jdk-11.0.12.7-hotspot\bin\nvcuda.dll* C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvcuda.dll* C:\Program Files\NVIDIA Corporation\Nsight Compute 2020.3.1\nvcuda.dll* C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR\nvcuda.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin\nvcuda.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\nvcuda.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\include\nvcuda.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp\nvcuda.dll* C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\extras\CUPTI\lib64\nvcuda.dll* C:\Users\USER\.mujoco\mjpro150\bin\nvcuda.dll* D:\LaTeX\Strawberry\c\bin\nvcuda.dll* D:\LaTeX\Strawberry\perl\site\bin\nvcuda.dll* D:\LaTeX\Strawberry\perl\bin\nvcuda.dll* D:\LaTeX\texlive\2023\bin\windows\nvcuda.dll* D:\Coding\pandoc\nvcuda.dll* D:\Coding\mingw64\bin\nvcuda.dll* D:\MATLAB\R2021b\runtime\win64\nvcuda.dll* D:\MATLAB\R2021b\bin\nvcuda.dll* D:\Science\PDFtk\bin\nvcuda.dll* D:\website\MongoDB\server\7.0\bin\nvcuda.dll* D:\Coding\Git\cmd\nvcuda.dll* D:\Coding\nodejs\nvcuda.dll* C:\Users\USER\AppData\Local\Programs\Ollama\nvcuda.dll* D:\Coding\GiteeAI\nvcuda.dll* C:\Users\USER\AppData\Local\Microsoft\WindowsApps\nvcuda.dll* C:\Users\USER\.dotnet\tools\nvcuda.dll* D:\Coding\Microsoft VS Code\bin\nvcuda.dll* D:\Coding\JetBrains\CLion 2023.1.1\bin\nvcuda.dll* D:\Coding\JetBrains\PyCharm 2023.1\bin\nvcuda.dll* C:\Users\USER\AppData\Local\Programs\oh-my-posh\bin\nvcuda.dll* D:\Coding\nodejs\node_global\nvcuda.dll* D:\LaTeX\texlive\2023\bin\windows\nvcuda.dll* D:\Coding\JetBrains\PyCharm 2023.1\bin\nvcuda.dll* D:\Coding\Lens\resources\cli\bin\nvcuda.dll* C:\Program Files (x86)\Nmap\nvcuda.dll* C:\Users\USER\AppData\Local\Programs\Ollama\nvcuda.dll* D:\Coding\JetBrains\PyCharm Community Edition 2024.1.5\bin\nvcuda.dll* C:\Users\USER\nvcuda.dll* C:\Users\USER\nvcuda.dll* c:\windows\system*\nvcuda.dll]"
time=2024-08-12T00:39:55.739+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvcuda.dll*"
time=2024-08-12T00:39:55.747+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[C:\Windows\system32\nvcuda.dll]
CUDA driver version: 12.4
time=2024-08-12T00:39:55.783+08:00 level=DEBUG source=gpu.go:123 msg="detected GPUs" count=1 library=C:\Windows\system32\nvcuda.dll
[GPU-045c0234-9e9a-658c-53f4-ce9c92f98381] CUDA totalMem 6143 mb
[GPU-045c0234-9e9a-658c-53f4-ce9c92f98381] CUDA freeMem 5122 mb
[GPU-045c0234-9e9a-658c-53f4-ce9c92f98381] Compute Capability 8.6
time=2024-08-12T00:39:55.952+08:00 level=DEBUG source=amd_windows.go:33 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
releasing cuda driver library
releasing nvml library
time=2024-08-12T00:39:55.954+08:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-045c0234-9e9a-658c-53f4-ce9c92f98381 library=cuda compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3060 Laptop GPU" total="6.0 GiB" available="5.0 GiB"
[GIN] 2024/08/12 - 00:40:11 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/12 - 00:40:11 | 404 | 513.6µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/08/12 - 00:40:14 | 200 | 3.7056792s | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/08/12 - 00:40:17 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/12 - 00:40:17 | 200 | 1.6011ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/08/12 - 00:40:23 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/12 - 00:40:23 | 200 | 14.7391ms | 127.0.0.1 | POST "/api/show"
time=2024-08-12T00:40:24.017+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="27.9 GiB" before.free="11.8 GiB" before.free_swap="33.4 GiB" now.total="27.9 GiB" now.free="11.4 GiB" now.free_swap="32.8 GiB"
time=2024-08-12T00:40:24.028+08:00 level=DEBUG source=gpu.go:407 msg="updating cuda memory data" gpu=GPU-045c0234-9e9a-658c-53f4-ce9c92f98381 name="NVIDIA GeForce RTX 3060 Laptop GPU" overhead="0 B" before.total="6.0 GiB" before.free="5.0 GiB" now.total="6.0 GiB" now.free="4.1 GiB" now.used="1.9 GiB"
releasing nvml library
time=2024-08-12T00:40:24.029+08:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0xccbb00 gpu_count=1
time=2024-08-12T00:40:24.043+08:00 level=DEBUG source=sched.go:219 msg="loading first model" model=D:\AI-app\ollama_model\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8
time=2024-08-12T00:40:24.044+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[4.1 GiB]"
time=2024-08-12T00:40:24.044+08:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=D:\AI-app\ollama_model\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8 gpu=GPU-045c0234-9e9a-658c-53f4-ce9c92f98381 parallel=4 available=4436783104 required="1.2 GiB"
time=2024-08-12T00:40:24.044+08:00 level=DEBUG source=server.go:101 msg="system memory" total="27.9 GiB" free="11.4 GiB" free_swap="32.8 GiB"
time=2024-08-12T00:40:24.044+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[4.1 GiB]"
time=2024-08-12T00:40:24.044+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[4.1 GiB]" memory.required.full="1.2 GiB" memory.required.partial="1.2 GiB" memory.required.kv="96.0 MiB" memory.required.allocations="[1.2 GiB]" memory.weights.total="288.2 MiB" memory.weights.repeating="150.3 MiB" memory.weights.nonrepeating="137.9 MiB" memory.graph.full="298.5 MiB" memory.graph.partial="405.0 MiB"
time=2024-08-12T00:40:24.046+08:00 level=INFO source=sched.go:424 msg="NewLlamaServer failed" model=D:\AI-app\ollama_model\blobs\sha256-8de95da68dc485c0889c205384c24642f83ca18d089559c977ffc6a3972a71a8 error="no suitable llama servers found"
[GIN] 2024/08/12 - 00:40:24 | 500 | 44.7374ms | 127.0.0.1 | POST "/api/chat"
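For anyone reproducing this on Windows, a minimal PowerShell sketch of enabling the debug log as above (assuming the tray app is restarted after the variable is set):

```powershell
# Persist OLLAMA_DEBUG for the current user; the server reads it at startup
[Environment]::SetEnvironmentVariable("OLLAMA_DEBUG", "1", "User")

# Or set it only for the current session and launch the server by hand
$env:OLLAMA_DEBUG = "1"
ollama serve
```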


@rick-github commented on GitHub (Aug 11, 2024):

What's in the directory `C:\Users\USER\AppData\Local\Programs\Ollama`? This is where the runners are supposed to be (`OLLAMA_RUNNERS_DIR`). I note that `OLLAMA_MODELS` is `D:\AI-app\ollama_model`; if you moved ollama from C: to D:, you need to point `OLLAMA_RUNNERS_DIR` to the new location of the runners.
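A minimal PowerShell sketch of gathering that listing (the path comes from the log above; `USER` stands in for the real account name):

```powershell
# Recursively list the Ollama install directory, where the runners should live
Get-ChildItem -Recurse "C:\Users\USER\AppData\Local\Programs\Ollama" |
    Select-Object FullName, Length
```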


@vagitablebirdcode commented on GitHub (Aug 11, 2024):

The Ollama installer's default install path is `C:\Users\USER\AppData\Local\Programs\Ollama`, as shown in the image below. I can't choose the install path when I run the installer. I will try to change the model path to C: and run the command again.
![image](https://github.com/user-attachments/assets/cb93fd04-fc74-4681-921e-49b1da570ef6)
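As a sketch of that change, assuming a user-level environment variable and a hypothetical target directory on C::

```powershell
# Point OLLAMA_MODELS at a directory on C: (hypothetical path), then restart Ollama
[Environment]::SetEnvironmentVariable("OLLAMA_MODELS", "C:\Users\USER\ollama_models", "User")
```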


@vagitablebirdcode commented on GitHub (Aug 11, 2024):

I moved the model to `C:\Users\USER\AppData\Local\Programs\Ollama\ollama_model` and changed `OLLAMA_MODELS`, but the same error still occurred.


@jmorganca commented on GitHub (Aug 11, 2024):

Would it be possible to share what's in the `ollama_runners` directory (and subdirectories)? Thanks so much.


@jmorganca commented on GitHub (Aug 11, 2024):

Hi @vagitablebirdcode, try unsetting `OLLAMA_RUNNERS_DIR`; it will then default to the correct location. Let me know if that doesn't work!
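A minimal PowerShell sketch of clearing the override, assuming it was set as a user-level environment variable:

```powershell
# Drop the variable from the current session (no error if it isn't set)
Remove-Item Env:OLLAMA_RUNNERS_DIR -ErrorAction SilentlyContinue

# Remove any persistent user-level value, then restart the Ollama app
[Environment]::SetEnvironmentVariable("OLLAMA_RUNNERS_DIR", $null, "User")
```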


@vagitablebirdcode commented on GitHub (Aug 12, 2024):

Thanks! When I unset `OLLAMA_RUNNERS_DIR` and put the model path and Ollama on the same disk (C:), the server recognizes and loads the model normally!
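A quick check that the fix took, sketched in PowerShell (run from a fresh shell so stale values are gone):

```powershell
# Confirm no OLLAMA_RUNNERS_DIR override remains, then retry the model
Get-ChildItem Env: | Where-Object Name -like "OLLAMA_*"
ollama run qwen2:0.5b
```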

Reference: github-starred/ollama#3958