[GH-ISSUE #2012] Ollama not using my gpu whatsoever. #26924

Closed
opened 2026-04-22 03:41:34 -05:00 by GiteaMirror · 26 comments

Originally created by @Motzumoto on GitHub (Jan 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2012

Originally assigned to: @dhiltgen on GitHub.

![image](https://github.com/jmorganca/ollama/assets/45925152/368ba9e2-8113-46e7-9192-43f27ff91fb9)

I do have CUDA drivers installed:
![image](https://github.com/jmorganca/ollama/assets/45925152/bbd87158-7f01-40ee-98b9-c111858cd238)

GiteaMirror added the nvidia label 2026-04-22 03:41:34 -05:00

@DragonBtc93 commented on GitHub (Jan 17, 2024):

I'm having the same issue


@Rushmore75 commented on GitHub (Jan 19, 2024):

same issue (but on "pure" Linux, not WSL)


@mzhadigerov commented on GitHub (Jan 22, 2024):

Hi! Did you figure out why?


@Rushmore75 commented on GitHub (Jan 24, 2024):

not yet, but I'm tracking my adventure in issue #2065


@dhiltgen commented on GitHub (Jan 27, 2024):

@Motzumoto can you share the server log so we can see why it's not running on the GPU?

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues
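
For anyone else asked for logs here, a minimal sketch of how to capture the server log, assuming the default Linux install where Ollama runs as a systemd service named `ollama` (on macOS the log is typically `~/.ollama/logs/server.log`; on WSL, run `ollama serve` in a terminal and copy its output):

```bash
# Save the recent service log to a file that can be attached to the issue.
journalctl -u ollama --no-pager -n 500 > ollama-server.log

# Or follow the log live while reproducing the problem.
journalctl -u ollama -f
```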


@Motzumoto commented on GitHub (Jan 29, 2024):

> @Motzumoto can you share the server log so we can see why it's not running on the GPU?
>
> https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues

here's my log:
[log.txt](https://github.com/ollama/ollama/files/14090266/log.txt)


@BrujitoOz commented on GitHub (Jan 30, 2024):

I think I have a similar issue. I decided to run Ollama built from source on my WSL 2 setup to test my Nvidia MX130 GPU, which has compute capability 5.0.

Text generation is noticeably faster than when I had Ollama installed with `curl https://ollama.ai/install.sh | sh` (which only accepted compute capability 6.0 and up). However, in my task manager I don't see my Nvidia GPU being used; it always stays at 0%.

My device is a laptop with two GPUs: Intel(R) UHD Graphics 620 and Nvidia MX130. It's possible that it's using the Intel card.

In the logs, I saw this:

```
2024/01/29 18:50:55 routes.go:970: INFO Listening on 127.0.0.1:11434 (version 0.0.0)
2024/01/29 18:50:55 payload_common.go:106: INFO Extracting dynamic libraries...
2024/01/29 18:50:55 payload_common.go:145: INFO Dynamic LLM libraries [cpu_avx2 cpu_avx cpu]
2024/01/29 18:50:55 gpu.go:94: INFO Detecting GPU type
2024/01/29 18:50:55 gpu.go:242: INFO Searching for GPU management library libnvidia-ml.so
2024/01/29 18:51:01 gpu.go:288: INFO Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05 /usr/lib/wsl/lib/libnvidia-ml.so.1 /usr/lib/wsl/drivers/nvaci.inf_amd64_6eae42cbc3ee7e36/libnvidia-ml.so.1]
2024/01/29 18:51:01 gpu.go:300: INFO Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05: nvml vram init failure: 9
2024/01/29 18:51:01 gpu.go:99: INFO Nvidia GPU detected
2024/01/29 18:51:01 cpu_common.go:11: INFO CPU has AVX2
2024/01/29 18:51:01 gpu.go:146: INFO CUDA Compute Capability detected: 5.0
...
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =  1532.35 MiB
```

I think the "Unable to load CUDA management library" might have something to do with it.


@dhiltgen commented on GitHub (Jan 30, 2024):

@Motzumoto those logs are for 0.1.17 which is quite old (we're up to 0.1.22). That said, I do see it is running on your GPU, yet due to limited VRAM, is only able to load a very small percentage of the model, so most of the LLM is running on your CPU. If you run a smaller model that fits all or mostly in the VRAM, then you should see much better performance.

```
Jan 16 01:56:25 Motzumoto ollama[140]: 2024/01/16 01:56:25 llama.go:300: 4716 MB VRAM available, loading up to 3 GPU layers
Jan 16 01:56:25 Motzumoto ollama[140]: 2024/01/16 01:56:25 llama.go:436: starting llama runner
Jan 16 01:56:25 Motzumoto ollama[140]: 2024/01/16 01:56:25 llama.go:494: waiting for llama runner to start responding
Jan 16 01:56:26 Motzumoto ollama[140]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
Jan 16 01:56:26 Motzumoto ollama[140]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
Jan 16 01:56:26 Motzumoto ollama[140]: ggml_init_cublas: found 1 CUDA devices:
Jan 16 01:56:26 Motzumoto ollama[140]:   Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5
...
Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: using CUDA for GPU acceleration
Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: mem required  = 22868.48 MiB
Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: offloading 3 repeating layers to GPU
Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: offloaded 3/33 layers to GPU
Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: VRAM used: 2347.78 MiB
```
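
To make the suggestion above concrete, a hedged sketch: the log shows ~4.7 GB of free VRAM while mixtral reports ~22 GB required, so a 7B model at 4-bit quantization (roughly 4 GB, e.g. `mistral`) should offload almost entirely to the GPU. The model name and sizes are examples, not a maintainer recommendation:

```bash
# mixtral reports "mem required = 22868 MiB"; with ~4.7 GB of free VRAM only 3/33 layers fit.
# A 7B model at 4-bit quantization is roughly 4 GB and should fit almost completely.
ollama pull mistral
ollama run mistral "hello"

# In another terminal, confirm the VRAM is actually being used.
nvidia-smi
```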

@dhiltgen commented on GitHub (Jan 30, 2024):

@BrujitoOz support for CC 5.0+ cards will come in 0.1.23 (not yet shipped)


@BrujitoOz commented on GitHub (Jan 30, 2024):

> @BrujitoOz support for CC 5.0+ cards will come in 0.1.23 (not yet shipped)

Nice. Do you know if the message
"INFO Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05: nvml vram init failure: 9"
will also be resolved in 0.1.23? Or is it a separate problem that has nothing to do with Ollama not using the GPU?


@remy415 commented on GitHub (Jan 31, 2024):

@BrujitoOz The library loader attempts to load every detected library and will continue with the first one. As long as you have a valid libnvidia-ml.so* file in your LD_LIBRARY_PATH, it will load correctly. Try running `export LD_LIBRARY_PATH="/usr/lib/wsl/lib/:$LD_LIBRARY_PATH"` and see if you still get that error message. If it works, you can add the export line to the bottom of your ~/.bashrc file so it is applied every time you log in.

That being said, the MX130 is an older card and the models I found had only 2GB of VRAM. If your laptop also has 2GB of VRAM, you will need a very small model to be able to use the GPU for acceleration.
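
Spelled out as a small sketch (the `/usr/lib/wsl/lib/` path is the one from the log above; confirm the library actually exists there before making the change permanent):

```bash
# Prefer the WSL-provided NVML library over the stale copy in /usr/lib/x86_64-linux-gnu.
export LD_LIBRARY_PATH="/usr/lib/wsl/lib/:$LD_LIBRARY_PATH"

# Confirm the WSL copy of the library is actually there.
ls -l /usr/lib/wsl/lib/libnvidia-ml.so.1

# If the "Unable to load CUDA management library" error is gone, persist the setting.
echo 'export LD_LIBRARY_PATH="/usr/lib/wsl/lib/:$LD_LIBRARY_PATH"' >> ~/.bashrc
```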


@manzonif commented on GitHub (Feb 2, 2024):

Hi, I'm running an Ollama build on [Windows](https://github.com/ollama/ollama/blob/main/docs/development.md#windows). Everything seems to work, but my 4090 GPU is completely ignored and everything is processed by the CPU.
Here's my server log:

```
time=2024-02-02T04:35:58.549+01:00 level=INFO source=routes.go:983 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-02-02T04:35:58.549+01:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-02T04:35:58.565+01:00 level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu_avx2 cpu_avx cpu]"
[GIN] 2024/02/02 - 04:38:40 | 400 | 553µs | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/02/02 - 04:38:55 | 200 | 3.4939971s | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/02/02 - 04:46:46 | 200 | 503.6µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/02 - 04:46:46 | 200 | 3.316ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/02 - 04:46:46 | 200 | 594.4µs | 127.0.0.1 | POST "/api/show"
time=2024-02-02T04:46:46.460+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-02T04:46:46.461+01:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library nvml.dll"
time=2024-02-02T04:46:46.468+01:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [c:\Windows\System32\nvml.dll c:\windows\system32\nvml.dll]"
time=2024-02-02T04:46:46.481+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-02T04:46:46.481+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-02T04:46:46.499+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-02-02T04:46:46.499+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-02T04:46:46.500+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
time=2024-02-02T04:46:46.501+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
...
...
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 3647.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU input buffer size = 12.01 MiB
llama_new_context_with_model: CPU compute buffer size = 167.20 MiB
llama_new_context_with_model: graph splits (measure): 1
time=2024-02-02T04:46:48.303+01:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[GIN] 2024/02/02 - 04:46:48 | 200 | 2.2425064s | 127.0.0.1 | POST "/api/chat"
time=2024-02-02T04:49:11.006+01:00 level=INFO source=dyn_ext_server.go:170 msg="loaded 0 images"
[GIN] 2024/02/02 - 04:49:33 | 200 | 22.3927047s | 127.0.0.1 | POST "/api/chat"
```


@dhiltgen commented on GitHub (Feb 2, 2024):

@manzonif it looks like it's not detecting the CUDA libraries, and only building for CPU usage. We try to find where CUDA is installed, but that requires nvcc.exe to be in your path - here's where that logic lives - https://github.com/ollama/ollama/blob/main/llm/generate/gen_windows.ps1#L17

We're still refining things, but the dev guide for windows is here - https://github.com/ollama/ollama/blob/main/docs/development.md#windows
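
A quick sanity check before building, as a sketch (these two commands are standard NVIDIA tooling and work the same from PowerShell, cmd, or Git Bash):

```bash
# If nvcc isn't found, the generate script falls back to a CPU-only build.
nvcc --version   # CUDA compiler must be resolvable on PATH
nvidia-smi       # confirms the driver sees the GPU
```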


@remy415 commented on GitHub (Feb 2, 2024):

@manzonif that's weird. It detects your GPU and even says it's loading layers onto the GPU, then loads everything onto the CPU. I'm not seeing CUDA listed in the llama.cpp output.


@remy415 commented on GitHub (Feb 2, 2024):

@dhiltgen gpu.go detected nvml.dll, payload_common.go didn’t


@manzonif commented on GitHub (Feb 2, 2024):

> @manzonif it looks like it's not detecting the CUDA libraries, and only building for CPU usage. We try to find where CUDA is installed, but that requires `nvcc.exe` to be in your path - here's where that logic lives - https://github.com/ollama/ollama/blob/main/llm/generate/gen_windows.ps1#L17
>
> We're still refining things, but the dev guide for windows is here - https://github.com/ollama/ollama/blob/main/docs/development.md#windows

@dhiltgen Thanks for the reply. I followed your dev guide; it's linked in my previous post.
nvcc.exe is actually in the CUDA toolkit folder: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\bin
As @remy415 pointed out, it seems to be recognized in my log. Should I perhaps copy nvcc.exe to the ollama directory?

```
time=2024-02-02T07:32:57.232+01:00 level=INFO source=dyn_ext_server.go:383 msg="Updating PATH to C:\Users\Fausto\AppData\Local\Temp\ollama2003462564\cpu_avx2;C:\Users\Fausto\anaconda3\condabin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\libnvvp;C:\Program Files\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\libnvvp;c:\program files\nvidia gpu computing toolkit\cuda\v11.3\bin;c:\program files\nvidia gpu computing toolkit\cuda\v11.3\libnvvp;c:\windows\system32;c:\windows;c:\windows\system32\wbem;c:\windows\system32\windowspowershell\v1.0\;c:\windows\system32\openssh\;c:\program files\nvidia corporation\nvidia nvdlisr;c:\users\fausto\appdata\roaming\nvm;c:\program files\microsoft\web platform installer\;c:\program files\git\cmd;c:\program files\docker\docker\resources\bin;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\Docker\Docker\resources\bin;C:\Program Files\dotnet\;C:\Users\Fausto\AppData\Roaming\nvm;C:\Program Files\nodejs;C:\Program Files\Git\cmd;C:\Program Files\NVIDIA Corporation\Nsight Compute 2023.3.1\;C:\Users\Fausto\go\bin;C:\Users\Fausto\scoop\apps\gcc\current\bin;C:\Users\Fausto\scoop\shims;C:\Users\Fausto\.cargo\bin;C:\Users\Fausto\AppData\Local\Programs\Python\Python310\Scripts\;C:\Users\Fausto\AppData\Local\Programs\Python\Python310\;C:\Users\Fausto\AppData\Local\Microsoft\WindowsApps;C:\Users\Fausto\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\Fausto\AppData\Roaming\nvm;C:\Program Files\nodejs;C:\ffmpeg\ffmpeg.exe;C:\Users\Fausto\.dotnet\tools;C:\Users\Fausto\AppData\Local\Android\Sdk\tools;C:\Users\Fausto\AppData\Local\Android\Sdk\platform-tools;C:\gradle-8.3\bin;C:\Program Files\Java\jdk-17\bin;C:\Users\Fausto\anaconda3\Scripts"
loading library C:\Users\Fausto\AppData\Local\Temp\ollama2003462564\cpu_avx2\ext_server.dll
time=2024-02-02T07:32:57.262+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\Users\Fausto\AppData\Local\Temp\ollama2003462564\cpu_avx2\ext_server.dll"
```


@manzonif commented on GitHub (Feb 2, 2024):

Resolved! I set the CUDA_LIB_DIR and CUDACXX environment variables in the corresponding toolkit directories, recompiled, and now it works perfectly.

The only thing is that I have to start the server separately, otherwise I get:
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.


@BrujitoOz commented on GitHub (Feb 2, 2024):

> @BrujitoOz The library loader attempts to load every detected library and will continue with the first one. As long as you have a valid libnvidia-ml.so* file in your LD_LIBRARY_PATH, it will load correctly. Try running `export LD_LIBRARY_PATH="/usr/lib/wsl/lib/:$LD_LIBRARY_PATH"` and see if you still get that error message. If it works, you can add the export line to the bottom of your ~/.bashrc file so it is applied every time you log in.
>
> That being said, the MX130 is an older card and the models I found had only 2GB of VRAM. If your laptop also has 2GB of VRAM, you will need a very small model to be able to use the GPU for acceleration.

I just uninstalled libnvidia-ml.so.525.147.05 to have libnvidia-ml.so.1 as the first option

```
Discovered GPU libraries: [/usr/lib/wsl/lib/libnvidia-ml.so.1 /usr/lib/wsl/drivers/nvaci.inf_amd64_6eae42cbc3ee7e36/libnvidia-ml.so.1 /usr/lib/wsl/drivers/nvacig.inf_amd64_6eae42cbc3ee7e36/libnvidia-ml.so.1]"
wiring nvidia management library functions in /usr/lib/wsl/lib/libnvidia-ml.so.1
```

so the "INFO Unable to load CUDA management library ... nvml vram init failure: 9" is no more
although I've enabled debug mode with export OLLAMA_DEBUG=1 and rebuild again to see what happen and found this:

```
time=2024-02-02T03:48:00.354-05:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-02T03:48:00.354-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[0] CUDA device name: NVIDIA GeForce MX130
nvmlDeviceGetBoardPartNumber failed: 3
nvmlDeviceGetSerial failed: 3
[0] CUDA vbios version: 82.08.77.00.29
[0] CUDA brand: 5
[0] CUDA totalMem 2147483648
[0] CUDA usedMem 2098724864
time=2024-02-02T03:48:00.390-05:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.0"
time=2024-02-02T03:48:00.390-05:00 level=DEBUG source=gpu.go:231 msg="cuda detected 1 devices with 977M available memory"
```

What do nvmlDeviceGetBoardPartNumber and nvmlDeviceGetSerial mean? Task Manager still shows 0% usage on the GPU, even with small models like tinyllama.
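
One thing worth checking before trusting Task Manager: its default GPU graphs often don't reflect CUDA compute coming from WSL (switching one of the graphs to "Cuda" helps), so watching nvidia-smi inside WSL is usually a more reliable signal. A small sketch, assuming the WSL NVIDIA tools are on PATH:

```bash
# Refresh nvidia-smi every second while a prompt is running in another terminal.
watch -n 1 nvidia-smi

# Or log just memory use and GPU utilization over time.
nvidia-smi --query-gpu=memory.used,utilization.gpu --format=csv -l 1
```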


@remy415 commented on GitHub (Feb 2, 2024):

> Resolved! I set the CUDA_LIB_DIR and CUDACXX environment variables in the corresponding toolkit directories, recompiled, and now it works perfectly.
>
> The only thing is that I have to start the server separately, otherwise I get: Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.

Yes, the Ollama binary does both the serving and the front end, this is expected behavior.

> what nvmlDeviceGetBoardPartNumber and nvmlDeviceGetSerial means?

nvmlDeviceGetBoardPartNumber and nvmlDeviceGetSerial are informational messages only and don't otherwise affect the application. You can ignore them.

> task manager still shows 0% usage on GPU, even with small models like tinyllama

tinyllama looks cool, I'll have to check it out. Can you paste the rest of the log? Tinyllama is only supposed to take ~600-700MB of memory but it looks like something else is occupying ~2GB of your VRAM, do you have any other applications running GPU-intensive tasks?


@BrujitoOz commented on GitHub (Feb 4, 2024):

> > Resolved! I set the CUDA_LIB_DIR and CUDACXX environment variables in the corresponding toolkit directories, recompiled, and now it works perfectly.
> > The only thing is that I have to start the server separately, otherwise I get: Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
>
> Yes, the Ollama binary does both the serving and the front end, this is expected behavior.
>
> > what nvmlDeviceGetBoardPartNumber and nvmlDeviceGetSerial means?
>
> nvmlDeviceGetBoardPartNumber and nvmlDeviceGetSerial are informational messages only and don't otherwise affect the application. You can ignore them.
>
> > task manager still shows 0% usage on GPU, even with small models like tinyllama
>
> tinyllama looks cool, I'll have to check it out. Can you paste the rest of the log? Tinyllama is only supposed to take ~600-700MB of memory but it looks like something else is occupying ~2GB of your VRAM, do you have any other applications running GPU-intensive tasks?

I downloaded version v0.1.23 of Ollama, and now the GPU is used. Thanks for all the help, everyone.


@HyperUpscale commented on GitHub (Feb 17, 2024):

> > > Resolved! I set the CUDA_LIB_DIR and CUDACXX environment variables in the corresponding toolkit directories, recompiled, and now it works perfectly.
> > > The only thing is that I have to start the server separately, otherwise I get: Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
> >
> > Yes, the Ollama binary does both the serving and the front end, this is expected behavior.
> >
> > > what nvmlDeviceGetBoardPartNumber and nvmlDeviceGetSerial means?
> >
> > nvmlDeviceGetBoardPartNumber and nvmlDeviceGetSerial are informational messages only and don't otherwise affect the application. You can ignore them.
> >
> > > task manager still shows 0% usage on GPU, even with small models like tinyllama
> >
> > tinyllama looks cool, I'll have to check it out. Can you paste the rest of the log? Tinyllama is only supposed to take ~600-700MB of memory but it looks like something else is occupying ~2GB of your VRAM, do you have any other applications running GPU-intensive tasks?
>
> I downloaded version v0.1.23 of Ollama, and now the GPU is used. Thanks for all the help, everyone.

How did you install version v0.1.23?

Do you have a link ...


@remy415 commented on GitHub (Feb 17, 2024):

> Do you have a link ...

https://www.ollama.com


@HyperUpscale commented on GitHub (Feb 17, 2024):

**WOW, You are way too smart, my mind can't comprehend the brilliance of the solution provided.**

Still looking for a simple way to get a previous version and a way to install so I can test.

> > Do you have a link ...
>
> https://www.ollama.com


@remy415 commented on GitHub (Feb 17, 2024):

> **WOW, You are way too smart, my mind can't comprehend the brilliance of the solution provided.**
>
> Still looking for a simple way to get a previous version and a way to install so I can test.
>
> > > Do you have a link ...
> >
> > https://www.ollama.com

You asked for a link, I gave you a link. v0.1.23 was the current version ~2 weeks ago when they posted that message. They likely downloaded it from https://www.ollama.com, as I said. There isn’t a versioned download that I could find, which makes sense given the “in development” status indicated by the version being below 1.0.

There's generally no reason to downgrade your version unless the current version is giving you issues that the older one didn't, which you haven't indicated is the case. If you want an older version, you'll need to download a previous commit and build it yourself locally; there are instructions in the developer's guide. Good luck.
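
For completeness, a hedged sketch of building a specific older version from source per the development guide (the tag below is only an example; the repo tags releases as vX.Y.Z, and you'll need Go, cmake, and a C/C++ toolchain set up as the guide describes):

```bash
git clone https://github.com/ollama/ollama.git
cd ollama
git checkout v0.1.22     # example release tag; pick whichever version you need

# Generate the native runners, then build and run the binary.
go generate ./...
go build .
./ollama serve
```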


@Motzumoto commented on GitHub (Feb 20, 2024):

> @Motzumoto those logs are for 0.1.17 which is quite old (we're up to 0.1.22). That said, I do see it is running on your GPU, yet due to limited VRAM, is only able to load a very small percentage of the model, so most of the LLM is running on your CPU. If you run a smaller model that fits all or mostly in the VRAM, then you should see much better performance.
>
> ```
> Jan 16 01:56:25 Motzumoto ollama[140]: 2024/01/16 01:56:25 llama.go:300: 4716 MB VRAM available, loading up to 3 GPU layers
> Jan 16 01:56:25 Motzumoto ollama[140]: 2024/01/16 01:56:25 llama.go:436: starting llama runner
> Jan 16 01:56:25 Motzumoto ollama[140]: 2024/01/16 01:56:25 llama.go:494: waiting for llama runner to start responding
> Jan 16 01:56:26 Motzumoto ollama[140]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
> Jan 16 01:56:26 Motzumoto ollama[140]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
> Jan 16 01:56:26 Motzumoto ollama[140]: ggml_init_cublas: found 1 CUDA devices:
> Jan 16 01:56:26 Motzumoto ollama[140]:   Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5
> ...
> Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: using CUDA for GPU acceleration
> Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: mem required  = 22868.48 MiB
> Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: offloading 3 repeating layers to GPU
> Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: offloaded 3/33 layers to GPU
> Jan 16 01:56:27 Motzumoto ollama[140]: llm_load_tensors: VRAM used: 2347.78 MiB
> ```

Are there any LLMs you can suggest that are good for coding support? I'm planning on integrating this into a Discord bot to assist people with their programming issues. I went with mixtral because it says on Hugging Face that it's "exceptionally good" at coding.


@dhiltgen commented on GitHub (Mar 11, 2024):

It sounds like the original issue has been resolved. @Motzumoto folks on our [Discord channel](https://discord.gg/ollama) might have suggestions for smaller coding models, but in general most of the coding models I've used are larger.

Reference: github-starred/ollama#26924