[GH-ISSUE #1940] CUDA error 100 after detecting GPU libraries on system #26876

Closed
opened 2026-04-22 03:35:08 -05:00 by GiteaMirror · 10 comments

Originally created by @jmorganca on GitHub (Jan 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1940

Originally assigned to: @dhiltgen on GitHub.

It seems that upon detecting an Nvidia card, ollama may error with CUDA error 100:

Jan 11 15:37:50 LR9135SQP ollama[5616]: 2024/01/11 15:37:50 gpu.go:88: Detecting GPU type
Jan 11 15:37:50 LR9135SQP ollama[5616]: 2024/01/11 15:37:50 gpu.go:203: Searching for GPU management library libnvidia-ml.so
Jan 11 15:37:50 LR9135SQP ollama[5616]: 2024/01/11 15:37:50 gpu.go:248: Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.535.104.05 /usr/lib/wsl/lib/libnvidia-ml.so.1]
Jan 11 15:37:50 LR9135SQP ollama[5616]: 2024/01/11 15:37:50 gpu.go:259: Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.535.104.05: nvml vram init failure: 9
Jan 11 15:37:51 LR9135SQP ollama[5616]: 2024/01/11 15:37:51 gpu.go:94: Nvidia GPU detected
Jan 11 15:37:51 LR9135SQP ollama[5616]: 2024/01/11 15:37:51 gpu.go:135: CUDA Compute Capability detected: 7.5
Jan 11 15:55:41 LR9135SQP ollama[5616]: CUDA error 100 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: no CUDA-capable device is detected
Jan 11 15:55:41 LR9135SQP ollama[5616]: current device: 1881676272
Jan 11 15:55:41 LR9135SQP ollama[5616]: Lazy loading /tmp/ollama958766944/cuda/libext_server.so library
Jan 11 15:55:41 LR9135SQP ollama[5616]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: !"CUDA error"
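
For anyone trying to narrow this down, a minimal standalone CUDA runtime check (independent of ollama) can confirm whether error 100 is coming from the driver/runtime layer rather than from ollama itself. This is only a sketch; the file name check_cuda.cu is made up and it is not part of the ollama codebase:

```c
// check_cuda.cu (hypothetical): compile with `nvcc check_cuda.cu -o check_cuda`.
// On an affected system this is expected to fail with error 100
// ("no CUDA-capable device is detected"), matching the ggml assert above.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %d (%s)\n", (int)err, cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA runtime sees %d device(s)\n", count);
    return 0;
}
```
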
GiteaMirror added the nvidia and bug labels 2026-04-22 03:35:08 -05:00

@abdinal1 commented on GitHub (Jan 13, 2024):

The error can easily be reproduced with the Kaggle notebook I released:

https://www.kaggle.com/code/aliabdin1/ollama-server/


@jmorganca commented on GitHub (Jan 14, 2024):

@abdinal1 thanks!


@MaxPhilipss commented on GitHub (Jan 16, 2024):

Having the same issue, leading to Error: Post "http://127.0.0.1:11434/api/generate": EOF (#1991)


@Ceye4n commented on GitHub (Jan 17, 2024):

I have exactly the same issue, trying to run Mixtral 8x7B on an RTX 2060 6GB through WSL2 on Kali Linux.


@dhiltgen commented on GitHub (Jan 21, 2024):

Based on the log message line numbers, I have a feeling this is a variation of #1877


@dhiltgen commented on GitHub (Jan 22, 2024):

    /**
     * This indicates that no CUDA-capable devices were detected by the installed
     * CUDA driver.
     */
    cudaErrorNoDevice                     =     100,

It's still unclear to me why nvidia-ml reports devices but the CUDA library does not. My suspicion is mismatched libraries/drivers. In 0.1.21 we've switched to linking against the CUDA v11 shared libraries and carrying them as payloads, instead of linking the v11 static libraries directly into ollama. This might be sufficient to get us linked to the underlying host CUDA libraries, although we might need some further mods to our rpath settings.

Please give the pre-release 0.1.21 a try on any system that was failing with the CUDA error 100 and report back if the problem is resolved, or still present.

One other possible explanation might be a mistaken driver install in the WSL2 setup. According to the CUDA WSL2 docs, you're not supposed to install the linux driver, as they have wired up a pass-through model for WSL2, but it's possible to accidentally install the driver and cause things not to work.
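
To make the discrepancy above (libnvidia-ml seeing a device while the CUDA runtime sees none) easier to check, here is a hedged standalone sketch that queries both layers. It is not ollama code; it assumes the NVML and CUDA runtime development headers are installed, and the file/binary names are made up:

```c
// check_layers.cu (hypothetical): compile with `nvcc check_layers.cu -lnvidia-ml -o check_layers`.
// On systems hitting this bug, NVML would typically report >= 1 device while
// cudaGetDeviceCount returns cudaErrorNoDevice (100), pointing at a
// driver/library mismatch rather than missing hardware.
#include <stdio.h>
#include <cuda_runtime.h>
#include <nvml.h>

int main(void) {
    // NVML layer: the same library (libnvidia-ml.so) that ollama's gpu.go loads for detection.
    nvmlReturn_t nr = nvmlInit_v2();
    if (nr == NVML_SUCCESS) {
        unsigned int nvmlCount = 0;
        if (nvmlDeviceGetCount_v2(&nvmlCount) == NVML_SUCCESS) {
            printf("NVML sees %u device(s)\n", nvmlCount);
        }
        nvmlShutdown();
    } else {
        printf("nvmlInit_v2 failed: %d\n", (int)nr);
    }

    // CUDA runtime layer: what ggml-cuda uses once a model is actually loaded.
    int cudaCount = 0;
    cudaError_t ce = cudaGetDeviceCount(&cudaCount);
    if (ce != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %d (%s)\n", (int)ce, cudaGetErrorString(ce));
    } else {
        printf("CUDA runtime sees %d device(s)\n", cudaCount);
    }
    return 0;
}
```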


@bala-nullpointer commented on GitHub (Jan 24, 2024):

Hello,
I have updated to version 0.1.21 but am still getting a CUDA error, although it is not CUDA error 100. It's a very verbose error trace, so I'm just pasting the initial CUDA error and the first part of the goroutine trace.

CUDA error: an illegal memory access was encountered
  current device: 0, in function ggml_backend_cuda_buffer_clear at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:10346
  cudaDeviceSynchronize()
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:233: !"CUDA error"
SIGABRT: abort
PC=0x7fa94dcc900b m=8 sigcode=18446744073709551610
signal arrived during cgo execution.

goroutine 6 [syscall]:
runtime.cgocall(0x9b4850, 0xc0003587f8)
        /usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0003587d0 sp=0xc000358798 pc=0x409b0b
github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x7fa8e0001370, 0x7fa8d797cbc0, 0x7fa8d796e6a0, 0x7fa8d7972700, 0x7fa8d7980620, 0x7fa8d797a0e0, 0x7fa8d79726d0, 0x7fa8d796e720, 0x7fa8d7980dd0, 0x7fa8d79801d0, ...}, ...)
        _cgo_gotypes.go:282 +0x45 fp=0xc0003587f8 sp=0xc0003587d0 pc=0x7c2b25
github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xae6fd9?, 0xc?)
        /go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:148 +0xef fp=0xc0003588e8 sp=0xc0003587f8 pc=0x7c404f
github.com/jmorganca/ollama/llm.newDynExtServer({0xc00049a5a0, 0x2f}, {0xc0005b2180, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
        /go/src/github.com/jmorganca/ollama/llm/dyn_ext_server.go:148 +0xa45 fp=0xc000358b88 sp=0xc0003588e8 pc=0x7c3ce5
github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0x0, ...}, ...)
        /go/src/github.com/jmorganca/ollama/llm/llm.go:148 +0x36a fp=0xc000358d48 sp=0xc000358b88 pc=0x7c04ea
github.com/jmorganca/ollama/llm.New({0x0?, 0x1000100000100?}, {0xc0005b2180, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...)
        /go/src/github.com/jmorganca/ollama/llm/llm.go:123 +0x6f9 fp=0xc000358fb8 sp=0xc000358d48 pc=0x7bff19
github.com/jmorganca/ollama/server.load(0xc000176900?, 0xc000176900, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...)
        /go/src/github.com/jmorganca/ollama/server/routes.go:83 +0x3a5 fp=0xc000359138 sp=0xc000358fb8 pc=0x990ba5
github.com/jmorganca/ollama/server.ChatHandler(0xc000480f00)
        /go/src/github.com/jmorganca/ollama/server/routes.go:1071 +0x828 fp=0xc000359748 sp=0xc000359138 pc=0x99b4e8
github.com/gin-gonic/gin.(*Context).Next(...)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc000480f00)
        /go/src/github.com/jmorganca/ollama/server/routes.go:883 +0x68 fp=0xc000359780 sp=0xc000359748 pc=0x99a028
github.com/gin-gonic/gin.(*Context).Next(...)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc000480f00)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc0003597d0 sp=0xc000359780 pc=0x97575a
github.com/gin-gonic/gin.(*Context).Next(...)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc000480f00)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xde fp=0xc000359980 sp=0xc0003597d0 pc=0x9748fe
github.com/gin-gonic/gin.(*Context).Next(...)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc00042e680, 0xc000480f00)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b fp=0xc000359b08 sp=0xc000359980 pc=0x9739bb
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc00042e680, {0x106aeca0?, 0xc00044a000}, 0xc000480500)
        /root/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd fp=0xc000359b48 sp=0xc000359b08 pc=0x97317d
net/http.serverHandler.ServeHTTP({0x106acfc0?}, {0x106aeca0?, 0xc00044a000?}, 0x6?)
        /usr/local/go/src/net/http/server.go:2938 +0x8e fp=0xc000359b78 sp=0xc000359b48 pc=0x6ce60e
net/http.(*conn).serve(0xc000174360, {0x106b0308, 0xc00049c690})
        /usr/local/go/src/net/http/server.go:2009 +0x5f4 fp=0xc000359fb8 sp=0xc000359b78 pc=0x6ca4f4
net/http.(*Server).Serve.func3()
        /usr/local/go/src/net/http/server.go:3086 +0x28 fp=0xc000359fe0 sp=0xc000359fb8 pc=0x6cee28
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000359fe8 sp=0xc000359fe0 pc=0x46e0a1
created by net/http.(*Server).Serve in goroutine 1
        /usr/local/go/src/net/http/server.go:3086 +0x5cb

ollama run llama2 gives this output:
Error: Post "http://0.0.0.0:11434/api/chat": EOF

I am assuming ollama serve does detect a GPU, based on this output:

2024/01/24 08:02:29 gpu.go:137: INFO CUDA Compute Capability detected: 7.0
2024/01/24 08:02:29 gpu.go:137: INFO CUDA Compute Capability detected: 7.0
2024/01/24 08:02:29 cpu_common.go:11: INFO CPU has AVX2
loading library /tmp/ollama2178682280/cuda_v11/libext_server.so
2024/01/24 08:02:29 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama2178682280/cuda_v11/libext_server.so
2024/01/24 08:02:29 dyn_ext_server.go:145: INFO Initializing llama server
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla V100-PCIE-16GB, compute capability 7.0, VMM: yes
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /home/fincopilot-tijori/.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))

nvidia-smi output:

Wed Jan 24 08:10:00 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.154.05             Driver Version: 535.154.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla V100-PCIE-16GB           Off | 00000001:00:00.0 Off |                  Off |
| N/A   30C    P0              24W / 250W |      0MiB / 16384MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                    
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

And, nvcc --version output:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:17:15_PST_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0

Setup:
Azure VM Standard NC6s v3 (6 vcpus, 112 GiB memory) with one V100 GPU running Ubuntu 20.04.

Worst part: it was running perfectly with version 0.1.20 last week. Now it breaks on both versions.


@dhiltgen commented on GitHub (Jan 24, 2024):

@bala-nullpointer I think this is probably a different issue. Looking upstream at llama.cpp, I see a recent issue tracking a similar problem: https://github.com/ggerganov/llama.cpp/issues/5102

Can you clarify whether you were hitting CUDA error 100 before picking up the latest pre-release build of 0.1.21?


@bala-nullpointer commented on GitHub (Jan 24, 2024):

@dhiltgen thanks for pointing it out. Will track that issue.

Nope, it was CUDA error 700, with this trace:

CUDA error 700 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:9177: an illegal memory access was encountered
current device: 0
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:9177: !"CUDA error"
SIGABRT: abort
PC=0x7f4a7f30f00b m=8 sigcode=18446744073709551610
signal arrived during cgo execution

Apologies if that caused any confusion.

With respect to my issue, I deleted that instance (with a V100 16GB GPU), spun up a new instance with an A100 40GB GPU on Google Cloud, and installed Nvidia drivers and Ollama from scratch (which I had also tried on the older instance). And now ollama serve and ollama run llama2 are working fine.

Here are outputs of nvidia-smi and nvcc --version.

Wed Jan 24 20:08:53 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-SXM4-40GB          On  | 00000000:00:04.0 Off |                    0 |
| N/A   30C    P0              52W / 400W |   5728MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A       622      C   /usr/local/bin/ollama                      5710MiB |
+---------------------------------------------------------------------------------------+
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:17:15_PST_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0

@dhiltgen commented on GitHub (Jan 27, 2024):

I'll keep this issue open for a while to see if anyone else is still able to repro on 0.1.22 or later builds. If not, I'll close it as fixed based on various improvements we've made to the way we link the libraries, and upstream fixes in llama.cpp.


Reference: github-starred/ollama#26876