[GH-ISSUE #2904] cuMemCreate with gpu nvidia m2000 #1779

Closed
opened 2026-04-12 11:48:10 -05:00 by GiteaMirror · 0 comments

Originally created by @aymengazzah on GitHub (Mar 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2904

"Hi, is anyone else experiencing this error with the GPU? The GPU successfully passes through for video transcoding in another container app (Emby/Plex), but it shows an error for all ollama models."

Error library

level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama4235720163/cuda_v11/libext_server.so Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama4235720163/cuda_v11/libext_server.so: undefined symbol: cuMemCreate"
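The missing symbol points at the host driver rather than the container setup: `cuMemCreate` belongs to the CUDA virtual memory management driver API, which `libcuda.so` itself must export, and older driver branches predate it. A minimal sketch (a hypothetical diagnostic, not part of ollama) that probes whether the installed driver library exposes the symbol:

```python
import ctypes

def driver_has_symbol(name: str = "cuMemCreate") -> bool:
    """Return True if the installed NVIDIA driver library exports `name`.

    cuMemCreate is part of the CUDA virtual memory management API, so a
    driver branch as old as the 418 series in this report will not export it.
    """
    try:
        # libcuda.so.1 is installed by the NVIDIA driver, not the CUDA toolkit
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return False  # no NVIDIA driver library present at all
    # ctypes resolves symbols lazily; a missing one raises AttributeError
    return hasattr(libcuda, name)
```

On the reporter's host this should return False, matching the warning above; on a newer driver branch it should return True, letting the `cuda_v11` library link.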

docker start

 level=INFO source=images.go:710 msg="total blobs: 12"
 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
 level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.27)"
 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu cpu_avx cuda_v11 rocm_v6 cpu_avx2 rocm_v5]"
 level=INFO source=gpu.go:94 msg="Detecting GPU type"
 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.418.226.00]"
 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.2"

Start request

level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.2"
level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 5.2"
level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama4235720163/cuda_v11/libext_server.so  Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama4235720163/cuda_v11/libext_server.so: undefined symbol: cuMemCreate"
level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama4235720163/cpu_avx2/libext_server.so"
level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256:2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))

Check nvidia

root@srv-01:~$ sudo docker exec -it ollama bash
root@bc5e85c49508:/# nvidia-smi
Sun Mar  3 23:07:23 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.226.00   Driver Version: 418.226.00   CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro M2000        On   | 00000000:03:00.0  On |                  N/A |
| 56%   31C    P8     8W /  75W |      1MiB /  4040MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
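The `nvidia-smi` header above explains the failure: driver 418.226.00 supports at most CUDA 10.1, while ollama's `cuda_v11` backend is built against CUDA 11, whose Linux builds require a 450-series or newer driver. A minimal sketch of that compatibility check (the table values are assumptions taken from NVIDIA's published driver minimums, not from ollama):

```python
# Minimum Linux driver version per CUDA toolkit release, as (major, minor).
# Assumed values from NVIDIA's compatibility table, not from the ollama logs.
MIN_DRIVER = {
    "10.1": (418, 39),
    "10.2": (440, 33),
    "11.x": (450, 36),
}

def driver_supports(driver: str, cuda: str) -> bool:
    """True if a driver version string like '418.226.00' meets the minimum."""
    major, minor = (int(part) for part in driver.split(".")[:2])
    return (major, minor) >= MIN_DRIVER[cuda]

print(driver_supports("418.226.00", "10.1"))  # True: CUDA 10.1 works
print(driver_supports("418.226.00", "11.x"))  # False: cuda_v11 cannot load
```

So the warning is expected with this driver, and ollama falls back to CPU, which matches the `Loading Dynamic llm server: .../cpu_avx2/...` line in the logs. Upgrading the host to a 450-series or newer driver should let the `cuda_v11` backend load; the Quadro M2000's compute capability 5.2 (Maxwell) is still supported by CUDA 11.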

environment

os: debian 10.13
docker: 25.0.3
cpu: E5-2698 v4
gpu: nvidia quadro m2000 4GB

compose.yml

version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    user: "0:0"
    userns_mode: host
    volumes:
      - ./:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: ${OLLAMA_GPU_DRIVER-nvidia}
              count: ${OLLAMA_GPU_COUNT-1}
              capabilities:
                - gpu


Reference: github-starred/ollama#1779