[GH-ISSUE #7049] ollama does not detect Quadro RTX 4000 - cuda driver library failed to get device context 801 #30235

Open
opened 2026-04-22 09:45:38 -05:00 by GiteaMirror · 11 comments

Originally created by @mfzhsn on GitHub (Sep 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7049

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi All,

I installed Ollama both directly on the machine and in Docker, with the same behaviour in both cases: the GPU is not detected. LM Studio on the same machine picks up the GPU without any issues.

root@d50a3f8d8474:/# ollama run phi3.5:3.8b-mini-instruct-q2_K ""
root@d50a3f8d8474:/# ollama ps
NAME                              ID              SIZE      PROCESSOR    UNTIL
phi3.5:3.8b-mini-instruct-q2_K    45b8dc82a846    5.3 GB    100% CPU     4 minutes from now

Installation

[root@ai ~]# curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
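
When installed this way, Ollama runs as a systemd service under the dedicated ollama user, so GPU discovery happens in the service's environment rather than an interactive shell. As an alternative to stopping the service and running ollama serve by hand (as done below), debug logging can be enabled on the service itself; a minimal sketch using standard systemd tooling:

# open an override file for the service and add OLLAMA_DEBUG under [Service]
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl restart ollama
# follow the GPU discovery output from the journal
journalctl -u ollama -f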

Logs from package installation

[root@ai ~]# OLLAMA_DEBUG=1 ollama serve
Error: listen tcp 127.0.0.1:11434: bind: address already in use
[root@ai ~]# systemctl stop ollama
[root@ai ~]# OLLAMA_DEBUG=1 ollama serve
2024/09/29 03:47:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-09-29T03:47:20.643-05:00 level=INFO source=images.go:753 msg="total blobs: 10"
time=2024-09-29T03:47:20.672-05:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-29T03:47:20.672-05:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-09-29T03:47:20.673-05:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama3184037398/runners
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libggml.so.gz
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/libllama.so.gz
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu payload=linux/amd64/cpu/ollama_llama_server.gz
time=2024-09-29T03:47:20.673-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libggml.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/libllama.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx payload=linux/amd64/cpu_avx/ollama_llama_server.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libggml.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/libllama.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cpu_avx2 payload=linux/amd64/cpu_avx2/ollama_llama_server.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libggml.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/libllama.so.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v11 payload=linux/amd64/cuda_v11/ollama_llama_server.gz
time=2024-09-29T03:47:20.674-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libggml.so.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/libllama.so.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=cuda_v12 payload=linux/amd64/cuda_v12/ollama_llama_server.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libggml.so.gz
time=2024-09-29T03:47:20.675-05:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/libllama.so.gz
time=2024-09-29T03:47:20.676-05:00 level=DEBUG source=common.go:168 msg=extracting runner=rocm_v60102 payload=linux/amd64/rocm_v60102/ollama_llama_server.gz
time=2024-09-29T03:47:32.712-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cpu/ollama_llama_server
time=2024-09-29T03:47:32.712-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cpu_avx/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cpu_avx2/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cuda_v11/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/cuda_v12/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:294 msg="availableServers : found" file=/tmp/ollama3184037398/runners/rocm_v60102/ollama_llama_server
time=2024-09-29T03:47:32.713-05:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-09-29T03:47:32.713-05:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
time=2024-09-29T03:47:32.713-05:00 level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/local/cuda/lib64/libcuda.so* /root/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-09-29T03:47:32.715-05:00 level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[/usr/lib64/libcuda.so.560.35.03]
CUDA driver version: 12.6
time=2024-09-29T03:47:32.878-05:00 level=DEBUG source=gpu.go:118 msg="detected GPUs" count=1 library=/usr/lib64/libcuda.so.560.35.03
time=2024-09-29T03:47:32.907-05:00 level=INFO source=gpu.go:252 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
time=2024-09-29T03:47:32.907-05:00 level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2024-09-29T03:47:32.907-05:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
releasing cuda driver library
time=2024-09-29T03:47:32.907-05:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="251.1 GiB" available="240.5 GiB"
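
The 801 is returned while Ollama queries device memory through libcuda, before any model is loaded. A couple of environment checks that have helped narrow down this class of failure elsewhere (a sketch, not a confirmed fix): make sure all the NVIDIA device nodes exist and that the user the server runs as can open them.

ls -l /dev/nvidia*           # expect nvidia0, nvidiactl, nvidia-uvm, nvidia-uvm-tools
id ollama                    # the installer should have added ollama to the video/render groups
sudo -u ollama nvidia-smi    # does the GPU show up when queried as the service user?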

Logs from Docker installation

[root@ai ~]# docker logs -f ollama
2024/09/30 15:58:28 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-09-30T15:58:28.508Z level=INFO source=images.go:753 msg="total blobs: 6"
time=2024-09-30T15:58:28.509Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-30T15:58:28.509Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.12)"
time=2024-09-30T15:58:28.510Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"
time=2024-09-30T15:58:28.510Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-09-30T15:58:28.670Z level=INFO source=gpu.go:252 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
time=2024-09-30T15:58:28.670Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-09-30T15:58:28.670Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="251.1 GiB" available="240.4 GiB"
[GIN] 2024/09/30 - 15:59:07 | 200 |       94.19µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/09/30 - 15:59:07 | 200 |   10.954794ms |       127.0.0.1 | POST     "/api/show"
time=2024-09-30T15:59:07.334Z level=INFO source=server.go:103 msg="system memory" total="251.1 GiB" free="240.5 GiB" free_swap="4.0 GiB"
time=2024-09-30T15:59:07.334Z level=INFO source=memory.go:326 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[240.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.3 GiB" memory.required.partial="0 B" memory.required.kv="4.0 GiB" memory.required.allocations="[8.3 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="7.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="681.0 MiB"
time=2024-09-30T15:59:07.338Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --numa distribute --parallel 4 --port 39753"
time=2024-09-30T15:59:07.339Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-09-30T15:59:07.339Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-09-30T15:59:07.339Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
INFO [main] build info | build=10 commit="070c75f" tid="140389372093376" timestamp=1727711947
INFO [main] system info | n_threads=20 n_threads_batch=20 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140389372093376" timestamp=1727711947 total_threads=40
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="39" port="39753" tid="140389372093376" timestamp=1727711947
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
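
For the Docker case, the container also has to be started with GPU access through the NVIDIA container runtime; without --gpus the server inside the container never sees a device and falls back to CPU exactly as above. A quick sanity check and the usual run command (a sketch, assuming the container toolkit is already configured for Docker):

# can any container see the GPU at all?
docker run --rm --gpus=all ubuntu nvidia-smi
# start the ollama container with GPU access
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama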

The NVIDIA tools themselves all produce the expected output:

[root@ai ~]# nvidia-smi
Sat Sep 28 01:07:12 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Quadro RTX 4000                Off |   00000000:37:00.0 Off |                  N/A |
| 30%   34C    P8              9W /  125W |       1MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
[root@ai ~]# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:10:22_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0
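
Since the driver library is clearly present (discovery found /usr/lib64/libcuda.so.560.35.03), one way to narrow things down is to reproduce the same driver-API calls outside Ollama. The sketch below mimics the dlopen-based probing: if cuCtxCreate returns 801 here too, the problem sits between the driver and the process environment rather than anything Ollama-specific. (For reference, 801 appears to be CUDA_ERROR_NOT_SUPPORTED in cuda.h.)

cat > /tmp/cucheck.c <<'EOF'
#include <stdio.h>
#include <dlfcn.h>

/* minimal driver-API probe via dlopen, roughly mirroring how ollama discovers GPUs */
typedef int (*cuInit_t)(unsigned int);
typedef int (*cuDeviceGet_t)(int *, int);
typedef int (*cuCtxCreate_t)(void **, unsigned int, int);

int main(void) {
    void *h = dlopen("libcuda.so.1", RTLD_NOW);
    if (!h) { fprintf(stderr, "dlopen failed: %s\n", dlerror()); return 1; }
    cuInit_t      cu_init   = (cuInit_t)dlsym(h, "cuInit");
    cuDeviceGet_t cu_devget = (cuDeviceGet_t)dlsym(h, "cuDeviceGet");
    cuCtxCreate_t cu_ctx    = (cuCtxCreate_t)dlsym(h, "cuCtxCreate_v2");
    int dev = 0; void *ctx = NULL;
    printf("cuInit:      %d\n", cu_init(0));            /* 0 == CUDA_SUCCESS */
    printf("cuDeviceGet: %d\n", cu_devget(&dev, 0));
    printf("cuCtxCreate: %d\n", cu_ctx(&ctx, 0, dev));  /* 801 would reproduce the ollama error */
    return 0;
}
EOF
cc /tmp/cucheck.c -o /tmp/cucheck -ldl && /tmp/cucheck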

OS
Linux Rocky 9.4

[root@ai ~]# uname -r
5.14.0-427.37.1.el9_4.x86_64

logs

[root@ai ~]# sudo dmesg | grep -i nvidia
[    1.704573] Loaded X.509 cert 'Rocky Enterprise Software Foundation: Nvidia GPU OOT Signing 101: 816ba9c770e6960cefe378020865d4ebbc352a7d'
[    6.270595] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input6
[    6.270694] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input7
[    6.270796] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input8
[    6.270843] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:36/0000:36:00.0/0000:37:00.1/sound/card0/input9
[    7.812685] nvidia: loading out-of-tree module taints kernel.
[    7.812696] nvidia: module license 'NVIDIA' taints kernel.
[    7.836076] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[    7.950180] nvidia-nvlink: Nvlink Core is being initialized, major device number 510
[    7.951760] nvidia 0000:37:00.0: enabling device (0140 -> 0143)
[    7.951842] nvidia 0000:37:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=none
[    8.001592] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  560.35.03  Fri Aug 16 21:39:15 UTC 2024
[    8.115320] nvidia_uvm: module uses symbols from proprietary module nvidia, inheriting taint.
[    8.252789] nvidia-uvm: Loaded the UVM driver, major device number 508.
[    8.307886] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  560.35.03  Fri Aug 16 21:21:48 UTC 2024
[    8.323807] [drm] [nvidia-drm] [GPU ID 0x00003700] Loading driver
[    9.814993] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:37:00.0 on minor 1
[    9.815921] nvidia 0000:37:00.0: [drm] Cannot find any crtc or sizes

additional logs

[root@ai ~]# sudo dmesg | grep -i nvrm
[    8.001592] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  560.35.03  Fri Aug 16 21:39:15 UTC 2024

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.12

GiteaMirror added the linux, nvidia, needs more info, bug labels 2026-04-22 09:45:38 -05:00

@dhiltgen commented on GitHub (Sep 30, 2024):

It looks like the failure preventing us from gathering GPU information is cuda driver library failed to get device context 801.

We're currently tracking this via issue #6364; however, your description is a bit different in that you seem to be seeing this consistently, not intermittently.


@mfzhsn commented on GitHub (Sep 30, 2024):

Yes, my case is pretty consistent with both the 550 and 560 driver versions. The GPU never came up with either one, and only Ollama is affected.


@harshsavasil commented on GitHub (Oct 10, 2024):

I'm facing the same issue with RTX 4080. @mfzhsn did you find a fix for this?


@dhiltgen commented on GitHub (Nov 7, 2024):

I've posted a new PR documenting a workaround some users are seeing success with for a slightly different failure mode, but it might be helpful in these cases as well. If you are experiencing the sporadic 801, please give it a try and let us know if it resolves the problem.

https://github.com/ollama/ollama/pull/7519
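
For anyone trying this from the thread: the workaround documented there is, to my understanding, a reload of the nvidia_uvm kernel module followed by a restart of the server, roughly as below. A reboot achieves the same thing if the module cannot be unloaded.

sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm   # reload the UVM module
sudo systemctl restart ollama                       # then let ollama re-run GPU discovery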


@mfzhsn commented on GitHub (Nov 16, 2024):

> I'm facing the same issue with RTX 4080. @mfzhsn did you find a fix for this?

I am still facing the same issue :( @harshsavasil


@harshsavasil commented on GitHub (Nov 16, 2024):

I was able to fix this by switching to microk8s. Installing the GPU operator was very easy with it. @mfzhsn
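
For reference, the microk8s route is roughly the following (a sketch of the usual steps; not verified on Rocky 9.4):

sudo snap install microk8s --classic
sudo microk8s enable gpu           # deploys the NVIDIA GPU operator into the cluster
sudo microk8s kubectl get pods -A  # wait for the gpu-operator pods to become Ready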


@mfzhsn commented on GitHub (Nov 16, 2024):

@harshsavasil Could you please provide the steps or any links?
I tried Docker and the regular installation, but it is still failing.


@harshsavasil commented on GitHub (Nov 16, 2024):

> @harshsavasil Could you please provide the steps or any links?
> I tried Docker and the regular installation, but it is still failing.

@mfzhsn do you have nvidia container toolkit installed? Also, what is your docker version?


@mfzhsn commented on GitHub (Nov 16, 2024):

@harshsavasil

[root@ai ~]# docker version
Client: Docker Engine - Community
 Version:           27.3.1
 API version:       1.47
 Go version:        go1.22.7
 Git commit:        ce12230
 Built:             Fri Sep 20 11:42:48 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.3.1
  API version:      1.47 (minimum version 1.24)
  Go version:       go1.22.7
  Git commit:       41ca978
  Built:            Fri Sep 20 11:41:09 2024
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.7.22
  GitCommit:        7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
 nvidia:
  Version:          1.1.14
  GitCommit:        v1.1.14-0-g2c9f560
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Yes, I have installed the toolkit.
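
Worth double-checking that the toolkit is actually registered with Docker as a runtime, not just installed as a package, and that the ollama container was created with --gpus=all (an existing container has to be recreated, not just restarted). A quick sketch:

sudo nvidia-ctk runtime configure --runtime=docker   # writes the nvidia runtime into /etc/docker/daemon.json
sudo systemctl restart docker
docker info | grep -i nvidia                         # should list nvidia among the runtimes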


@mfzhsn commented on GitHub (Nov 25, 2024):

> I've posted a new PR documenting a workaround some users are seeing success with for a slightly different failure mode, but it might be helpful in these cases as well. If you are experiencing the sporadic 801, please give it a try and let us know if it resolves the problem.
>
> #7519

Hi @dhiltgen, the workaround does not fix the issue. Any leads?


@sandwh1ched commented on GitHub (Jun 12, 2025):

I also can't get it to detect my RTX 4000. I'm not really sure why; I'm on Arch Linux and I tried installing both the extra/cuda and extra/nvidia-container-toolkit packages, but it resulted in the same error.
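
One more thing worth ruling out when this appears right after a driver or kernel update: the loaded kernel module and the user-space libcuda getting out of sync. Comparing the two versions is quick (paths vary by distro; the discovery log earlier in this issue shows which libcuda file gets picked up). If they differ, a reboot or a reload of the nvidia modules usually clears it.

cat /proc/driver/nvidia/version   # version of the nvidia kernel module currently loaded
ls /usr/lib*/libcuda.so.*         # version of the user-space driver library (the filename carries it)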


Reference: github-starred/ollama#30235