[GH-ISSUE #7512] Snap 0.3.13 missing libcudart.so.12 #4779

Closed
opened 2026-04-12 15:43:32 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @edmcman on GitHub (Nov 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7512

What is the issue?

I am trying ollama for the first time. Since there was a snap available, I tried that first. I can download llama 3.2, but when I attempt to run it, I get:

(venv) ed@banana ~/P/re-copilot (dev)> ollama run llama3.2
Error: llama runner process has terminated: exit status 127

Here are the logs from snap logs ollama:

2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.915-05:00 level=INFO source=server.go:399 msg="starting llama server" cmd="/tmp/ollama2989640063/runners/cuda_v12/ollama_llama_server --model /var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --parallel 4 --port 34001"
2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.915-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.915-05:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
2024-11-05T10:38:44-05:00 ollama.listener[631587]: time=2024-11-05T10:38:44.916-05:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server error"
2024-11-05T10:38:44-05:00 ollama.listener[631587]: /tmp/ollama2989640063/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: libcudart.so.12: cannot open shared object file: No such file or directory
2024-11-05T10:38:45-05:00 ollama.listener[631587]: time=2024-11-05T10:38:45.166-05:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
2024-11-05T10:38:45-05:00 ollama.listener[631587]: [GIN] 2024/11/05 - 10:38:45 | 500 |  400.927983ms |       127.0.0.1 | POST     "/api/generate"
2024-11-05T10:38:50-05:00 ollama.listener[631587]: time=2024-11-05T10:38:50.312-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.145569241 model=/var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
2024-11-05T10:38:50-05:00 ollama.listener[631587]: time=2024-11-05T10:38:50.561-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.39517205 model=/var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
2024-11-05T10:38:50-05:00 ollama.listener[631587]: time=2024-11-05T10:38:50.811-05:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.644657486 model=/var/snap/ollama/common/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff

It seems like the problem is that libcudart.so.12 is missing. Should it be getting that from the host, or the snap?
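A quick way to confirm which shared libraries the runner fails to resolve is to run ldd against the extracted binary (a sketch; the /tmp path is taken from the log above and changes on every start, and under snap confinement it may only be visible from a shell inside the snap, e.g. via snap run --shell ollama):

# list the runner's unresolved shared-library dependencies
ldd /tmp/ollama2989640063/runners/cuda_v12/ollama_llama_server | fgrep 'not found'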

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.13 snap

GiteaMirror added the install, linux, bug labels 2026-04-12 15:43:32 -05:00
Author
Owner

@edmcman commented on GitHub (Nov 5, 2024):

I do seem to have libcudart.so.12 on my host:

ed@banana ~/P/re-copilot (dev)> sudo ldconfig -p | fgrep cudart
        libcudart.so.12 (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.12
        libcudart.so (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so
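Under strict snap confinement the host's ldconfig cache is generally not what the runner sees, so a more telling check is whether the library is shipped inside the snap itself (a sketch using the standard snap directory layout; an empty result would suggest the snap does not bundle the CUDA runtime):

# look for a bundled CUDA runtime inside the snap's own directories
find /snap/ollama/current /var/snap/ollama -name 'libcudart.so*' 2>/dev/null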
Author
Owner

@rick-github commented on GitHub (Nov 5, 2024):

It's normally bundled with the application to prevent library version skew. Snap is not the usual way to install ollama on Linux; you may have more success with the canonical way: curl -fsSL https://ollama.com/install.sh | sh.

Author
Owner

@dhiltgen commented on GitHub (Nov 5, 2024):

The snap packaging isn't authored or maintained by the Ollama maintainers, so unfortunately we won't be able to fix this. Hopefully the package identifies who created it and you can file an issue with them to update their packaging. Until then, as Rick suggested, you can uninstall the snap package and use our official curl install script.

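For reference, switching over looks roughly like this (a sketch; snap remove and the install script are standard, and journalctl -u ollama assumes the systemd service that the official install script sets up):

sudo snap remove ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.2
# if it still fails, the service logs show whether the CUDA libraries were found
journalctl -u ollama --no-pager | grep -i cuda
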
Author
Owner

@edmcman commented on GitHub (Nov 5, 2024):

Thanks for clarifying. The snap's page linked to ollama's issue page, so I thought that meant it was an official project.

Author
Owner

@mz2 commented on GitHub (Nov 5, 2024):

Sorry about that, I'll correct the issue link to point at https://github.com/mz2/ollama-snap …
