[GH-ISSUE #4429] GPU error #49278

Closed
opened 2026-04-28 11:04:47 -05:00 by GiteaMirror · 2 comments

Originally created by @A-Akhil on GitHub (May 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4429

### What is the issue?

```
> ollama serve
2024/05/14 21:38:47 routes.go:1006: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-14T21:38:47.161+05:30 level=INFO source=images.go:704 msg="total blobs: 23"
time=2024-05-14T21:38:47.162+05:30 level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-14T21:38:47.163+05:30 level=INFO source=routes.go:1052 msg="Listening on 127.0.0.1:11434 (version 0.1.37)"
time=2024-05-14T21:38:47.163+05:30 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2326455941/runners
time=2024-05-14T21:38:49.646+05:30 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 rocm_v60002 cpu cpu_avx cpu_avx2]"
time=2024-05-14T21:38:50.036+05:30 level=INFO source=types.go:71 msg="inference compute" id=GPU-8881b192-fe71-d28f-8272-04bd6758891f library=cuda compute=7.5 driver=12.4 name="NVIDIA GeForce GTX 1650 Ti" total="3.6 GiB" available="3.6 GiB"
```

Before, I used to get this error instead:

```
> ollama serve
Error: listen tcp 127.0.0.1:11434: bind: address already in use
```
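That bind error means something (often the Ollama systemd service itself) is already listening on port 11434. A quick way to check, assuming `ss` from iproute2 is available, is:

```shell
# Show the process that currently owns Ollama's default port (11434):
sudo ss -ltnp 'sport = :11434'

# If the systemd service holds the port, don't also run `ollama serve`
# by hand; stop the service first, or use the service exclusively:
sudo systemctl stop ollama
```

Running `ollama serve` manually while the service is active is a common cause of this message.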

Also, my Docker container cannot connect to Ollama at http://172.17.0.1:11434.
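The log above shows `Listening on 127.0.0.1:11434`, i.e. a loopback-only bind, which would explain why the Docker bridge address 172.17.0.1 is unreachable from containers. A hedged check, assuming the default `docker0` bridge and that `OLLAMA_HOST=0.0.0.0` has taken effect after a restart:

```shell
# After setting OLLAMA_HOST=0.0.0.0 and restarting the service,
# the version endpoint should answer on the bridge address:
sudo systemctl restart ollama
curl http://172.17.0.1:11434/api/version

# From inside a container, host-gateway maps a name to the host:
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl http://host.docker.internal:11434/api/version
```

If the first `curl` still fails, the environment override is not being picked up by the unit that is actually running (e.g. edits went to the wrong unit file, or `systemctl daemon-reload` was skipped).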

This is my systemd unit:

```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/akhil/Downloads/temp/prism/RAG/venv/bin:/home/akhil/.nvm/versions/node/v21.6.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin:/home/akhil/.local/bin:/usr/bin:/home/akhil/.fzf/bin:/usr/local/MATLAB/R2024a/bin"
Environment="OLLAMA_HOST=0.0.0.0"

[Install]
WantedBy=default.target
```

Why is the PATH so long? Is it okay to remove all the other paths?
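The long value was most likely copied from an interactive shell (note the venv, nvm, fzf, and MATLAB entries), which the service does not need. A minimal sketch of a systemd drop-in that replaces it, keeping `/opt/cuda/bin` only on the assumption that CUDA tools must be on the service's PATH (the drop-in path and the exact directory set are illustrative, not prescribed by Ollama):

```ini
# /etc/systemd/system/ollama.service.d/override.conf (hypothetical drop-in)
[Service]
# A minimal PATH for the service; drop /opt/cuda/bin if unneeded.
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/cuda/bin"
```

Apply it with `sudo systemctl daemon-reload && sudo systemctl restart ollama`, then confirm the GPU is still detected in the startup log.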

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.1.37

GiteaMirror added the bug label 2026-04-28 11:04:47 -05:00

@sowon0 commented on GitHub (Jul 5, 2024):

Did you solve this problem? I'm also struggling with this problem....


@MajorWookie commented on GitHub (Sep 13, 2024):

I have this issue..

Reference: github-starred/ollama#49278