[GH-ISSUE #7197] llama runner process no longer running: -1 #4572

Closed
opened 2026-04-12 15:30:16 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @Dhruv-1212 on GitHub (Oct 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7197

What is the issue?

I am trying to run llama3 models, but I get this error with both the pip installation and the Linux installation on a server with a Tesla T4 GPU. The same setup works fine on my local machine.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.3

GiteaMirror added the bug label 2026-04-12 15:30:16 -05:00
Author
Owner

@rick-github commented on GitHub (Oct 14, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.
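On a systemd-based Linux install, the linked troubleshooting guide retrieves the server logs via journalctl. A minimal sketch (assuming the default `ollama` service name; falls back gracefully where journalctl is unavailable):

```shell
# Show the last 100 lines of the Ollama server log on systemd hosts.
# On non-systemd systems this prints a note instead of failing.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u ollama --no-pager -n 100
else
    echo "journalctl not available; check the terminal where 'ollama serve' was started"
fi
```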

Author
Owner

@Dhruv-1212 commented on GitHub (Oct 15, 2024):

That worked, thanks. Port 11434 was already occupied for some reason.
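For anyone hitting the same symptom, a quick way to see whether something is already bound to Ollama's default port 11434 is a listening-socket check. A minimal sketch (assuming `ss` from iproute2 is installed, which is typical on modern Linux servers):

```shell
# List whatever is listening on TCP port 11434 (Ollama's default).
# -l listening, -t TCP, -n numeric, -p show owning process (needs root for other users' processes).
ss -ltnp 2>/dev/null | grep 11434 || echo "port 11434 is free"
```

If a stale or duplicate process shows up, stopping it (or pointing Ollama at another port via the `OLLAMA_HOST` environment variable) resolves the conflict.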


Reference: github-starred/ollama#4572