[GH-ISSUE #6825] LLaVA:13B Model Outputting ############### After Period of Inactivity #50824

Closed
opened 2026-04-28 17:12:56 -05:00 by GiteaMirror · 1 comment

Originally created by @Atharvaaat on GitHub (Sep 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6825

What is the issue?

Description:

I encountered an issue with the Ollama LLaVA:13B model where, after a period of inactivity, the output was consistently ###############. Restarting ollama.service resolved the issue temporarily, but the root cause remains unclear.


Environment:

  • Model: Ollama LLaVA:13B (llava:13b)
  • System Specs:
    • OS: Debian (Google Cloud VM)
    • GPU: NVIDIA L4
    • Driver/CUDA: Latest drivers compatible with NVIDIA L4
    • Service: ollama.service (systemd)
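For context, the relevant versions and service state can be confirmed with standard tooling; a minimal check (not part of the original report):

    nvidia-smi                # GPU visibility plus driver/CUDA versions on the L4
    ollama --version          # Ollama build (0.3.10 here)
    systemctl status ollama   # confirms the systemd service is active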

Issue Details:

  1. I was testing the Ollama LLaVA:13B model without any issues during an extended session.
  2. After the model had been idle for a period of time, attempting to resume inference produced ############### as the only output.
  3. Restarting ollama.service temporarily resolved the issue and restored normal functionality (the restart command is sketched below).
  4. The root cause of the incorrect output (###############) is unknown; it had not been encountered previously during continuous usage.
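The restart in step 3 is presumably the standard systemd restart; a minimal sketch, assuming the default Linux install where Ollama runs as a systemd unit:

    sudo systemctl restart ollama
    systemctl status ollama   # verify the service came back up cleanly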

Steps to Reproduce:

  1. Start the Ollama LLaVA:13B model on a Google Cloud VM with an NVIDIA L4 GPU (a shell sketch of these steps follows the list).
  2. Perform inference operations successfully.
  3. Allow a period of inactivity (length uncertain; possibly related to session timeout or resource deallocation).
  4. Resume inference, resulting in ############### as the output.
  5. Restart ollama.service to restore normal function.
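A rough shell transcript of those steps; the image path and idle duration are placeholders, since the exact inactivity period that triggers the bug is unknown:

    # step 1: ollama.service is already running on the VM
    ollama run llava:13b "Describe this image: ./photo.png"   # step 2: normal output
    sleep 3600                                                # step 3: idle period (trigger length uncertain)
    ollama run llava:13b "Describe this image: ./photo.png"   # step 4: prints ###############
    sudo systemctl restart ollama                             # step 5: normal output restored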

Expected Behavior:

The model should resume normal operation after a period of inactivity, without needing to restart the service.


Observed Behavior:

After resuming inference post-inactivity, the model consistently output ############### until the service was restarted.


Additional Information:

  • Logs and model outputs prior to the issue were normal.
  • This behavior suggests a potential issue with resource management, memory, or session-state handling in the model; one way to probe this is sketched below.
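One way to probe that theory (our suggestion, not something the reporter tried): Ollama unloads idle models after the keep_alive window (5 minutes by default), so pinning the model in memory and checking whether the corruption still appears after idling would indicate whether the unload/reload cycle, rather than idle time itself, is the trigger. keep_alive is a documented request parameter; the prompt here is a placeholder:

    # Pin llava:13b in memory indefinitely (keep_alive: -1), then retry after an idle period
    curl http://localhost:11434/api/generate -d '{
      "model": "llava:13b",
      "prompt": "Say hello.",
      "keep_alive": -1
    }'

Setting OLLAMA_KEEP_ALIVE=-1 in the service environment achieves the same thing globally.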

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.3.10

GiteaMirror added the bug label 2026-04-28 17:12:56 -05:00

@rick-github commented on GitHub (Sep 16, 2024):

Server logs may help in debugging (see https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).
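On a systemd install such as this Debian VM, those server logs can be pulled with journalctl, per the linked troubleshooting doc:

    journalctl -u ollama --no-pager   # full service log
    journalctl -u ollama -e           # jump to the tail, around the failure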
