LLaVA Custom Modelfile doesn't produce related answers with the image #811

Closed
opened 2025-11-11 14:31:43 -06:00 by GiteaMirror · 0 comments

Originally created by @kasimchooch on GitHub (May 5, 2024).

Bug Report

Description

Bug Summary:
A custom LLaVA Modelfile doesn't produce answers related to the uploaded image. Without a custom modelfile it works without any problem, but as soon as I set a system prompt the model starts giving random answers.

Steps to Reproduce:
1- Create a modelfile selecting llava:7b as the base model.
2- As the system prompt, use something distinctive; mine was "start your response with AHOY".
3- Open the custom model and type something without uploading an image. The response starts with AHOY, as expected.
4- Upload an image and say "analyze"; the response is totally random and irrelevant to the image.
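For reference, the underlying Ollama Modelfile for a setup like this would look roughly as follows (a minimal sketch; the exact base tag and system text are the ones from my repro, everything else is Ollama's standard Modelfile syntax):

```
FROM llava:7b
SYSTEM """Start your response with AHOY."""
```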

Expected Behavior:
LLaVA should apply the system prompt when analyzing images; in this case the image description should start with "AHOY".

Actual Behavior:
The image description is totally random and unrelated to the uploaded image.
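To help narrow down whether the problem is in Open WebUI or in Ollama itself, here is a rough diagnostic sketch that calls Ollama's /api/generate endpoint directly with both a system prompt and an image. It is untested against a live server; the model name, image path, and default port 11434 are assumptions:

```python
import base64
import json
import urllib.request


def build_payload(model, system, prompt, image_bytes):
    """Build a request body for Ollama's /api/generate endpoint.

    Images are passed as a list of base64-encoded strings; the "system"
    field should take effect alongside the prompt.
    """
    return {
        "model": model,
        "system": system,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }


def generate(payload, host="http://localhost:11434"):
    # Requires a running Ollama instance; 11434 is Ollama's default port.
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    with open("test.jpg", "rb") as f:  # hypothetical image path
        payload = build_payload(
            "llava:7b",
            "Start your response with AHOY.",
            "Describe this image.",
            f.read(),
        )
    print(generate(payload))
```

If a direct call like this also ignores the system prompt when an image is attached, the bug is more likely in Ollama/LLaVA than in Open WebUI.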

Environment

  • Open WebUI Version: v0.1.123

  • Ollama (if applicable): 0.1.33

  • Operating System: Windows 11

  • Browser (if applicable): Chrome 124.0.6367.119

Reproduction Details

Confirmation:

  • [x] I have read and followed all the instructions provided in the README.md.
  • [x] I am on the latest version of both Open WebUI and Ollama.
  • [ ] I have included the browser console logs.
  • [ ] I have included the Docker container logs.

Screenshots

chrome_ffleaOLJaW: https://github.com/open-webui/open-webui/assets/111050169/f2f3557a-2687-4cc0-ab7a-2c6797c90919
chrome_8peySd3gow: https://github.com/open-webui/open-webui/assets/111050169/32cc2392-0d69-4a16-886c-2be9bee6c871
chrome_61xmBjpKSz: https://github.com/open-webui/open-webui/assets/111050169/4c523daf-1bef-415d-ace0-0c758e7b2ad2

Installation Method

I installed Open WebUI using Pinokio by CocktailPeanut.


Reference: github-starred/open-webui#811