[GH-ISSUE #6097] ollama bad response #50324

Closed
opened 2026-04-28 15:08:54 -05:00 by GiteaMirror · 7 comments

Originally created by @elifbykrbc on GitHub (Jul 31, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6097

What is the issue?

Hi, I'm working on an LLM project that uses Phi-3 Vision. I recently pushed the model into Ollama, but the responses I get are much worse than the ones from my Colab notebook. On Colab, Phi-3 Vision recognizes images well, but when I run it through Ollama and ask the same question with the same image, it hallucinates heavily. I don't know what could be causing the hallucination and bad responses. Please help me get through this problem.

OS

Windows

GPU

No response

CPU

No response

Ollama version

0.2.8

GiteaMirror added the bug label 2026-04-28 15:08:54 -05:00

@rick-github commented on GitHub (Jul 31, 2024):

Where did you get the ollama model from? The phi3 model in the ollama library is not vision-enabled. If that's what you are using, try [llava-phi3](https://ollama.com/library/llava-phi3) instead. If you are using a vision-enabled model with ollama, an example of the prompt you are sending and the server logs will aid in diagnosis.
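
For reference, a minimal sketch of capturing the exact prompt being sent, using Ollama's `/api/generate` endpoint; it assumes a local server on the default port, a vision-enabled model pulled as `llava-phi3`, and a placeholder image path `test.png`:

```python
# Minimal sketch: send one image + question to a local Ollama server so the
# exact failing prompt can be attached to the issue. Assumes Ollama is running
# on the default port and `ollama pull llava-phi3` has already been run; the
# model name and image path are placeholders.
import base64
import json
import urllib.request

with open("test.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "llava-phi3",   # placeholder: any vision-enabled model tag
    "prompt": "What is in this image?",
    "images": [image_b64],   # /api/generate accepts base64-encoded images
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```

Running the same prompt here and in the Colab notebook gives a concrete failing example to pair with the server logs.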


@rick-github commented on GitHub (Jul 31, 2024):

Other vision-enabled models can be found [here](https://ollama.com/search?q=&c=vision).


@elifbykrbc commented on GitHub (Aug 1, 2024):

It's a Phi-3 model fine-tuned by my colleagues and me. We run it on Colab and it responds well, but when we use the same model pushed into Ollama, the responses are bad. I wonder if it's Ollama or the hardware, or maybe something else I don't know about. Any ideas?


@elifbykrbc commented on GitHub (Aug 1, 2024):

@rick-github thank you for your advice, by the way, but it isn't working for me.


@rampageservices commented on GitHub (Aug 2, 2024):

With all due respect @elifbykrbc, I am going to provide some advice that will help you get help.
You need to provide specifics like the following:

  1. what is bad (in your opinion)
  2. what needs to be fixed
  3. what seems to be broken
  4. examples of different use cases

Just saying "it sucks" is not going to get many people to help you. Please do better. People are here to help those who provide as much information as possible. Many people don't have time to pry answers out of you. Please understand this reality.


@rick-github commented on GitHub (Aug 2, 2024):

I'm pretty sure the problem is the conversion: either the projector wasn't carried over, or it's incompatible with ollama/llama.cpp. The conversion method, an example of a failed prompt, and the server logs would aid in diagnosis.
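
As a rough check on whether the projector survived the conversion, one could inspect what Ollama reports about the model. A sketch, assuming the local `/api/show` endpoint and that vision-enabled models list `clip` among their families (the model tag `my-phi3-vision` is a placeholder):

```python
# Sketch: query a local Ollama server for model metadata and look for a
# vision (CLIP) projector. Assumes the default port; "my-phi3-vision" is a
# placeholder for the tag the fine-tune was pushed under.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"name": "my-phi3-vision"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    info = json.load(resp)

families = info.get("details", {}).get("families") or []
print("families:", families)
# Vision-enabled models (e.g. llava) typically include "clip" here; if it is
# absent, the projector most likely wasn't carried over in the conversion.
print("has vision projector:", "clip" in families)
```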


@jmorganca commented on GitHub (Sep 2, 2024):

Merging with https://github.com/ollama/ollama/issues/4560 – as mentioned by @rick-github, Phi-3 and Phi-3.5 vision aren't yet supported by Ollama, but we're hoping to support them soon, along with #4499 and others.
