[GH-ISSUE #9962] Llama 3.1 Hallucination – Incorrect Information About Deepika Padukone #53035

Closed
opened 2026-04-29 01:44:55 -05:00 by GiteaMirror · 4 comments

Originally created by @sakshiselmokar on GitHub (Mar 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9962

What is the issue?

Issue Summary

I was testing Llama 3.1 through Ollama (without fine-tuning) and found a factual inaccuracy.
The model incorrectly states that Deepika Padukone divorced Ranveer Singh (cricketer) and later married Ranveer Singh (actor).

Steps to Reproduce

  1. Run the model using Ollama.
  2. Input the following prompt:
    "Who is Deepika Padukone married to?"
  3. Observe the incorrect response.
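The steps above can also be reproduced programmatically against a locally running Ollama server. This is a minimal sketch, assuming the default endpoint `http://localhost:11434/api/generate` and the `llama3.1` model tag; adjust both to match your setup.

```python
import json
import urllib.request

# Default local Ollama generate endpoint (assumption: server on standard port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3.1"):
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="llama3.1"):
    """Send the prompt to a local Ollama server and return the model's reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running server with the model pulled):
#   print(ask("Who is Deepika Padukone married to?"))
```

Running the same prompt a few times helps confirm whether the hallucination is consistent or sampling-dependent.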

Expected Behavior

The model should correctly state that Deepika Padukone is married to Ranveer Singh (actor) since 2018, without any mention of a divorce or a cricketer.

Actual Behavior

The model states that she divorced "Ranveer Singh (cricketer)" and later married "Ranveer Singh (actor)," which is factually incorrect.

Model Version & Setup

  • Model: Llama 3.1
  • Platform: Ollama
  • Fine-tuning: No fine-tuning applied
  • Prompt: "Who is Deepika Padukone married to?"

Additional Context

This is a clear case of hallucination where the model is generating misinformation about a public figure. Please investigate and improve factual accuracy.

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

3.1

GiteaMirror added the bug label 2026-04-29 01:44:55 -05:00

@abhiram1809 commented on GitHub (Mar 24, 2025):

This seems expected, as the models are not trained specifically on Bollywood data but for generalization. It's more of a model problem than an Ollama problem.


@sakshiselmokar commented on GitHub (Mar 24, 2025):

Thank you for your response. I understand that the model is designed for generalization rather than Bollywood-specific data. However, this particular case highlights a broader issue of factual inaccuracies and hallucinations about widely known public figures, which can mislead users.

While domain-specific training might not be the goal, ensuring reliable responses to well-documented facts should still be a priority. Is there any way to improve the model’s handling of such cases, perhaps through retrieval-augmented generation (RAG) or better entity disambiguation techniques?


@abhiram1809 commented on GitHub (Mar 24, 2025):

Ollama's main purpose is to give users a flexible endpoint to their preferred models; it does not focus on building guardrails or RAG frameworks for fact-checking. You can build those yourself from scratch, or opt for external libraries like LlamaIndex or NeMo Guardrails.

Ollama only provides the endpoint to the LLM (local or hosted).
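The "build it yourself" route can be as small as grounding the prompt before it reaches the model. Below is a minimal sketch of that idea, not a complete RAG pipeline: the `FACTS` store and all names are illustrative stand-ins for whatever trusted retrieval source you actually use.

```python
# Illustrative fact store; in practice this would be a retriever over a
# trusted corpus (search index, knowledge base, vector store, etc.).
FACTS = {
    "deepika padukone": (
        "Deepika Padukone married actor Ranveer Singh in 2018; "
        "there has been no divorce."
    ),
}

def retrieve(prompt):
    """Return stored facts whose key appears in the prompt (naive lookup)."""
    p = prompt.lower()
    return [fact for key, fact in FACTS.items() if key in p]

def ground(prompt):
    """Prepend retrieved context so the model answers from it, not from memory."""
    context = retrieve(prompt)
    if not context:
        return prompt  # nothing retrieved: pass the prompt through unchanged
    return (
        "Answer using only the following context:\n"
        + "\n".join(context)
        + "\n\nQuestion: "
        + prompt
    )

# The grounded prompt is then sent to the model endpoint as usual.
```

Entity disambiguation (cricketer vs. actor) would live in the retrieval step; the model itself is unchanged.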


@sakshiselmokar commented on GitHub (Mar 24, 2025):

Understood, thanks for the clarification! I appreciate the insights.

Reference: github-starred/ollama#53035