Declaring LLaMA 70B Twice with Different Variables Breaks RAG Output #7616

Closed
opened 2025-11-12 14:12:29 -06:00 by GiteaMirror · 5 comments

Originally created by @akshath-raj on GitHub (Jul 25, 2025).

What is the issue?

I encountered a reproducible bug while using the llama3:70b model in a Retrieval-Augmented Generation (RAG) pipeline through Ollama.

When the same model (llama3:70b) is instantiated twice under different variable names, and one of the instances is passed a predefined input context (i.e., context retrieved from a vector DB), the generated answer is consistently incorrect.

However, when only one instance of the model is declared and used, the output is correct and aligns well with the context.

Due to company policies, I cannot share internal logs, code, or output. However, I have verified that:

  • No simultaneous calls or threads are used.
  • Only one instance was active at generation time.
  • The same context and prompt were used in both tests.
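Since I can't share the real code, here is a minimal sketch of the shape of the setup. All variable names, the prompt template, and the placeholder context are hypothetical stand-ins, and it assumes the official `ollama` Python client:

```python
# Hypothetical sketch only -- variable names, prompts, and context are placeholders,
# not the actual company code. Assumes the `ollama` Python client and a local server.
import ollama

MODEL = "llama3:70b"

# Two handles to the same model under different variable names, as described above.
plain_llm = ollama.Client()
rag_llm = ollama.Client()

retrieved_context = "<context returned by the vector DB>"   # placeholder
question = "<user question>"                                # placeholder

rag_prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{retrieved_context}\n\n"
    f"Question: {question}"
)

# Only this call is made at generation time (no concurrent calls or threads),
# yet with both handles declared the answer no longer matches the context.
response = rag_llm.chat(
    model=MODEL,
    messages=[{"role": "user", "content": rag_prompt}],
)
print(response["message"]["content"])
```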

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

v0.9.1

GiteaMirror added the bug label 2025-11-12 14:12:29 -06:00

@berserker1 commented on GitHub (Jul 26, 2025):

I cannot see the relevant log output


@akshath-raj commented on GitHub (Jul 27, 2025):

I could not provide the relevant log output because it contained confidential company information. Also, I cannot use GitHub on the company device, and recreating the setup on my personal device is not possible as I don't have the necessary resources.


@onestardao commented on GitHub (Jul 29, 2025):

Sounds like you just ran into one of the weirdest RAG phenomena —
I call this one "Context Echo Desync from Multi-Instance Alias" 🤯

I've been building a reasoning layer (open source) that stabilizes these kinds of behaviors, and this issue fits exactly into one of the 13 major RAG pain points I've been mapping:
→ #12: Memory shadowing across multi-model instantiations (https://github.com/onestardao/WFGY/tree/main/ProblemMap/README.md)

In short, when you instantiate the same model (even under different variable names),
you might get shared low-level resources (like attention cache),
but diverging semantic grounding if the prompt/context injection isn't synchronized.
Result: same prompt, different model handles, but the session-level coherence breaks.

I'm tackling this using a combination of:

  • Latent anchor control (to pin semantic grounding regardless of handle)
  • Fallback rerank APIs (to recover hallucinated answers post-hoc)
  • Context mirroring (to verify memory integrity across duplicate instantiations)

If you're curious, I can share the debug technique we use to "pin" prompts across multi-model dispatches.
Thanks for surfacing this — this isn't just a bug, it's the kind of invisible chaos that haunts RAG pipelines at scale.
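
Concretely, the "context mirroring" idea can be sketched as a pre-dispatch check; the helper names below are hypothetical and this is not our actual code, just the general shape of the check:

```python
# Rough illustration of "context mirroring": fingerprint the exact context string
# each model handle is about to receive and fail fast if they diverge.
# Hypothetical helper names, not the actual implementation.
import hashlib

def context_fingerprint(context: str) -> str:
    """Stable fingerprint of the context injected into a prompt."""
    return hashlib.sha256(context.encode("utf-8")).hexdigest()

def assert_mirrored(contexts_by_handle: dict) -> None:
    """Raise if two handles are about to receive different context."""
    fingerprints = {name: context_fingerprint(ctx) for name, ctx in contexts_by_handle.items()}
    if len(set(fingerprints.values())) > 1:
        raise ValueError(f"Context mismatch across handles: {fingerprints}")

# Usage: record what each handle will actually see, right before dispatch.
assert_mirrored({"plain_llm": "retrieved context...", "rag_llm": "retrieved context..."})
```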


@akshath-raj commented on GitHub (Jul 29, 2025):

It would be great if you could share the debugging technique.


@onestardao commented on GitHub (Jul 29, 2025):

Sure — happy to share!

The technique we use to pin semantic context across duplicate model instances is actually part of a larger open-source debugging map we’ve been building:

👉 https://github.com/onestardao/WFGY/blob/main/ProblemMap/README.md

The issue you hit falls squarely under #12: Memory Shadowing across multi-model instantiations — it’s sneaky, non-obvious, and leads to context echo and coherence breaks across dispatches, just like you described.

We’ve been tackling it through:

  • Latent anchor locking (pin semantic intent even across separate attention paths)
  • Post-hoc grounding via fallback APIs
  • Session-level mirroring logic (yes, the memory map runs checks on the grounding consistency)

If any part of this sounds off — or your case doesn’t quite match what we mapped — feel free to drop feedback. We want to make this resource truly exhaustive for the RAG chaos out there.
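
As a concrete starting point, the post-hoc grounding step can be approximated with a simple lexical-overlap check that flags answers drifting away from the retrieved context; the function names and threshold below are illustrative placeholders, not the actual fallback API:

```python
# Crude post-hoc grounding check: flag an answer whose content words barely
# overlap with the retrieved context, so it can be rerouted to a fallback
# rerank/regenerate step. A rough stand-in, not the actual fallback API.
import re

def _content_words(text: str) -> set:
    """Lowercased alphanumeric tokens of three or more characters."""
    return set(re.findall(r"[a-z0-9]{3,}", text.lower()))

def grounding_score(answer: str, context: str) -> float:
    """Fraction of the answer's content words that also appear in the context."""
    answer_words = _content_words(answer)
    if not answer_words:
        return 0.0
    return len(answer_words & _content_words(context)) / len(answer_words)

def needs_fallback(answer: str, context: str, threshold: float = 0.4) -> bool:
    """True if the answer looks ungrounded and should be reranked or regenerated."""
    return grounding_score(answer, context) < threshold

# Usage: run after generation, before returning the answer to the user.
answer = "Berlin has been the capital since 1990 according to our records."
context = "Paris is the capital of France."
if needs_fallback(answer, context):
    print("Answer flagged as ungrounded; route to fallback.")
```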
