[GH-ISSUE #4472] llama3-chatqa always returns Empty response #64831

Open
opened 2026-05-03 18:55:10 -05:00 by GiteaMirror · 2 comments

Originally created by @pnmartinez on GitHub (May 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4472

Problem

I've been toying around with RAG using ollama and llama-index.
The results I am getting with llama3 8b are not that good, so I was happy to see llama3-chatqa being added in v0.1.35.

However, I always get "Empty response" when using llama3-chatqa. Is there something I am missing?

Code

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()

# Tested nomic-embed-text and mxbai-embed-large
Settings.embed_model = OllamaEmbedding(model_name="mxbai-embed-large")

#llama3 instead of llama3-chatqa can provide answers - though sometimes incorrect
Settings.llm = Ollama(model="llama3-chatqa", request_timeout=360.0)

index = VectorStoreIndex.from_documents(
    documents,
)

query_engine = index.as_query_engine()

# example query (placeholder); any question over the indexed documents works
query = "What are these documents about?"
response = query_engine.query(query)
print(response)

# "Empty Response" always when using llama3-chatqa

OS

Linux, Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.35, 0.1.36, 0.1.38

GiteaMirror added the bug label 2026-05-03 18:55:10 -05:00

@cpetersen commented on GitHub (May 21, 2024):

FWIW, I get an empty response >80% of the time when using llama3-chatqa with ollama 0.1.38
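
A rough way to put a number on that rate is to repeat the same prompt in a loop against the API and count empty completions. A sketch under the same assumptions as above; N and the prompt are placeholders:

import requests

# Probe the empty-response rate: send the same prompt N times, count empties
N = 20
empty = 0
for _ in range(N):
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3-chatqa", "prompt": "What is 2 + 2?", "stream": False},
        timeout=360,
    )
    r.raise_for_status()
    if not r.json()["response"].strip():
        empty += 1
print(f"{empty}/{N} responses were empty")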


@ArthurDeleu commented on GitHub (May 21, 2024):

Having the same issue. It started when using llama3-chatqa, and now I have it with every model.
