[GH-ISSUE #6771] Inconsistent Responses from Identical Models #66304

Closed
opened 2026-05-04 02:22:14 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @wahidur028 on GitHub (Sep 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6771

What is the issue?

I am new to Ollama and have noticed that when I ask a query using Ollama, the model's responses are quite poor. However, if I ask the same query using https://www.llama2.ai/, I receive much better responses. Can anyone explain what might be causing this difference? What could I be doing wrong?

# ! pip install -q pyautogen
# ! pip install -q 'litellm[proxy]'

import requests

# Define the payload
payload = {
    "model": "llama3:8b",  # Changed model to match the curl version
    "prompt": "How does a neural network work?",
    "format": "json",  # To ensure the response is in JSON format
    "stream": False,  # To disable streaming in the response
}

# Send the POST request to the API
response = requests.post("http://127.0.0.1:11434/api/generate", json=payload)
output = response.json()
print(output)

{'model': 'llama3:8b', 'created_at': '2024-09-12T06:26:59.001500912Z', 'response': '{ }\n\n ', 'done': True, 'done_reason': 'stop', 'context': [128006, 882, 128007, 271, 4438, 1587, 264, 30828, 4009, 990, 30, 128009, 128006, 78191, 128007, 271, 90, 557, 220], 'total_duration': 243709137, 'load_duration': 56054134, 'prompt_eval_count': 17, 'prompt_eval_duration': 39720000, 'eval_count': 4, 'eval_duration': 103353000}

Ollama model response: '{ }\n\n ' (an empty JSON object)
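A likely cause of the empty object (a sketch, not from the original report): when `"format": "json"` is set, the model is constrained to emit valid JSON, but if the prompt itself does not ask for JSON output, many models fall back to emitting `{ }`. Pairing JSON mode with an explicit instruction usually helps; the key names below are illustrative, not part of any Ollama schema:

```python
import json

# JSON mode works best when the prompt also asks for JSON and
# describes the shape of the object the model should produce.
payload = {
    "model": "llama3:8b",
    "prompt": (
        "How does a neural network work? "
        "Respond in JSON with the keys 'summary' and 'key_steps' "
        "(a list of strings)."
    ),
    "format": "json",
    "stream": False,
}

# With a prompt like this, the 'response' field should itself be a
# JSON string that can be parsed a second time, e.g.:
sample_response = '{"summary": "...", "key_steps": ["...", "..."]}'
parsed = json.loads(sample_response)
print(sorted(parsed))
```

The double parse (once for the HTTP envelope, once for the `response` field) is the part that is easy to miss.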

URL: https://www.llama2.ai/
Model: Meta LLAMA3-8B
Response:
I'd be happy to explain how a neural network works!
A neural network is a type of machine learning model inspired by the structure and function of the human brain. It's a complex system made up of many interconnected nodes or "neurons," which process and transmit information.
Here's a simplified overview of how a neural network works:

  1. Data Input: The neural network receives input data, which can be images, sound waves, text, or any other type of data that can be digitized.
  2. Neuron Layers: The input data is fed into the first layer of neurons, called the input layer. Each neuron in this layer receives a set of input values and performs a calculation on them.
  3. Activation Functions: The output of each neuron is then passed through an activation function, which determines whether the neuron should "fire" or not. Common activation functions include sigmoid, tanh, and ReLU (Rectified Linear Unit).
  4. Hidden Layers: The output from the input layer is then passed to one or more hidden layers, which are made up of neurons that perform complex calculations on the input data. Each hidden layer can have multiple neurons, and each neuron in the layer receives input from the previous layer.
  5. Output Layer: The output from the hidden layers is then passed to the output layer, which produces the final output of the neural network.
  6. Training: During training, the neural network is presented with a large dataset of input-output pairs. The network adjusts the weights and biases of the connections between neurons to minimize the error between its predictions and the actual output.
  7. Forward Propagation: During forward propagation, the input data flows through the network, and the output is calculated at each layer.
  8. Backpropagation: During backpropagation, the error between the predicted output and the actual output is calculated, and the error is propagated backwards through the network. This process helps the network adjust the weights and biases to improve its performance.
  9. Optimization: The network uses an optimization algorithm, such as stochastic gradient descent (SGD), to adjust the weights and biases based on the error calculated during backpropagation.
  10. Repeat: The process of forward propagation, backpropagation, and optimization is repeated multiple times until the network converges or reaches a desired level of accuracy.
    Neural networks can be used for a wide range of applications, including image and speech recognition, natural language processing, and predictive modeling. They're particularly useful when dealing with complex, non-linear relationships between inputs and outputs.
    I hope this helps! Do you have any specific questions about neural networks or would you like me to elaborate on any of these points?
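The training loop the quoted answer describes (steps 7 through 10) can be sketched with a single sigmoid neuron; this toy example, not from the thread, learns a noiseless AND function with plain gradient descent:

```python
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Four input-output pairs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 1.0

for _ in range(2000):                              # 10. repeat until convergence
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)     # 7. forward propagation
        err = y - target                           # error vs. desired output
        grad = err * y * (1 - y)                   # 8. backpropagation through sigmoid
        w[0] -= lr * grad * x1                     # 9. gradient-descent update
        w[1] -= lr * grad * x2
        b -= lr * grad

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # should recover AND: [0, 0, 0, 1]
```

A real network stacks many such neurons into layers and backpropagates through all of them, but the update rule is the same idea.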

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the api, question labels 2026-05-04 02:22:14 -05:00
Author
Owner

@rick-github commented on GitHub (Sep 13, 2024):

There might be some confusion about "format":"json". In the request to ollama, it indicates that the model output should be formatted as JSON. This has nothing to do with the JSON marshalling that requests does with the payload it sends to and receives from the ollama service. If you remove it, you get an answer more like the one from llama2.ai:

--- 6771.py.orig	2024-09-13 18:34:19.572952124 +1000
+++ 6771.py	2024-09-13 18:34:42.853207233 +1000
@@ -5,11 +5,10 @@
 payload = {
     "model": "llama3:8b",  # Changed model to match the curl version
     "prompt": """How does a neural network work?""" ,
-    "format": "json",  # To ensure response is in JSON format
     "stream": False  # To disable streaming in the response
 }
 
 # Send the POST request to the API
 response = requests.post("http://127.0.0.1:11434/api/generate", json=payload)
 output = response.json()
-print(output)
+print(output["response"])
$ python3 6771.py
A fundamental question in the world of artificial intelligence!

A neural network is a type of machine learning model inspired by the structure 
and function of the human brain. It's a complex system composed of many 
interconnected nodes or "neurons," which process and transmit information to 
each other.

Here's a simplified overview of how a neural network works:

**Basic Components:**

1. **Neurons (Nodes)**: These are the basic computing units in a neural 
network. Each neuron receives input from other neurons, performs a computation 
on that input, and then sends the output to other neurons.
2. **Connections (Edges)**: Neurons are connected by edges, which represent the 
flow of information between them.
3. **Activation Functions**: Each neuron applies an activation function to its 
output before sending it to other neurons.

**How Neural Networks Work:**

1. **Input Layer**: The input layer receives the input data, which is a set of 
features or values that describe the problem you're trying to solve.
2. **Hidden Layers**: The input layer feeds into one or more hidden layers, 
which are composed of multiple neurons. Each neuron in these layers applies an 
activation function to its output and then sends it to other neurons.
3. **Output Layer**: The hidden layers feed into the output layer, which 
produces the final output of the neural network.

**The Training Process:**

1. **Forward Pass**: The input data flows through the network, with each neuron 
applying its activation function to the input from previous layers.
2. **Error Calculation**: The output of the network is compared to the desired 
output (target), and an error is calculated based on the difference between the 
two.
3. **Backpropagation**: The error is propagated backward through the network, 
adjusting the weights and biases of each neuron to minimize the error.
4. **Optimization**: An optimization algorithm, such as Stochastic Gradient 
Descent (SGD), is used to update the model's parameters based on the calculated 
error.

**Key Concepts:**

1. **Weighted Sums**: Each neuron computes a weighted sum of its inputs, where 
the weights represent the strength of the connections between neurons.
2. **Bias Terms**: A bias term is added to each neuron's output to introduce 
some randomness and allow for non-linear relationships.
3. **Activation Functions**: These determine the output of each neuron based on 
its input. Common activation functions include sigmoid, ReLU (Rectified Linear 
Unit), and tanh.

**Types of Neural Networks:**

1. **Feedforward Networks**: Information flows only in one direction, from 
input to output, without any feedback loops.
2. **Recurrent Neural Networks (RNNs)**: Feedback connections allow information 
to flow backwards in time, enabling the network to capture temporal 
dependencies.
3. **Convolutional Neural Networks (CNNs)**: Designed for image and signal 
processing tasks, CNNs use convolutional and pooling layers to extract features.

This is a high-level overview of how neural networks work. If you're interested 
in diving deeper, I'd be happy to explain more about the math behind neural 
networks or discuss specific applications!
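Putting the two fixes together (a sketch, not from the thread; the helper names are hypothetical): only set `"format": "json"` when the prompt actually asks for JSON, and read the generated text from the `response` field instead of printing the whole envelope.

```python
import requests

def build_payload(prompt: str, want_json: bool = False) -> dict:
    """Assemble an /api/generate payload; only enable JSON mode when
    the prompt actually asks for JSON output."""
    payload = {"model": "llama3:8b", "prompt": prompt, "stream": False}
    if want_json:
        payload["format"] = "json"
    return payload

def generate(prompt: str, want_json: bool = False) -> str:
    """POST to a local Ollama server and return just the generated text."""
    r = requests.post(
        "http://127.0.0.1:11434/api/generate",
        json=build_payload(prompt, want_json),
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]
```

Usage would be `generate("How does a neural network work?")` for prose, or `want_json=True` together with a prompt that spells out the JSON shape you want.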


Reference: github-starred/ollama#66304