[GH-ISSUE #5636] Inconsistent outputs when sending parallel requests #3515

Closed
opened 2026-04-12 14:13:05 -05:00 by GiteaMirror · 2 comments

Originally created by @taha-yassine on GitHub (Jul 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5636

What is the issue?

May be related to https://github.com/ollama/ollama/issues/5321

When sending parallel requests, responses are inconsistent despite a fixed `seed` and `temperature = 0`.

Here is a little script to test this:

```python
import asyncio
import json

import httpx
from tqdm import tqdm

async def query_model_async(prompt, model="llama3", url="http://localhost:11434/api/chat", role="user"):
    data = {
        "model": model,
        "messages": [
            {"role": role, "content": prompt}
        ],
        "options": {
            "seed": 123,
            "temperature": 0,
            "num_ctx": 2048
        },
    }
    payload = json.dumps(data)

    async with httpx.AsyncClient(timeout=None) as client:
        # content= is for a pre-serialized body; data= is meant for form
        # fields and is deprecated for raw payloads in recent httpx versions.
        async with client.stream("POST", url, content=payload, headers={"Content-Type": "application/json"}) as response:
            response_data = ""
            # Each streamed line is a JSON chunk; concatenate the message deltas.
            async for line in response.aiter_lines():
                if line:
                    response_json = json.loads(line)
                    response_data += response_json["message"]["content"]
    return response_data

async def generate_output(query):
    result = await query_model_async(query, role="user")
    return result.strip()

async def generate_outputs(num_iterations, query):
    # Fire all requests concurrently; this is what triggers the inconsistency.
    tasks = [generate_output(query) for _ in range(num_iterations)]
    outputs = await asyncio.gather(*tasks)
    return outputs

async def main():
    num_iterations = 3
    query = "What do Llamas eat?"

    print(f"Generating {num_iterations} outputs")

    with tqdm(total=num_iterations, desc="Generating outputs") as pbar:
        outputs = await generate_outputs(num_iterations, query)
        pbar.update(num_iterations)

    print("\nSaving outputs...")
    with open("outputs.json", "w") as f:
        json.dump(outputs, f, indent=4)
    print("Outputs saved to outputs.json")

if __name__ == "__main__":
    asyncio.run(main())
```

Output 1:

```text
Llamas are herbivores and primarily feed on grasses, leaves, twigs, and other plant material. They have a complex digestive system that allows them to break down the tough fibers of their food efficiently.

Their diet typically includes:

1. **Grasses**: Llamas can eat both fresh and dried grasses.
2. **Leaves**: They enjoy eating various types of leaves from bushes and trees, especially those high up in mountains where they are often found.
3. **Twigs**: Llamas will also consume twigs as part of their diet for additional fiber.
4. **Herbs**: They like to eat different kinds of herbs which can provide them with essential nutrients and minerals.

Llamas do not typically eat fruits or vegetables, except in small quantities if they are offered during feeding time. Their digestive system is designed to process the high-fiber content found in grasses and leaves, so they need a diet that is rich in this type of food.

It's important for llamas to have access to fresh water at all times as well, since hydration is crucial for their health.
```

Output 2:

```text
Llamas are herbivores and primarily feed on grasses, leaves, twigs, and other plant material. They have a complex digestive system that allows them to break down the tough fibers of their food efficiently.

Their diet typically includes:

1. **Grasses**: Llamas can eat various types of grasses found in their natural habitats.
2. **Leaves**: They enjoy eating leaves from bushes, shrubs, and trees.
3. **Twigs**: Llamas will also consume twigs as part of their diet for additional fiber and nutrients.
4. **Herbs**: Some herbs are also part of their diet, providing them with essential vitamins and minerals.

It's important to note that while llamas can eat a variety of foods, they should not be fed foods high in sugar or starches, such as grains or fruits, because these can cause digestive issues like colic. Their diet should consist mainly of forages like hay (often alfalfa hay is preferred due to its high nutritional content) and occasionally supplemented with fresh grass when available.

Proper nutrition is crucial for the health of llamas, so it's recommended to consult with a veterinarian or an animal nutritionist to ensure they are receiving a balanced diet that meets their specific needs.
```

Output 3:

```text
Llamas are herbivores and primarily feed on grasses, leaves, twigs, and other plant material. They have a complex digestive system that allows them to break down the tough fibers of their food efficiently.

Their diet typically includes:

1. **Grasses**: Llamas can eat various types of grasses found in their natural habitat.
2. **Leaves**: They enjoy eating leaves from bushes, shrubs, and trees.
3. **Twigs**: Llamas will also consume twigs as part of their diet for additional fiber.
4. **Herbs**: They may eat a variety of herbs that are available in their environment.
5. **Crops**: In agricultural settings, llamas can be fed grains such as corn or hay.

It's important to provide fresh water daily and ensure the quality of their feed is high since they are sensitive to changes in diet which could lead to digestive issues like bloat. Llamas should not be fed foods that contain caffeine, chocolate, onions, garlic, or avocado, as these can be toxic to them.
```

Notice how they are all slightly different, especially towards the end.
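To make the divergence measurable rather than eyeballed, one could load `outputs.json` and compute how many leading characters all responses share; the runs above agree at the start and drift apart later. This is a hypothetical helper, not part of the original script:

```python
import json

def common_prefix_len(strings):
    """Length of the longest prefix shared by every string in the list."""
    if not strings:
        return 0
    shortest = min(strings, key=len)
    for i, ch in enumerate(shortest):
        if any(s[i] != ch for s in strings):
            return i
    return len(shortest)

# Usage, after running the reproduction script above:
# with open("outputs.json") as f:
#     outputs = json.load(f)
# print(f"Outputs agree on the first {common_prefix_len(outputs)} characters.")
```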

I tested with `llama3` and `qwen2:7b-instruct` and encountered the same issue.
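As a control, the same requests can be issued strictly one at a time; if the outputs are then identical run after run with a fixed seed, that points at concurrent batched decoding rather than the sampler as the source of the nondeterminism. A minimal sketch, where `fetch` stands in for the `query_model_async` helper above:

```python
import asyncio

async def generate_sequential(fetch, num_iterations, query):
    """Issue requests one at a time so only a single request is ever in
    flight; `fetch` is an async callable such as query_model_async."""
    outputs = []
    for _ in range(num_iterations):
        outputs.append((await fetch(query)).strip())
    return outputs

# If parallelism is the culprit, this should hold with a fixed seed:
# outputs = asyncio.run(generate_sequential(query_model_async, 3, "What do Llamas eat?"))
# assert len(set(outputs)) == 1
```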

OS

Linux

GPU

No response

CPU

AMD

Ollama version

0.2.1

GiteaMirror added the bug label 2026-04-12 14:13:05 -05:00

@jmorganca commented on GitHub (Jul 12, 2024):

Hi there, do you have an Nvidia GPU? I believe this may be the same problem as https://github.com/ollama/ollama/issues/4990 - will close this for now


@taha-yassine commented on GitHub (Jul 12, 2024):

> Hi there, do you have an Nvidia GPU? I believe this may be the same problem as #4990 - will close this for now

All the tests were done on an AMD CPU, as I don't have access to a GPU on my machine.

Reference: github-starred/ollama#3515