[GH-ISSUE #7809] Strange output behavior between Ollama llama 3.2 11b vs lmsys deployed llama 3.2 11b #30755

Closed
opened 2026-04-22 10:39:31 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @dhandhalyabhavik on GitHub (Nov 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7809

What is the issue?

I had a graph with 10 columns, with values printed on top of each, and I wanted to find the total of all the columns.

So I asked the llama 3.2 11b vision model, and it hallucinated badly.

Next, I removed the quantized version and pulled the fp16 model, thinking quantization might be the issue. But to my surprise, even the fp16 model didn't get the answer right. (I have seen the same issue with vLLM in the past, so I tried the lmsys platform.) I checked accuracy by asking the lmsys-deployed llama 3.2 11b model directly, with the same temperature and output token limit, and it got it right. The output was perfect.

Help me figure out what the issue is here.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.4

GiteaMirror added the bug label 2026-04-22 10:39:31 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 23, 2024):

How did you pass the image to ollama?

Author
Owner

@dhandhalyabhavik commented on GitHub (Nov 23, 2024):

Hi @rick-github,

I used the OpenAI-compatible API and converted the image to a base64 string, attached to the message in the format described in Ollama's documentation, like below:

```python
from openai import OpenAI
import base64

def image_to_base64(image_path):
    try:
        with open(image_path, "rb") as image_file:
            encoded_string = base64.b64encode(image_file.read())
            return encoded_string.decode('utf-8')
    except FileNotFoundError:
        print(f"Error: The file {image_path} was not found.")
        return None
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

client = OpenAI(
    base_url="http://<my_ip>:11434/v1",
    api_key='ollama'
)

def gen(Q):
    response = client.chat.completions.create(
        model="llama3.2-vision:11b-instruct-fp16",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": Q},
                    {
                        "type": "image_url",
                        "image_url": 'data:image/png;base64,' + image_to_base64('results/1.png')
                    }
                ]
            }
        ],
        stream=True,
        temperature=0.7,
    )

    # Process the streaming response
    for chunk in response:
        if chunk.choices[0].delta.content is not None:
            print(chunk.choices[0].delta.content, end="")

gen('Provide total sum of column 1 to 10 from provided image.')
```
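One thing worth noting about the snippet above: the OpenAI chat-completions spec nests the data URI in an object under a `url` key, whereas the code passes a bare string for `image_url`. Some servers tolerate the bare string, but the spec-conformant form looks like this (a minimal sketch; `image_part` is a hypothetical helper name):

```python
import base64

def image_part(image_path: str) -> dict:
    """Build a spec-conformant OpenAI image content part for a local file."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    # The data URI goes in an object under "url", not as a bare string.
    return {
        "type": "image_url",
        "image_url": {"url": "data:image/png;base64," + b64},
    }
```

The rest of the request stays the same; only the `image_url` entry changes.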
Author
Owner

@rick-github commented on GitHub (Nov 23, 2024):

Can you provide the image you tried to process?

Author
Owner

@dhandhalyabhavik commented on GitHub (Nov 24, 2024):

Hi @rick-github,

Here is the image; ask it to get the sum of the last graph.

![image](https://github.com/user-attachments/assets/c986d9ba-8dfb-4fac-8bb9-7c75932d41d6)

Author
Owner

@rick-github commented on GitHub (Nov 24, 2024):

```console
$ ollama:0.4.0 run llama3.2-vision:90b-instruct-q4_K_M Provide total sum of column 1 to 10 from provided image. ./389260947-c986d9ba-8dfb-4fac-8bb9-7c75932d41d6.png
Added image './389260947-c986d9ba-8dfb-4fac-8bb9-7c75932d41d6.png'
To find the total sum of columns 1 to 10, we need to add up all the values in each column.

**Column 1: Sales growth %**

*   2015: 3.32
*   2016: 3.05
*   2017: 2.87
*   2018: 3.57
*   2019: 3.28
*   2020: 3.21
*   2021: 3.22
*   2022: 2.92
*   2023: 3.08
*   2024: 3.09

Total sum of column 1 = 3.32 + 3.05 + 2.87 + 3.57 + 3.28 + 3.21 + 3.22 + 2.92 + 3.08 + 3.09 = **31.61**

**Column 2: Qtr YoY growth %**

*   2015: 3.05
*   2016: 3.23
*   2017: 1.85
*   2018: 3.23
*   2019: 2.23
*   2020: 2.04
*   2021: 2.50
*   2022: 2.30
*   2023: 1.44
*   2024: 2.13

Total sum of column 2 = 3.05 + 3.23 + 1.85 + 3.23 + 2.23 + 2.04 + 2.50 + 2.30 + 1.44 + 2.13 = **24.00**

**Column 3: 5 year return %**

*   2015: 18.02
*   2016: 23.52
*   2017: 5.95
*   2018: 18.97
*   2019: 7.64
*   2020: 7.83
*   2021: 8.80
*   2022: 6.10
*   2023: 6.23
*   2024: 6.56

Total sum of column 3 = 18.02 + 23.52 + 5.95 + 18.97 + 7.64 + 7.83 + 8.80 + 6.10 + 6.23 + 6.56 = **109.62**

Therefore, the total sum of columns 1 to 3 is:

**31.61 + 24.00 + 109.62 = 165.23**
```
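Those per-graph totals are easy to double-check offline, which is worth doing whenever a model is asked to do arithmetic. Using the values transcribed above:

```python
# Values as transcribed by the model above.
sales = [3.32, 3.05, 2.87, 3.57, 3.28, 3.21, 3.22, 2.92, 3.08, 3.09]
qtr_yoy = [3.05, 3.23, 1.85, 3.23, 2.23, 2.04, 2.50, 2.30, 1.44, 2.13]
five_yr = [18.02, 23.52, 5.95, 18.97, 7.64, 7.83, 8.80, 6.10, 6.23, 6.56]

for name, vals in [("Sales growth %", sales),
                   ("Qtr YoY growth %", qtr_yoy),
                   ("5 year return %", five_yr)]:
    print(name, round(sum(vals), 2))

# Grand total matches the model's 165.23.
print("Grand total:", round(sum(sales) + sum(qtr_yoy) + sum(five_yr), 2))
```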

Other than calling "graphs" "columns", it seems to have got it right. When it hallucinated badly, what was the output?

Author
Owner

@dhandhalyabhavik commented on GitHub (Nov 24, 2024):

Update 2.0

Hi @rick-github,

Found the issue: the default context length in the Docker Ollama was set to 2k. Pretty amazing that I was still able to run ~95k-size documents (everything is very well implemented).

So when I changed it to the maximum supported, it started giving good output.

Thanks for help, closing this now.
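For anyone landing here with the same symptom: in the native API the context window is set per request via the `options.num_ctx` field (or persistently via a `PARAMETER num_ctx` line in a Modelfile). A minimal sketch; the 8192 value is an illustrative assumption, not the model's maximum:

```python
import json

def with_num_ctx(payload: dict, num_ctx: int) -> dict:
    """Return a copy of an /api/chat payload with a larger context window."""
    out = dict(payload)
    # Ollama's default num_ctx is 2048; override it per request here.
    out["options"] = {**out.get("options", {}), "num_ctx": num_ctx}
    return out

base = {
    "model": "llama3.2-vision:11b-instruct-fp16",
    "messages": [{"role": "user", "content": "Provide total sum of column 1 to 10."}],
}
print(json.dumps(with_num_ctx(base, 8192)["options"]))  # {"num_ctx": 8192}
```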


---

Hi @rick-github,

I can see you used the `llama3.2-vision:90b-instruct-q4_K_M` model; can you try with the `llama3.2-vision:11b-instruct-fp16` model?

With the 11b model and the code below,

```python
response = client.chat.completions.create(
    model="llama3.2-vision:11b-instruct-fp16",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": Q},
                {
                    "type": "image_url",
                    "image_url": 'data:image/png;base64,' + image_to_base64('results/1.png')
                }
            ]
        }
    ],
    stream=True,
    temperature=0.7,
)

# Q question is
gen('Provide total sum of column 1 to 10 from provided image.')
```

I got this output,

```
The bar graph shows sales growth percentages for the years 2015 through 2024. The graph does not provide a total percentage. However, you can calculate the total by adding up each individual year's percentage together as shown below:

33.32 +3.05 +3.06 +2.87 +3.57+3.28+3.21+3.22+2.92+3.08+3.09 =36.18
```
Author
Owner

@rick-github commented on GitHub (Nov 24, 2024):

My mistake, I got my B's mixed up. I agree that the 11b model produces poor results. But the same is true for the 11b model on lmsys:

```
To calculate the total sum of column 1 to 10, we need to add up all the values in the image. Here are the steps:

Step 1: Identify the values

The values in column 1 to 10 are:

3.32
3.05
2.87
3.57
3.28
3.21
3.22
2.92
3.08
3.09

Step 2: Add up the values

Let's add up these values:

3.32 + 3.05 = 6.37
6.37 + 2.87 = 9.24
9.24 + 3.57 = 12.81
12.81 + 3.28 = 16.09
16.09 + 3.21 = 19.30
19.30 + 3.22 = 22.52
22.52 + 2.92 = 25.44
25.44 + 3.08 = 28.52
28.52 + 3.09 = 31.61

Therefore, the total sum of column 1 to 10 is 31.61.
```

```
To find the total sum of column 1 to 10 from the provided image, we need to add up the values in each row for these columns.

Here are the calculations:

2015: 3.32 + 3.05 = 6.37
2016: 3.05 + 3.23 = 6.28
2017: 2.87 + 1.85 = 4.72
2018: 3.57 + 3.23 = 6.80
2019: 3.28 + 2.23 = 5.51
2020: 3.21 + 2.04 = 5.25
2021: 3.22 + 2.50 = 5.72
2022: 2.92 + 2.30 = 5.22
2023: 3.08 + 1.44 = 4.52
2024: 3.09 + 2.13 = 5.22

Now, let's add up all these sums:

6.37 + 6.28 + 4.72 + 6.80 + 5.51 + 5.25 + 5.72 + 5.22 + 4.52 + 5.22 = 54.61

Therefore, the total sum of column 1 to 10 from the provided image is 54.61.
```

```
To find the total sum of column 1 to 10, we need to add up all the values in the image. Here are the calculations:

Step 1: Identify the values

Column 1: 3.32 + 3.05 + 2.87 + 3.57 + 3.28 + 3.21 + 3.22 + 2.92 + 3.08 + 3.09 = 31.12
Column 2: 3.23 + 1.85 + 3.23 + 2.23 + 2.04 + 2.50 + 2.30 + 1.44 + 2.13 + 2.13 = 21.00
Column 3: 18.02 + 23.52 + 5.95 + 18.97 + 7.64 + 7.83 + 8.80 + 6.10 + 6.23 + 6.56 = 119.62

Step 2: Add up the values

31.12 + 21.00 + 119.62 = 171.74

The total sum of column 1 to 10 is 171.74.
```

Since the results are consistently poor across deployments, this would seem to be a function of the model and not the deployment.

I don't know what your use case is, but if you break it into steps, smaller models can do better. For example, if we just ask the vision model to extract the numbers:

```console
$ ollama:0.4.0 run llama3.2-vision:11b-instruct-q4_K_M extract the values from each graph in this image and return them in a JSON object ./389260947-c986d9ba-8dfb-4fac-8bb9-7c75932d41d6.png 
Added image './389260947-c986d9ba-8dfb-4fac-8bb9-7c75932d41d6.png'
Sure, here is a breakdown of all of the data you requested:

**Sales Growth %**

*   2015: 3.32
*   2016: 3.05
*   2017: 2.87
*   2018: 3.57
*   2019: 3.28
*   2020: 3.21
*   2021: 3.22
*   2022: 2.92
*   2023: 3.08
*   2024: 3.09

**Qtr YoY Growth %**

*   2015: 3.05
*   2016: 3.23
*   2017: 1.85
*   2018: 3.23
*   2019: 2.23
*   2020: 2.04
*   2021: 2.50
*   2022: 2.30
*   2023: 1.44
*   2024: 2.13

**5-Year Return %**

*   2015: 18.02
*   2016: 23.52
*   2017: 5.95
*   2018: 18.97
*   2019: 7.64
*   2020: 7.83
*   2021: 8.80
*   2022: 6.10
*   2023: 6.23
*   2024: 6.56
```

Then we can use a model that hasn't had part of its processing capability diverted to image processing to do the actual arithmetic:

```console
$ ollama:0.4.0 run llama3.2:3b-instruct-q4_K_M sum all of the values in the following tables.  then add the totals together '
**Sales Growth %**

*   2015: 3.32
*   2016: 3.05
*   2017: 2.87
*   2018: 3.57
*   2019: 3.28
*   2020: 3.21
*   2021: 3.22
*   2022: 2.92
*   2023: 3.08
*   2024: 3.09

**Qtr YoY Growth %**

*   2015: 3.05
*   2016: 3.23
*   2017: 1.85
*   2018: 3.23
*   2019: 2.23
*   2020: 2.04
*   2021: 2.50
*   2022: 2.30
*   2023: 1.44
*   2024: 2.13

**5-Year Return %**

*   2015: 18.02
*   2016: 23.52
*   2017: 5.95
*   2018: 18.97
*   2019: 7.64
*   2020: 7.83
*   2021: 8.80
*   2022: 6.10
*   2023: 6.23
*   2024: 6.56
'
To calculate the sum of all values in each table, I'll add up the numbers and then add those totals together.

**Sales Growth %**

1. 2015: 3.32
2. 2016: 3.05 = 3.32 + 3.05 = 6.37
3. 2017: 2.87 = 6.37 + 2.87 = 9.24
4. 2018: 3.57 = 9.24 + 3.57 = 12.81
5. 2019: 3.28 = 12.81 + 3.28 = 16.09
6. 2020: 3.21 = 16.09 + 3.21 = 19.30
7. 2021: 3.22 = 19.30 + 3.22 = 22.52
8. 2022: 2.92 = 22.52 + 2.92 = 25.44
9. 2023: 3.08 = 25.44 + 3.08 = 28.52
10. 2024: 3.09 = 28.52 + 3.09 = 31.61

**Qtr YoY Growth %**

1. 2015: 3.05
2. 2016: 3.23 = 3.05 + 3.23 = 6.28
3. 2017: 1.85 = 6.28 + 1.85 = 8.13
4. 2018: 3.23 = 8.13 + 3.23 = 11.36
5. 2019: 2.23 = 11.36 + 2.23 = 13.59
6. 2020: 2.04 = 13.59 + 2.04 = 15.63
7. 2021: 2.50 = 15.63 + 2.50 = 18.13
8. 2022: 2.30 = 18.13 + 2.30 = 20.43
9. 2023: 1.44 = 20.43 + 1.44 = 21.87
10. 2024: 2.13 = 21.87 + 2.13 = 24.00

**5-Year Return %**

1. 2015: 18.02
2. 2016: 23.52 = 18.02 + 23.52 = 41.54
3. 2017: 5.95 = 41.54 + 5.95 = 47.49
4. 2018: 18.97 = 47.49 + 18.97 = 66.46
5. 2019: 7.64 = 66.46 + 7.64 = 74.10
6. 2020: 7.83 = 74.10 + 7.83 = 81.93
7. 2021: 8.80 = 81.93 + 8.80 = 90.73
8. 2022: 6.10 = 90.73 + 6.10 = 96.83
9. 2023: 6.23 = 96.83 + 6.23 = 103.06
10. 2024: 6.56 = 103.06 + 6.56 = 109.62

Now, let's add the totals of each table:

**Sales Growth %**: 31.61
**Qtr YoY Growth %**: 24.00
**5-Year Return %**: 109.62

Adding those totals together: 
31.61 + 24.00 + 109.62 = 165.23
```
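Taking that decomposition one step further: once the vision model has extracted the numbers, the arithmetic doesn't need a language model at all. A sketch of the two-step pipeline against the native `/api/chat` endpoint, using Ollama's `format: "json"` option to constrain the extraction step (model tag and host are assumptions):

```python
import base64
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # assumed host

def extract_graph_values(image_path: str,
                         model: str = "llama3.2-vision:11b-instruct-q4_K_M") -> dict:
    """Step 1: ask the vision model for the numbers only, as JSON."""
    with open(image_path, "rb") as f:
        img = base64.b64encode(f.read()).decode("utf-8")
    body = json.dumps({
        "model": model,
        "messages": [{
            "role": "user",
            "content": "Extract the values from each graph in this image and "
                       "return a JSON object mapping graph title to a list of numbers.",
            "images": [img],
        }],
        "format": "json",   # constrain the reply to valid JSON
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(f"{OLLAMA}/api/chat", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(json.load(resp)["message"]["content"])

def sum_graphs(values: dict) -> float:
    """Step 2: deterministic arithmetic, no LLM involved."""
    return round(sum(sum(v) for v in values.values()), 2)
```

With the values the vision model extracted in the transcript above, `sum_graphs` returns 165.23 with no risk of arithmetic slips.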
Author
Owner

@dhandhalyabhavik commented on GitHub (Nov 24, 2024):

Hi @rick-github,

Thanks for helping; I just wanted to see if it could handle such queries. Thanks for the quick help.


Reference: github-starred/ollama#30755