[GH-ISSUE #8401] Failed to summarize the long context #51908

Closed
opened 2026-04-28 21:13:09 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @goactiongo on GitHub (Jan 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8401

What is the issue?

I have 4 × A30 GPUs (24 GB each) and a piece of content with a 111k-token context. I tested three models that support a 128k context:

llama3.2:latest
llama3.1:8b
glm4:9b
The models were run with the parameter num_ctx=121k. None of them could successfully summarize the content (when the content is sufficiently small, all three succeed).
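For reference, a context window this size is set per request through the `num_ctx` option of Ollama's generate API. A minimal sketch of such a request (the model name, prompt, and exact token count are placeholders mirroring the reporter's setup; the curl line is commented out so the payload can be checked offline):

```shell
# Build a /api/generate request body with an explicit context window.
# num_ctx is a documented Ollama option; 121000 approximates the reporter's setting.
payload='{"model":"llama3.1:8b","prompt":"Summarize this document.","options":{"num_ctx":121000},"stream":false}'
echo "$payload"
# To send it against a running server:
#   curl -s http://localhost:11434/api/generate -d "$payload"
```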

Moreover, monitoring GPU usage with gpustat -i showed that only one of the models utilized multiple GPUs; the other two each used a single GPU.

The Ollama logs show that almost every model has to repeatedly reload the context, which takes a long time and ultimately fails, making for a poor user experience.

Could you please help analyze the logs to figure out why it always fails?

The logs for the three models are attached:
[glm4.log](https://github.com/user-attachments/files/18396940/glm4.log)
[Llama 3.1 8B Instruct.log](https://github.com/user-attachments/files/18396944/Llama.3.1.8B.Instruct.log)
[Llama 3.2 3B Instruct.log](https://github.com/user-attachments/files/18396945/Llama.3.2.3B.Instruct.log)

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

ollama version is 0.3.11

GiteaMirror added the bug label 2026-04-28 21:13:09 -05:00

@rick-github commented on GitHub (Jan 14, 2025):

```
1月 13 20:36:40 gpu ollama[28665]: time=2025-01-13T20:36:40.673+08:00 level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-b506a070d1152
1月 13 20:36:50 gpu ollama[28665]: time=2025-01-13T20:36:50.674+08:00 level=DEBUG source=server.go:576 msg="server unhealthy" error="server not responding"
```

The runner doesn't respond to a health check and gets restarted. This happens to all three models. It's not clear why. I set up a similar environment (0.3.11, glm4:9b, 121234 context, 112102 input, not A30) and was unable to reproduce.

In my test the input was two copies of 600007.pdf from #7146 (to make up the 112102 tokens). The prompt wasn't clear from the logs since they are incomplete, but I used "Summarize this document". The output was:

This document is the 2024 semi-annual report of China World Trade Center Co., Ltd. (China National贸). The report covers various aspects of the company's operations, financials, and governance during the first half of 2024.

**Key points include**:

* **Business Overview**: The company primarily engages in property leasing and management, as well as hotel operation. Its main revenue sources are office buildings, shopping malls, apartments (investment properties), and hotels.
* **Financial Performance**: The report highlights the company's financial results for the first half of 2024, including revenue, expenses, profit, and cash flow. It also provides analysis of key financial metrics such as net assets and return on equity.
* **Risk Factors**: The report identifies potential risks facing the company, such as economic slowdown, market competition, and property management challenges. The company outlines its strategies to mitigate these risks.
* **Corporate Governance**: The report discusses the composition of the board of directors, senior management team, and corporate governance structure.
* **Other Sections**: The report includes additional sections on environmental and social responsibility, significant events, shareholding information, bond related matters, and financial statements.

**Overall, the report aims to provide a comprehensive overview of China National贸's performance and prospects for the first half of 2024**

I don't know if it will make a difference, but have you considered upgrading to 0.5.4/0.5.5?


@goactiongo commented on GitHub (Jan 14, 2025):

Thanks for the help, I will give you feedback once I've tested with the new version.



@goactiongo commented on GitHub (Jan 14, 2025):

After upgrading to 0.5.4, the parameters are as follows:

```
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_DEBUG=1"
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"
```

For glm4:9b, only a single GPU was used.
The first run succeeded; the log is glm4-success.log.
The second run failed; the log is glm4_failed.log.

For llama3.2, only the CPU was used, and the run succeeded. The log is llama3.2.log.

For llama3.1, three GPUs were used, and the run succeeded. The log is llama3.1.log.

In the log files above, I couldn't find any lines indicating success or failure.

With the previous version of Ollama, success could be seen in the logs, as shown below.

```
10月 19 22:19:34 gpu ollama[60399]: DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=2 tid="139958483587072" timestamp=1729347574
10月 19 22:21:05 gpu ollama[60399]: DEBUG [print_timings] prompt eval time     =   44130.26 ms / 58174 tokens (    0.76 ms per token,  1318.23 tokens per second) | n_prompt_tokens_processed=58174 n_tokens_second=1318.2337027993692 slot_id=0 t_prompt_processing=44130.263 t_token=0.7585908309554096 task_id=2 tid="139958483587072" timestamp=1729347665
10月 19 22:21:05 gpu ollama[60399]: DEBUG [print_timings] generation eval time =   47231.63 ms /   640 runs   (   73.80 ms per token,    13.55 tokens per second) | n_decoded=640 n_tokens_second=13.55024251017226 slot_id=0 t_token=73.7994171875 t_token_generation=47231.627 task_id=2 tid="139958483587072" timestamp=1729347665
```
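Timing lines like these can be filtered out of the system journal. The grep below is demonstrated against a sample line so the pattern itself is verifiable offline; the `ollama` unit name in the commented journalctl line is an assumption, adjust to your install:

```shell
# Pattern matching the runner's timing and health-check lines.
pattern='print_timings|server unhealthy'
sample='DEBUG [print_timings] prompt eval time = 44130.26 ms / 58174 tokens'
echo "$sample" | grep -E "$pattern"
# On the server itself:
#   journalctl -u ollama --no-pager | grep -E 'print_timings|server unhealthy'
```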

[glm4_failed.log](https://github.com/user-attachments/files/18407966/glm4_failed.log)
[glm4-success.log](https://github.com/user-attachments/files/18407967/glm4-success.log)
[llama3.1.log](https://github.com/user-attachments/files/18407968/llama3.1.log)
[llama3.2.log](https://github.com/user-attachments/files/18407969/llama3.2.log)


@rick-github commented on GitHub (Jan 14, 2025):

What was the response from the failed glm4 run? The log indicates a 200 status code so something was sent to the client. Was it incomplete? Incorrect summary? No summary?


@goactiongo commented on GitHub (Jan 14, 2025):

An AI program called Ollama through an AI gateway; the request probably timed out.



@rick-github commented on GitHub (Jan 14, 2025):

How do you know it was a failure?


@goactiongo commented on GitHub (Jan 14, 2025):

The frontend AI program didn't return any information and ended after a while. I checked the logs returned by the AI and it showed "timeout". I'm not sure whether this timeout was returned by the AI, the AI gateway, or Ollama. Judging from the Ollama logs, it doesn't seem to come from Ollama.



@rick-github commented on GitHub (Jan 14, 2025):

A client timeout normally shows up as a 500 status code in the GIN log line.
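One way to verify this is to grep the GIN access-log lines for non-2xx statuses. The line below is a fabricated sample in the GIN log shape, so the filter can be tried offline before running it against the real journal (the `ollama` unit name is an assumption):

```shell
# Flag any 4xx/5xx status in GIN-formatted access-log lines.
sample='[GIN] 2025/01/14 - 17:13:00 | 200 | 3m12s | 10.0.0.5 | POST "/api/generate"'
echo "$sample" | grep -E '\| +(4|5)[0-9]{2} +\|' || echo "no 4xx/5xx status found"
# Against the live log:
#   journalctl -u ollama | grep '\[GIN\]' | grep -E '\| +(4|5)[0-9]{2} +\|'
```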


Reference: github-starred/ollama#51908