[GH-ISSUE #5280] Bug: Ollama keeps crashing & switching num-ctx context length back to default during usage #29069

Closed
opened 2026-04-22 07:43:06 -05:00 by GiteaMirror · 4 comments

Originally created by @Nulled-Out on GitHub (Jun 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5280

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

It seems like Ollama (non-Docker) models crash and restart while any output is being processed.

![image](https://github.com/ggerganov/llama.cpp/assets/92701426/2f164d1e-a330-4bc1-9c67-ca8d9468b651)
With 70k Context:

Jun 23 20:18:29 main ollama[7231]: llm_load_tensors: offloading 9 repeating layers to GPU
Jun 23 20:18:29 main ollama[7231]: llm_load_tensors: offloaded 9/28 layers to GPU
Jun 23 20:18:29 main ollama[7231]: llm_load_tensors:      ROCm0 buffer size =  2827.64 MiB
Jun 23 20:18:29 main ollama[7231]: llm_load_tensors:        CPU buffer size =  5685.15 MiB
Jun 23 20:18:30 main ollama[7231]: llama_new_context_with_model: n_ctx      = 70016
Jun 23 20:18:30 main ollama[7231]: llama_new_context_with_model: n_batch    = 512
Jun 23 20:18:30 main ollama[7231]: llama_new_context_with_model: n_ubatch   = 512
Jun 23 20:18:30 main ollama[7231]: llama_new_context_with_model: flash_attn = 0
Jun 23 20:18:30 main ollama[7231]: llama_new_context_with_model: freq_base  = 10000.0
Jun 23 20:18:30 main ollama[7231]: llama_new_context_with_model: freq_scale = 0.025
Jun 23 20:18:30 main ollama[7231]: llama_kv_cache_init:      ROCm0 KV buffer size =  6153.75 MiB
Jun 23 20:18:33 main ollama[7231]: llama_kv_cache_init:  ROCm_Host KV buffer size = 12307.50 MiB
Jun 23 20:18:33 main ollama[7231]: llama_new_context_with_model: KV self size  = 18461.25 MiB, K (f16): 11076.75 MiB, V (f16): 7384.50 MiB
Jun 23 20:18:33 main ollama[7231]: llama_new_context_with_model:  ROCm_Host  output buffer size =     0.40 MiB
Jun 23 20:18:33 main ollama[7231]: llama_new_context_with_model:      ROCm0 compute buffer size =  3168.63 MiB
Jun 23 20:18:33 main ollama[7231]: llama_new_context_with_model:  ROCm_Host compute buffer size =   146.76 MiB

Down to 2k Context:

Jun 23 20:21:45 main ollama[7231]: llm_load_tensors: offloading 27 repeating layers to GPU
Jun 23 20:21:45 main ollama[7231]: llm_load_tensors: offloading non-repeating layers to GPU
Jun 23 20:21:45 main ollama[7231]: llm_load_tensors: offloaded 28/28 layers to GPU
Jun 23 20:21:45 main ollama[7231]: llm_load_tensors:      ROCm0 buffer size =  8400.30 MiB
Jun 23 20:21:45 main ollama[7231]: llm_load_tensors:        CPU buffer size =   112.50 MiB
Jun 23 20:21:47 main ollama[7231]: llama_new_context_with_model: n_ctx      = 2048
Jun 23 20:21:47 main ollama[7231]: llama_new_context_with_model: n_batch    = 512
Jun 23 20:21:47 main ollama[7231]: llama_new_context_with_model: n_ubatch   = 512
Jun 23 20:21:47 main ollama[7231]: llama_new_context_with_model: flash_attn = 0
Jun 23 20:21:47 main ollama[7231]: llama_new_context_with_model: freq_base  = 10000.0
Jun 23 20:21:47 main ollama[7231]: llama_new_context_with_model: freq_scale = 0.025
Jun 23 20:21:48 main ollama[7231]: llama_kv_cache_init:      ROCm0 KV buffer size =   540.00 MiB
Jun 23 20:21:48 main ollama[7231]: llama_new_context_with_model: KV self size  =  540.00 MiB, K (f16):  324.00 MiB, V (f16):  216.00 MiB
Jun 23 20:21:48 main ollama[7231]: llama_new_context_with_model:  ROCm_Host  output buffer size =     0.40 MiB
Jun 23 20:21:48 main ollama[7231]: llama_new_context_with_model:      ROCm0 compute buffer size =   212.00 MiB
Jun 23 20:21:48 main ollama[7231]: llama_new_context_with_model:  ROCm_Host compute buffer size =     8.01 MiB
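As a sanity check on the two runs above: with f16 K and V, the KV cache grows linearly with `n_ctx`, so the 2k figure predicts the 70k figure exactly:

```python
# KV cache size scales linearly with context length (f16 K and V).
kv_mib_at_2048 = 540.00                     # "KV self size" from the 2k-context log
per_token_mib = kv_mib_at_2048 / 2048       # MiB of KV cache per context token
kv_mib_at_70016 = per_token_mib * 70016
print(kv_mib_at_70016)                      # 18461.25, matching the 70k-context log
```

So raising `num_ctx` from 2048 to 70016 multiplies the KV cache by ~34x, which is why layer offload drops from 28/28 to 9/28 and 12 GiB of KV spills to host memory.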

I initially thought it was a model or memory issue because of

Jun 12 17:17:20 main ollama[20275]: time=2024-06-12T17:17:20.451+01:00 level=WARN source=sched.go:511 msg="gpu VRAM usage didn't recover within timeout" seconds=5.258364717

but it kept crashing on every model I tried, even after I shrank the context length and the memory warnings stopped appearing in the logs.
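On the `num_ctx`-resetting side of the title: a value set with `/set parameter num_ctx ...` inside an interactive `ollama run` session only lasts for that session, so a model reload can come back at the default. One way to pin it is to bake the parameter into a Modelfile (the model name and value below are just placeholders for illustration):

```
# Hypothetical Modelfile; "llama3" and 70016 are placeholders
FROM llama3
PARAMETER num_ctx 70016
```

and then build a named variant with `ollama create llama3-70k -f Modelfile`, which keeps the context length across reloads.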

[logs.zip](https://github.com/user-attachments/files/15977181/logs.zip)

GiteaMirror added the bug label 2026-04-22 07:43:06 -05:00

@Nulled-Out commented on GitHub (Jun 25, 2024):

OS: Ubuntu 22.04 (Kubuntu)
CPU: Ryzen 5800X3D
RAM: 64GB
GPU: RX 7800XT (16gb vram) (rocm)

ROCk module version 6.7.0 is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.13
Runtime Ext Version:     1.4
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD Ryzen 7 5800X3D 8-Core Processor
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD Ryzen 7 5800X3D 8-Core Processor
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   3400                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            16                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    65759484(0x3eb68fc) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    65759484(0x3eb68fc) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65759484(0x3eb68fc) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1101                            
  Uuid:                    GPU-6a15db6e7ae3f79d               
  Marketing Name:          AMD Radeon RX 7800 XT              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      4096(0x1000) KB                    
    L3:                      65536(0x10000) KB                  
  Chip ID:                 29822(0x747e)                      
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   2254                               
  BDFID:                   3584                               
  Internal Node ID:        1                                  
  Compute Unit:            60                                 
  SIMDs per CU:            2                                  
  Shader Engines:          3                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 132                                
  SDMA engine uCode::      21                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    16760832(0xffc000) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    16760832(0xffc000) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1101         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 


@dhiltgen commented on GitHub (Jul 24, 2024):

Skimming through some of your logs, I see signs of system memory exhaustion

Jun 06 22:46:41 main systemd[1]: ollama.service: A process of this unit has been killed by the OOM killer.

I also see you're running 0.1.41, so upgrading might be helpful. We've added some safeguards to try to avoid loading models that have no chance of fitting into the available system memory + VRAM.
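For a rough feel of that fit check, summing the buffers Ollama logged for the 70k-context run above (these are only the logged allocations; real usage adds allocator and runtime overhead on top):

```python
# Buffer sizes (MiB) taken from the 70k-context log above.
gpu = 2827.64 + 6153.75 + 3168.63          # weights + KV + compute on ROCm0
host = 5685.15 + 12307.50 + 146.76 + 0.40  # weights + KV + compute + output on host
print(round(gpu, 2), round(host, 2))       # ~12150.02 MiB GPU, ~18139.81 MiB host
```

On paper both fit the 16 GiB of VRAM and 64 GiB of RAM here, so the OOM kill points at pressure from overhead or other processes on top of these logged buffers.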


@Nulled-Out commented on GitHub (Jul 25, 2024):

Yeah, I'll admit I got a bit too ambitious in spreading such a big load across the GPU/CPU & RAM/VRAM, but it happened after that in less intensive settings too, across different updates.

Unfortunately I no longer use the setup I had when filing this bug report :/ so I might not be much help in terms of getting new information, other than copying everything I posted in the Discord about it.


@dhiltgen commented on GitHub (Jul 26, 2024):

OK. I think we should do a better job not loading a model when the system is under so much stress, so I'll go ahead and close this one then, but if the problem comes back, please share updated logs and I'll reopen.

Reference: github-starred/ollama#29069