[GH-ISSUE #10266] llama2 models hanging, no response. Exception 0xc0000005 signal arrived during external code execution #68797

Closed
opened 2026-05-04 15:12:04 -05:00 by GiteaMirror · 17 comments
Owner

Originally created by @yongkaikai on GitHub (Apr 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10266

What is the issue?

First I ran the llama2 model and typed "Hi" to chat. It then got stuck on loading, and the server log showed the exception "signal arrived during external code execution".
Rebooting, reinstalling, and switching to other models (like deepseek) all reproduced the problem. The docs do not provide a solution.

server.log
gpu: gfx1100 (AMD Radeon RX 7900 XTX)
clinfo.txt

Relevant log output

$ Get-CimInstance -ClassName Win32_VideoController | findstr "Name"
Name                         : AMD Radeon RX 7900 XTX

$ Get-CimInstance -ClassName Win32_Processor | Select-Object -Property Name
Name
----
AMD Ryzen 7 7800X3D 8-Core Processor

$ systeminfo | findstr /B /C:"OS Name" /B /C:"OS Version"
OS Name:                       Microsoft Windows 11 Pro
OS Version:                    10.0.26100 N/A Build 26100
                                                                                                                             
$ ollama --version
ollama version is 0.6.5

$ ollama run llama2
>>> Hi
⠹

// gets stuck on loading and the exception below appears

time=2025-04-14T20:28:57.785+08:00 level=INFO source=server.go:619 msg="llama runner started in 3.51 seconds"
[GIN] 2025/04/14 - 20:28:57 | 200 |    4.1726743s |       127.0.0.1 | POST     "/api/generate"
Exception 0xc0000005 0x0 0x0 0x7ffb816a8c34
PC=0x7ffb816a8c34
signal arrived during external code execution

runtime.cgocall(0x7ff63a4fdf80, 0xc00047dbb8)

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.6.5

GiteaMirror added the bug label 2026-05-04 15:12:04 -05:00
Author
Owner

@rick-github commented on GitHub (Apr 14, 2025):

Full server log will aid in debugging.

Author
Owner

@yongkaikai commented on GitHub (Apr 14, 2025):

Full server log will aid in debugging.

server.log
Here's the full server log in debug mode.

Author
Owner

@rick-github commented on GitHub (Apr 14, 2025):

Failure occurs in llama_decode(). Works fine in Linux+Nvidia:

$ ollama run llama2
>>> Hi
Hello! It's nice to meet you. Is there something I can help you with or 
would you like to chat?

Does the error occur if the GPU is not used?

$ ollama run llama2
>>> /set parameter num_gpu 0
Set parameter 'num_gpu' to '0'
>>> Hi
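The same CPU-only check can also be driven through Ollama's HTTP API instead of the interactive CLI. A minimal sketch, assuming a server on the default 127.0.0.1:11434; `cpu_only_payload` is a made-up helper name, not part of Ollama:

```python
import json

# Build the /api/generate request body that forces CPU-only inference,
# mirroring the CLI's `/set parameter num_gpu 0`.
def cpu_only_payload(model: str, prompt: str) -> str:
    body = {
        "model": model,
        "prompt": prompt,
        "options": {"num_gpu": 0},  # offload 0 layers -> pure CPU
        "stream": False,
    }
    return json.dumps(body)

# POST this to http://127.0.0.1:11434/api/generate with e.g. curl or urllib.
print(cpu_only_payload("llama2", "Hi"))
```

If the request succeeds with `num_gpu: 0` but crashes without it, that points at the GPU path, same as the CLI test above.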
Author
Owner

@yongkaikai commented on GitHub (Apr 14, 2025):

/set parameter num_gpu 0

not happen when set num_gpu 0

$ ollama run llama2
>>> /set parameter num_gpu 0
Set parameter 'num_gpu' to '0'
>>> Hi

Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?

>>> /set parameter num_gpu 1
Set parameter 'num_gpu' to '1'
>>> Hi
Error: POST predict: Post "http://127.0.0.1:3679/completion": read tcp 127.0.0.1:3681->127.0.0.1:3679: wsarecv: An existing connection was forcibly closed by the remote host.
Author
Owner

@rick-github commented on GitHub (Apr 14, 2025):

What's the output of

rocminfo
Author
Owner

@yongkaikai commented on GitHub (Apr 14, 2025):

What's the output of

rocminfo

$ wsl
$ rocminfo
WSL environment detected.

HSA System Attributes

Runtime Version: 1.1
Runtime Ext Version: 1.6
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES

==========
HSA Agents


Agent 1


Name: AMD Ryzen 7 7800X3D 8-Core Processor
Uuid: CPU-XX
Marketing Name: AMD Ryzen 7 7800X3D 8-Core Processor
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
Cacheline Size: 64(0x40)
Internal Node ID: 0
Compute Unit: 16
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
Memory Properties:
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 15974744(0xf3c158) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 15974744(0xf3c158) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 15974744(0xf3c158) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 4
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 15974744(0xf3c158) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:


Agent 2


Name: gfx1100
Marketing Name: AMD Radeon RX 7900 XTX
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 32(0x20) KB
L2: 6144(0x1800) KB
L3: 98304(0x18000) KB
Chip ID: 29772(0x744c)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2371
Internal Node ID: 1
Compute Unit: 96
SIMDs per CU: 2
Shader Engines: 6
Shader Arrs. per Eng.: 2
Coherent Host Access: FALSE
Memory Properties:
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 372
SDMA engine uCode:: 24
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 25084584(0x17ec2a8) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 25084584(0x17ec2a8) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Recommended Granule:0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1100
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***

Author
Owner

@rick-github commented on GitHub (Apr 14, 2025):

Have you tried not running in WSL?

Author
Owner

@yongkaikai commented on GitHub (Apr 14, 2025):

Have you tried not running in WSL?

@rick-github It does not work when run in the native Windows environment:

$ rocminfo
rocminfo : The term 'rocminfo' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ rocminfo
+ ~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (rocminfo:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
    
Author
Owner

@yongkaikai commented on GitHub (Apr 14, 2025):

I previously tried running ollama in a WSL environment, but hit another problem, similar to https://github.com/ollama/ollama/issues/9599

Author
Owner

@rick-github commented on GitHub (Apr 14, 2025):

Just to clarify - you are running ollama in native windows, and ran rocminfo in WSL? In your posts you are using '$' as your prompt indicator, which is usually an indication of a Linux environment, so it's not clear where you are running ollama run llama2.

Author
Owner

@yongkaikai commented on GitHub (Apr 14, 2025):

Have you tried not running in WSL?

@rick-github It does not work when run in the native Windows environment:

rocminfo : The term 'rocminfo' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1
+ rocminfo
+ ~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (rocminfo:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
    

@rick-github I'm running ollama and rocminfo in native windows both.

Author
Owner

@yongkaikai commented on GitHub (Apr 14, 2025):

Just to clarify - you are running ollama in native windows, and ran rocminfo in WSL? In your posts you are using '$' as your prompt indicator, which is usually an indication of a Linux environment, so it's not clear where you are running ollama run llama2.

All of the above is running in native Windows. Please ignore the '$'.

Author
Owner

@yongkaikai commented on GitHub (Apr 16, 2025):

@rick-github Any findings? Please let me know if you need anything further.

Author
Owner

@rick-github commented on GitHub (Apr 17, 2025):

It's not clear from the logs why the runner is failing.

Exception 0xc0000005 0x0 0x0 0x7ffb816a8c34
PC=0x7ffb816a8c34

This indicates something like a SEGV (segmentation violation) occurred at program counter 0x7ffb816a8c34, which usually means the program tried to dereference a pointer that pointed outside of the address space of the process. That's presumably one of the 0x0 values, but since the call stack after `llama_decode()` is to static functions, there's no way to determine where the dereference happened. There are no other errors in the log, so the failed dereference can't be tied back to an earlier failure.

Does this failure occur with other models?
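For context, 0xc0000005 is the Windows NTSTATUS code STATUS_ACCESS_VIOLATION, the Windows counterpart of a SEGV. A tiny sketch decoding it (the lookup table is hand-picked from Microsoft's NTSTATUS reference, not exhaustive):

```python
# A few NTSTATUS exception codes commonly seen in Windows crash logs.
NTSTATUS_NAMES = {
    0xC0000005: "STATUS_ACCESS_VIOLATION",       # bad pointer dereference (SEGV-like)
    0xC0000094: "STATUS_INTEGER_DIVIDE_BY_ZERO",
    0xC00000FD: "STATUS_STACK_OVERFLOW",
}

def describe_ntstatus(code: int) -> str:
    # Fall back to printing the raw code for anything not in the table.
    return NTSTATUS_NAMES.get(code, f"unknown NTSTATUS 0x{code:08x}")

print(describe_ntstatus(0xC0000005))  # STATUS_ACCESS_VIOLATION
```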

Author
Owner

@yongkaikai commented on GitHub (Apr 18, 2025):

It's not clear from the logs why the runner is failing.

Exception 0xc0000005 0x0 0x0 0x7ffb816a8c34
PC=0x7ffb816a8c34

This indicates something like a SEGV (segmentation violation) occurred at program counter 0x7ffb816a8c34, which usually means the program tried to dereference a pointer that pointed outside of the address space of the process. That's presumably one of the 0x0 values, but since the call stack after `llama_decode()` is to static functions, there's no way to determine where the dereference happened. There are no other errors in the log, so the failed dereference can't be tied back to an earlier failure.

Does this failure occur with other models?

@rick-github Thanks for your explanation. I've tried a lot of models, from deepseek-r1:1.5b up to 32b; they all fail with the same error, and the exception code is always 0xc0000005.

Author
Owner

@rick-github commented on GitHub (Apr 18, 2025):

In that case it might not be an inference problem but a system problem. You can try reinstalling ollama and drivers, and running a VRAM test (OCCT: https://www.ocbase.com/occt/personal, gpumemtest: https://www.programming4beginners.com/gpumemtest).

Author
Owner

@yongkaikai commented on GitHub (Apr 18, 2025):

@rick-github Thank you so much for your support and advice. I finally found the root cause: ollama is not compatible with old versions of the VC++ runtime. Using windbg analysis, I found that the exception address belongs to MSVCP140 version 14.0.24215.1 (Visual Studio 2015).
The issue was solved after upgrading to the latest VC++ Redistributable (https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0).
Hope this helps someone else with the same issue.

windbg.txt

Thanks again, you're so kind.
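A sketch of the check implied by this fix: compare the MSVCP140.dll ProductVersion (readable on Windows with `(Get-Item C:\Windows\System32\msvcp140.dll).VersionInfo.ProductVersion` in PowerShell) against the VS2015-era build seen in the windbg analysis. `needs_upgrade` is a hypothetical helper, and the threshold comes from this thread only:

```python
# Version of MSVCP140.dll identified in windbg.txt as the crashing build.
BAD_BUILD = (14, 0, 24215, 1)

def needs_upgrade(product_version: str) -> bool:
    # Tuple comparison handles dotted versions component by component.
    parts = tuple(int(p) for p in product_version.split("."))
    return parts <= BAD_BUILD

print(needs_upgrade("14.0.24215.1"))   # True: the version from this report
print(needs_upgrade("14.38.33135.0"))  # False: a newer redistributable
```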

Reference: github-starred/ollama#68797