[GH-ISSUE #8886] signal arrived during external code execution #52272

Closed
opened 2026-04-28 22:47:03 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @ican2002 on GitHub (Feb 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8886

What is the issue?

Can anyone help resolve this issue? Thanks.

CPU: Intel i7-6700HQ
OS: Windows 10
GPU: 960M

It seems the CPU and GPU were detected; the log shows "Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu]"

and a cgo-related problem, as the log shows:
runtime.cgocall(0x7ff6bdc60920, 0xc0003f4c10)
runtime/cgocall.go:167 +0x3e fp=0xc0003f4be8 sp=0xc0003f4b80 pc=0x7ff6bcea9c3e

It seems many people are facing this problem.

If anyone has resolved it, please reply here, thank you.

Some info for reference, in case it's useful: I found other discussions about cgocall exceptions which say that if cgo calls a C subroutine that uses a lot of CPU and thread-local storage while another goroutine calls fmt.Print() at the same time, it can cause an exception. One suggestion is to call runtime.LockOSThread() to lock the cgo goroutine to its OS thread and runtime.UnlockOSThread() to unlock it afterwards. NOT sure whether it works.

Thanks.

Relevant log output

2025/02/05 20:47:23 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\can\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-05T20:47:23.684+08:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-05T20:47:23.685+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-05T20:47:23.686+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-05T20:47:23.687+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu]"
time=2025-02-05T20:47:23.687+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-05T20:47:23.687+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-05T20:47:23.687+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=4 efficiency=0 threads=8
Exception 0xc0000005 0x0 0x10 0x7ffcaca97983
PC=0x7ffcaca97983
signal arrived during external code execution

runtime.cgocall(0x7ff6bdc60920, 0xc0003f4c10)
runtime/cgocall.go:167 +0x3e fp=0xc0003f4be8 sp=0xc0003f4b80 pc=0x7ff6bcea9c3e
github.com/ollama/ollama/discover._Cfunc_nvml_init(0x2041ea093d0, 0xc00004f440)
_cgo_gotypes.go:573 +0x4d fp=0xc0003f4c10 sp=0xc0003f4be8 pc=0x7ff6bd476f8d
github.com/ollama/ollama/discover.loadNVMLMgmt.func2(0x2041ea093d0, 0xc00004f440)
github.com/ollama/ollama/discover/gpu.go:651 +0x4a fp=0xc0003f4c40 sp=0xc0003f4c10 pc=0x7ff6bd47e68a
github.com/ollama/ollama/discover.loadNVMLMgmt({0xc00004f400, 0x3, 0x7ff6be8b9410?})
github.com/ollama/ollama/discover/gpu.go:651 +0x245 fp=0xc0003f4d30 sp=0xc0003f4c40 pc=0x7ff6bd47e4c5
github.com/ollama/ollama/discover.initCudaHandles()
github.com/ollama/ollama/discover/gpu.go:118 +0x4fa fp=0xc0003f4f98 sp=0xc0003f4d30 pc=0x7ff6bd477a3a
github.com/ollama/ollama/discover.GetGPUInfo()
github.com/ollama/ollama/discover/gpu.go:262 +0x705 fp=0xc0003f5ae0 sp=0xc0003f4f98 pc=0x7ff6bd478b45
github.com/ollama/ollama/server.Serve({0x7ff6be099760, 0xc000608a80})
github.com/ollama/ollama/server/routes.go:1274 +0x8aa fp=0xc0003f5d18 sp=0xc0003f5ae0 pc=0x7ff6bda2e94a
github.com/ollama/ollama/cmd.RunServer(0xc00062a400?, {0x7ff6be955020?, 0x4?, 0x7ff6bdeda1ef?})
github.com/ollama/ollama/cmd/cmd.go:1033 +0x4a fp=0xc0003f5d58 sp=0xc0003f5d18 pc=0x7ff6bda5daaa
github.com/spf13/cobra.(*Command).execute(0xc0000bc608, {0x7ff6be955020, 0x0, 0x0})
github.com/spf13/cobra@v1.7.0/command.go:940 +0x862 fp=0xc0003f5e78 sp=0xc0003f5d58 pc=0x7ff6bd02c122
github.com/spf13/cobra.(*Command).ExecuteC(0xc00008b508)
github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc0003f5f30 sp=0xc0003f5e78 pc=0x7ff6bd02c965
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
github.com/ollama/ollama/main.go:12 +0x4d fp=0xc0003f5f50 sp=0xc0003f5f30 pc=0x7ff6bda65c8d
runtime.main()
runtime/proc.go:272 +0x27d fp=0xc0003f5fe0 sp=0xc0003f5f50 pc=0x7ff6bce7dfbd
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0003f5fe8 sp=0xc0003f5fe0 pc=0x7ff6bceb8921

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the needs more info, bug, nvidia labels 2026-04-28 22:47:05 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 6, 2025):

The runner crashed in [`nvml_init`](https://github.com/ollama/ollama/blob/1c198977ecdd471aee827a378080ace73c02fa8d/discover/gpu_info_nvml.c#L7). It's mostly trying to load the driver libraries. You are running an old card on an old operating system, have you tried updating the drivers?

There might be more relevant info in the logs if you set OLLAMA_DEBUG=1 in the server environment.


Reference: github-starred/ollama#52272