[GH-ISSUE #15734] Ollama not working on Mac M5 #72092

Closed
opened 2026-05-05 03:26:53 -05:00 by GiteaMirror · 8 comments

Originally created by @pranitmodi on GitHub (Apr 21, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15734

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Keep getting this error - 500 Internal Server Error: llama runner process has terminated: %!w(<nil>)

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-05-05 03:26:53 -05:00

@cl93a commented on GitHub (Apr 21, 2026):

Same issue here. I wanted to try out Gemma4 on MLX, but all models (not just MLX) fail on Ollama 0.21.0. Models fail to load with a 500 error. Logs show:

ggml_metal_init: picking default device: Apple M5
signal arrived during cgo execution
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_get_default_buffer_type(0x0)
fault 0x1926755b0
llama runner terminated: exit status 2
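The stack line above shows ggml_backend_get_default_buffer_type being called with a null pointer (0x0) across the cgo boundary, which trips GGML's assert and aborts the whole runner. A minimal Go sketch of that failure pattern, with hypothetical stand-in types (not the actual ollama/ggml API): backend init fails, returns nil, and the nil handle must be checked before use or the buffer-type call crashes.

```go
package main

import (
	"errors"
	"fmt"
)

// Backend is a hypothetical stand-in for a ggml compute backend handle.
type Backend struct{ name string }

// initMetalBackend mimics ggml_backend_metal_device_init failing on an
// unsupported device: it returns nil plus an error instead of a handle.
func initMetalBackend(supported bool) (*Backend, error) {
	if !supported {
		return nil, errors.New("failed to allocate context")
	}
	return &Backend{name: "Metal"}, nil
}

// bufferType mimics ggml_backend_get_default_buffer_type: dereferencing a
// nil backend here is the crash seen in the logs
// (GGML_ASSERT(backend) failed).
func bufferType(b *Backend) string {
	return b.name + " buffer"
}

func main() {
	b, err := initMetalBackend(false)
	if err != nil {
		// Checking the error before use avoids the cgo-side abort.
		fmt.Println("init failed:", err)
		return
	}
	fmt.Println(bufferType(b))
}
```

The point is that the crash is a symptom: the real failure happens earlier, when Metal init fails on the M5, and the nil result is passed along unchecked.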


@mverrilli commented on GitHub (Apr 21, 2026):

~~I am guessing #15581 will address this.~~ Posted this to the wrong issue.


@dhiltgen commented on GitHub (Apr 22, 2026):

@cl93a can you verify this fails for you on MLX? What model and tag were you trying to run? So far, this seems to be a GGML specific defect.


@cl93a commented on GitHub (Apr 22, 2026):

@dhiltgen — Confirmed: MLX works, GGML does not on M5 (Ollama v0.21.1). I was confused that :latest did not automatically pick the MLX build.

Tested:

gemma4:e2b-mlx-bf16 ✅ loads and runs fine
gemma4:e4b-mlx-bf16 ✅ loads and runs fine
gemma4:latest (Q4_K_M, GGML) ❌ crashes, even after a full ollama rm + re-pull, ruling out a corrupted download
gemma3:4b / gemma3:12b (GGML) ❌ same crash

All GGML failures hit ggml_metal_init: picking default device: Apple M5 → signal arrived during cgo execution → fault 0x1926755b0 → exit status 2. It appears MLX is the only working backend on M5 right now.
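The confusion above, that :latest did not auto-pick MLX, comes down to tag resolution: the default tag maps to the GGML build even on a device where that backend is broken. A hypothetical sketch of the preference logic a workaround script could apply (this is not ollama's actual tag resolver; names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// pickTag prefers an MLX-tagged variant over "latest" when the GGML backend
// is known to be broken on the current device; otherwise it keeps the
// default. Hypothetical helper, not part of the ollama CLI.
func pickTag(tags []string, ggmlBroken bool) string {
	if ggmlBroken {
		for _, t := range tags {
			if strings.Contains(t, "mlx") {
				return t // first MLX variant found
			}
		}
	}
	return "latest"
}

func main() {
	tags := []string{"latest", "e2b-mlx-bf16", "e4b-mlx-bf16"}
	fmt.Println(pickTag(tags, true))  // prefers the MLX tag on M5
	fmt.Println(pickTag(tags, false)) // keeps the default elsewhere
}
```

In practice this just means pulling the mlx-tagged variant explicitly (as in the test matrix above) until the GGML Metal path is fixed.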


@cerdman commented on GitHub (Apr 23, 2026):

Believe I am seeing the same issue:

ollama run gpt-oss:20b --verbose
or
ollama run medgemma1.5 --verbose

ggml_metal_device_init: error: failed to create library
ggml_metal_rsets_init: creating a residency set collection (keep_alive = 180 s)
ggml_metal_device_init: GPU name:   Apple M5 Pro
ggml_metal_device_init: GPU family: MTLGPUFamilyApple10  (1010)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal4  (5002)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: has tensor            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 40200.90 MB
load_backend: loaded CPU backend from /Applications/Ollama.app/Contents/Resources/libggml-cpu.so
time=2026-04-23T01:26:41.931-07:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.FP16_VA=1 CPU.1.DOTPROD=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M5 Pro
ggml_metal_init: the device does not have a precompiled Metal library - this is unexpected
ggml_metal_init: will try to compile it on the fly
...
ggml_metal_init: error: failed to initialize the Metal library
ggml_backend_metal_device_init: error: failed to allocate context
ggml-backend.cpp:258: GGML_ASSERT(backend) failed
WARNING: Using native backtrace. Set GGML_BACKTRACE_LLDB for more info.
WARNING: GGML_BACKTRACE_LLDB may cause native MacOS Terminal.app to crash.
See: https://github.com/ggml-org/llama.cpp/pull/17869
0   ollama                              0x0000000103620988 ggml_print_backtrace + 276
1   ollama                              0x0000000103620b74 ggml_abort + 156
2   ollama                              0x000000010363a474 ggml_backend_get_default_buffer_type + 76
3   ollama                              0x00000001035b6768 _cgo_c81fd19bee02_Cfunc_ggml_backend_get_default_buffer_type + 36
4   ollama                              0x00000001026d86ac ollama + 509612
SIGABRT: abort
PC=0x19ce4d5b0 m=5 sigcode=0
signal arrived during cgo execution
....
goroutine 12 gp=0x140000b7340 m=nil [sync.WaitGroup.Wait]:
runtime.gopark(0x10497bf20?, 0x1026785cc?, 0xc0?, 0xa0?, 0x102669e38?)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/proc.go:435 +0xc8 fp=0x1400008aa90 sp=0x1400008aa70 pc=0x1026d0248
runtime.goparkunlock(...)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/proc.go:441
runtime.semacquire1(0x140002aafb8, 0x0, 0x1, 0x0, 0x18)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/sema.go:188 +0x204 fp=0x1400008aae0 sp=0x1400008aa90 pc=0x1026b0cf4
sync.runtime_SemacquireWaitGroup(0x0?)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/sema.go:110 +0x2c fp=0x1400008ab20 sp=0x1400008aae0 pc=0x1026d1cbc
sync.(*WaitGroup).Wait(0x140002aafb0)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/sync/waitgroup.go:118 +0x70 fp=0x1400008ab40 sp=0x1400008ab20 pc=0x1026e42b0
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0x140002aaf00, {0x103ea5420, 0x1400052d720})
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:442 +0x38 fp=0x1400008afa0 sp=0x1400008ab40 pc=0x102c74988
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap1()
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1430 +0x30 fp=0x1400008afd0 sp=0x1400008afa0 pc=0x102c7c8a0
runtime.goexit({})
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400008afd0 sp=0x1400008afd0 pc=0x1026d88b4
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
	/Users/runner/work/ollama/ollama/runner/ollamarunner/runner.go:1430 +0x448

goroutine 51 gp=0x14000102fc0 m=nil [IO wait]:
runtime.gopark(0xffffffffffffffff?, 0xffffffffffffffff?, 0x23?, 0x0?, 0x1026f44b0?)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/proc.go:435 +0xc8 fp=0x1400018cd80 sp=0x1400018cd60 pc=0x1026d0248
runtime.netpollblock(0x0?, 0x0?, 0x0?)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/netpoll.go:575 +0x158 fp=0x1400018cdc0 sp=0x1400018cd80 pc=0x102695ca8
internal/poll.runtime_pollWait(0x14fe745f8, 0x72)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/netpoll.go:351 +0xa0 fp=0x1400018cdf0 sp=0x1400018cdc0 pc=0x1026cf400
internal/poll.(*pollDesc).wait(0x1400062a300?, 0x1400009a041?, 0x0)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x1400018ce20 sp=0x1400018cdf0 pc=0x102750868
internal/poll.(*pollDesc).waitRead(...)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x1400062a300, {0x1400009a041, 0x1, 0x1})
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/internal/poll/fd_unix.go:165 +0x1fc fp=0x1400018cec0 sp=0x1400018ce20 pc=0x102751b1c
net.(*netFD).Read(0x1400062a300, {0x1400009a041?, 0x1400018cf58?, 0x102991994?})
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/fd_posix.go:55 +0x28 fp=0x1400018cf10 sp=0x1400018cec0 pc=0x1027c3a48
net.(*conn).Read(0x14000120990, {0x1400009a041?, 0x0?, 0x0?})
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/net.go:194 +0x34 fp=0x1400018cf60 sp=0x1400018cf10 pc=0x1027d0914
net/http.(*connReader).backgroundRead(0x1400009a030)
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:690 +0x40 fp=0x1400018cfb0 sp=0x1400018cf60 pc=0x102991890
net/http.(*connReader).startBackgroundRead.gowrap2()
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:686 +0x28 fp=0x1400018cfd0 sp=0x1400018cfb0 pc=0x102991778
runtime.goexit({})
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/runtime/asm_arm64.s:1223 +0x4 fp=0x1400018cfd0 sp=0x1400018cfd0 pc=0x1026d88b4
created by net/http.(*connReader).startBackgroundRead in goroutine 13
	/Users/runner/hostedtoolcache/go/1.24.1/arm64/src/net/http/server.go:686 +0xc4

r0      0x0
r1      0x0
r2      0x0
r3      0x0
r4      0x19cd8fa08
r5      0x16efb5d20
r6      0x32
r7      0x0
r8      0x5665feb11f714189
r9      0x5665feb0718a3189
r10     0x2
r11     0x10000000000
r12     0xfffffffd
r13     0x0
r14     0x0
r15     0x0
r16     0x148
r17     0x20a70cfc0
r18     0x0
r19     0x6
r20     0x1d03
r21     0x16efb70e0
r22     0x0
r23     0x0
r24     0x0
r25     0x14000051c98
r26     0x103e8bbc8
r27     0x818
r28     0x140000036c0
r29     0x16efb6610
lr      0x19ce87888
sp      0x16efb65f0
pc      0x19ce4d5b0
fault   0x19ce4d5b0
time=2026-04-23T01:26:42.943-07:00 level=ERROR source=server.go:1219 msg="do load request" error="Post \"http://127.0.0.1:62772/load\": EOF"
time=2026-04-23T01:26:42.943-07:00 level=ERROR source=server.go:316 msg="llama runner terminated" error="exit status 2"
time=2026-04-23T01:26:42.944-07:00 level=ERROR source=server.go:1219 msg="do load request" error="Post \"http://127.0.0.1:62772/load\": dial tcp 127.0.0.1:62772: connect: connection refused"
time=2026-04-23T01:26:42.944-07:00 level=INFO source=sched.go:511 msg="Load failed" model=/Users/cerdman/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb error="model failed to load, this may be due to resource limitations or an internal error, check ollama server logs for details"


@stan581994 commented on GitHub (Apr 30, 2026):

Having the same issue. I hope this one will resolve soon


@MM8i commented on GitHub (Apr 30, 2026):

> Having the same issue. I hope this one will resolve soon

What worked for me was updating to Tahoe 26.4.1. I read about it in another issue.


@stan581994 commented on GitHub (May 1, 2026):

Amazing, works now! Thank you

> > Having the same issue. I hope this one will resolve soon
>
> What worked for me was updating to Tahoe 26.4.1. I read it in another issue


Reference: github-starred/ollama#72092