[GH-ISSUE #13808] GLM-4.7-Flash bad gating function. #34804

Closed
opened 2026-04-22 18:40:58 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @itzpingcat on GitHub (Jan 21, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13808

What is the issue?

https://github.com/ggml-org/llama.cpp/pull/18980

This is being patched in llama.cpp right now.
Ollama's Go engine needs this fix too.
This has been causing much of the instability we see in GLM-4.7-Flash right now.

Relevant log output


OS

Windows

GPU

NVIDIA 5060 TI 16gb

CPU

AMD

Ollama version

0.14.3

GiteaMirror added the bug label 2026-04-22 18:40:58 -05:00
Author
Owner

@itzpingcat commented on GitHub (Jan 21, 2026):

My bad, I checked the Ollama code and it uses sigmoid. How odd.


Reference: github-starred/ollama#34804