[GH-ISSUE #6936] glm4 model return "GGGGGGGGGG" #4392

Closed
opened 2026-04-12 15:19:58 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @shuye-cheung on GitHub (Sep 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6936

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

After deploying the GLM4 model with ollama, the model returns a series of "GGGG".

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.3

GiteaMirror added the nvidia, needs more info, bug labels 2026-04-12 15:19:59 -05:00

@dhiltgen commented on GitHub (Sep 24, 2024):

I was able to load glm4 on an NVIDIA GPU on Linux and get good results. Sometimes this sort of gibberish response can be the result of loading too many layers, leading to subtle memory corruption on the GPU. Can you share your server log so we can see what might be going wrong?

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
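
A minimal sketch of the diagnostics suggested above, assuming a systemd-based Linux install and the default Ollama REST endpoint; the prompt and the `num_gpu` value of 20 are illustrative only:

```shell
# Collect the server log (per the troubleshooting doc linked above)
journalctl -u ollama --no-pager > ollama-server.log

# Experiment: offload fewer layers to the GPU to rule out VRAM exhaustion.
# 20 is an arbitrary example value; lower it further if output is still garbled.
curl http://localhost:11434/api/generate -d '{
  "model": "glm4",
  "prompt": "Hello",
  "options": { "num_gpu": 20 }
}'
```

If the garbled "GGGG" output stops once fewer layers are offloaded, that points at GPU memory pressure rather than a problem with the model itself.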


@wszgrcy commented on GitHub (Sep 26, 2024):

> What is the issue?
> After deploying the GLM4 model with ollama, the model returns a series of "GGGG".
>
> OS
> Linux
>
> GPU
> Nvidia
>
> CPU
> Intel
>
> Ollama version
> 0.3.3

I suggest updating to the latest version and trying again. I recall that llama.cpp recently fixed a GLM4 issue; I'm not sure whether ollama has merged that fix yet.
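
A sketch of that suggestion on Linux, assuming the official install script is used for upgrades; the referenced llama.cpp fix would only help if it has landed in the release you end up with:

```shell
# Upgrade an existing Linux install by re-running the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the new version
ollama -v

# Re-pull the model so any updated conversion or template is picked up
ollama pull glm4
```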

Reference: github-starred/ollama#4392