[GH-ISSUE #5351] gguf success, but run error #29110

Closed
opened 2026-04-22 07:45:43 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @enryteam on GitHub (Jun 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5351

What is the issue?

ollama create glm-4-9b-chat -f ./Modelfile-glm
transferring model data
using existing layer sha256:d7cd056b858a46ad875a4abb7b0d7cf8cde26ac1f975c18b97175fbfdb809acb
using existing layer sha256:821004920baf42135ce3fd33c72eb1022fc0215a569ea7b90337a9bf92f23294
creating new layer sha256:2dcaf84fc5b358793d9451b614605bcb5fec302c14166644fd77e3503ca5dcf4
creating new layer sha256:f15127bec7ed18496fc730bea8d4f052b33089396bbbf88603968323d2f1f88f
writing manifest
success

ollama run glm-4-9b-chat
Error: llama runner process has terminated: signal: aborted (core dumped)

GGUF download:
https://modelscope.cn/api/v1/models/LLM-Research/glm-4-9b-chat-GGUF/repo?Revision=master&FilePath=glm-4-9b-chat.Q6_K.gguf

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.36

GiteaMirror added the bug label 2026-04-22 07:45:43 -05:00
Author
Owner

@Forevery1 commented on GitHub (Jun 28, 2024):

https://github.com/ggerganov/llama.cpp/pull/8031

Author
Owner

@yinjianjie commented on GitHub (Jul 2, 2024):

How can this problem be solved? Could someone post a detailed, step-by-step tutorial?

Author
Owner

@pdevine commented on GitHub (Jul 3, 2024):

@yinjianjie unfortunately we don't have support in Ollama yet for the glm3/glm4 models. I think llama.cpp is going to merge a change soon which will make this work.

Author
Owner

@pdevine commented on GitHub (Jul 8, 2024):

Going to close this in favor of #4826. This is merging soon.

Reference: github-starred/ollama#29110