[GH-ISSUE #5647] glm4 errors out immediately #50032

Closed
opened 2026-04-28 13:54:16 -05:00 by GiteaMirror · 3 comments

Originally created by @tqangxl on GitHub (Jul 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5647

Originally assigned to: @jmorganca on GitHub.

What is the issue?

![4bdfd020f40e7ad0e3876630271ddb5](https://github.com/user-attachments/assets/ae1fe79a-bbbe-4abb-aa01-3a4a1e9997b9)
![image](https://github.com/user-attachments/assets/6da71707-1777-41eb-af2c-b7fabb30965c)

[WLC-SWISSLOG-SH-haiyan-trap-log.txt](https://github.com/user-attachments/files/16190214/WLC-SWISSLOG-SH-haiyan-trap-log.txt)

2024/07/12 09:21:02 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:D:\Lib\Dev\AI\ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\James\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-12T09:21:02.650+08:00 level=INFO source=images.go:751 msg="total blobs: 87"
time=2024-07-12T09:21:02.653+08:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-12T09:21:02.654+08:00 level=INFO source=routes.go:1080 msg="Listening on [::]:11434 (version 0.2.1)"
time=2024-07-12T09:21:02.657+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-07-12T09:21:02.657+08:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-12T09:21:02.831+08:00 level=INFO source=types.go:103 msg="inference compute" id=GPU-7ace657e-48c4-dfe6-058c-7307a0ea5112 library=cuda compute=7.5 driver=12.5 name="NVIDIA GeForce RTX 2070 with Max-Q Design" total="8.0 GiB" available="7.0 GiB"
[GIN] 2024/07/12 - 12:37:43 | 404 | 3.4748ms | 127.0.0.1 | POST "/api/show"

OS

Device name: DESKTOP-DOE0ADN
Processor: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz 2.59 GHz
Installed RAM: 40.0 GB (39.9 GB usable)


GPU

No response

CPU

No response

Ollama version

ollama -v
ollama version is 0.2.1

GiteaMirror added the bug label 2026-04-28 13:54:16 -05:00

@loveyume520 commented on GitHub (Jul 12, 2024):

Same here, this commit might fix it, even though it is for Qwen2: ggerganov/llama.cpp#8412


@cleverpig commented on GitHub (Jul 12, 2024):

Error: llama runner process has terminated: signal: aborted (core dumped)


@jmorganca commented on GitHub (Jul 12, 2024):

Hi all, the latest version of Ollama, 0.2.2, should fix this. It will be released shortly; in the meantime, you can download it directly here: https://github.com/ollama/ollama/releases

Sorry you hit this issue!
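Since the fix landed in 0.2.2, a quick way to tell whether an install is affected is to compare the `ollama -v` output against that release. The helper below is a hypothetical sketch (not part of Ollama); it only parses the version string shown in this thread:

```python
# Hypothetical helper (not part of Ollama): parse the `ollama -v` output
# line and check it against 0.2.2, the release said to fix this crash.
def version_tuple(version_output: str) -> tuple:
    # "ollama version is 0.2.1" -> (0, 2, 1)
    return tuple(int(part) for part in version_output.strip().split()[-1].split("."))

def has_glm4_fix(version_output: str, fixed=(0, 2, 2)) -> bool:
    # Tuple comparison is lexicographic, so (0, 2, 1) < (0, 2, 2).
    return version_tuple(version_output) >= fixed

print(has_glm4_fix("ollama version is 0.2.1"))  # False -> affected
print(has_glm4_fix("ollama version is 0.2.2"))  # True  -> fixed
```

Feed it the exact line printed by `ollama -v`; anything at or above 0.2.2 should no longer abort on glm4 according to the comment above.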

<!-- gh-comment-id:2225875944 --> @jmorganca commented on GitHub (Jul 12, 2024): Hi all, the latest version of Ollama 0.2.2 should fix this. It will be released shortly, however you can download it directly here: https://github.com/ollama/ollama/releases Sorry you hit this issue!

Reference: github-starred/ollama#50032