[GH-ISSUE #13805] GLM-4.7-Flash | Ollama 0.14.3-rc3 #34803

Closed
opened 2026-04-22 18:40:55 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @Burnarz on GitHub (Jan 20, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13805

What is the issue?

Hello,

As the title says, I'm testing Ollama 0.14.3-rc3, specifically with GLM-4.7-Flash.

Here's what I get:

```
$ ollama -v
ollama version is 0.14.3-rc3

$ ollama ps
NAME                  ID            SIZE   PROCESSOR        CONTEXT  UNTIL
glm-4.7-flash:latest  ff14144f31df  51 GB  52%/48% CPU/GPU  32768    Forever
```
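The 52%/48% CPU/GPU split already shows the load spilling out of VRAM. To cross-check that against what the card actually holds (a quick sanity check, assuming `nvidia-smi` from the NVIDIA driver is on PATH):

```
# Show per-GPU VRAM usage vs. capacity (here, the single RTX 3090)
nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv
```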

Here's my "systemctl edit ollama.service":

### Anything between here and the comment below will become the contents of the drop-in file

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_CONTEXT_LENGTH=32768"
Environment="OLLAMA_FLASH_ATTENTION=1"

### Edits below this comment will be discarded


### /etc/systemd/system/ollama.service
# [Unit]
# Description=Ollama Service
# After=network-online.target
# 
# [Service]
# ExecStart=/usr/local/bin/ollama serve
# User=ollama
# Group=ollama
# Restart=always
# RestartSec=3
# Environment="PATH=/home/burnarz/.local/bin:/home/burnarz/.cargo/bin:/home/burnarz/.cargo/bin:/home/burnarz/.cargo/bin:/home/burnarz/.cargo/bin:/home/burnarz/.c>
# 
# [Install]
# WantedBy=default.target
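The drop-in only takes effect after a reload and restart; to confirm the variables actually reached the service (standard systemd, nothing Ollama-specific):

```
sudo systemctl daemon-reload
sudo systemctl restart ollama
# The Environment= property should list the four OLLAMA_* variables above
systemctl show ollama -p Environment
```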

Did I miss something?
Isn't this model supposed to fit in 24 GB of VRAM like the Qwen3 version?
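Reading the log below: the weights themselves would fit (8.4 GiB on GPU + 9.0 GiB on CPU ≈ 17.4 GiB), but the default f16 KV cache at 32768 context adds another 13.8 + 15.6 ≈ 29.4 GiB, which together with ~0.9 GiB of compute graphs makes up the 47.7 GiB total (≈ 51 GB as reported by `ollama ps`). Since `OLLAMA_KV_CACHE_TYPE` is empty in the server config and flash attention is already on, one thing worth trying (a sketch only; whether this build supports KV-cache quantization for the glm4moelite architecture is an assumption on my part) would be adding to the same drop-in:

```
[Service]
# Assumption: q8_0 roughly halves the f16 KV cache; q4_0 shrinks it further
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
```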

Relevant log output

```shell
journalctl -u ollama --no-pager --follow --pager-end
janv. 20 23:13:12 jarvis-server systemd[1]: Started ollama.service - Ollama Service.
janv. 20 23:13:12 jarvis-server ollama[1232]: time=2026-01-20T23:13:12.982Z level=INFO source=routes.go:1629 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:32768 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
janv. 20 23:13:12 jarvis-server ollama[1232]: time=2026-01-20T23:13:12.996Z level=INFO source=images.go:501 msg="total blobs: 44"
janv. 20 23:13:12 jarvis-server ollama[1232]: time=2026-01-20T23:13:12.997Z level=INFO source=images.go:508 msg="total unused blobs removed: 0"
janv. 20 23:13:12 jarvis-server ollama[1232]: time=2026-01-20T23:13:12.997Z level=INFO source=routes.go:1682 msg="Listening on [::]:11434 (version 0.14.3-rc3)"
janv. 20 23:13:12 jarvis-server ollama[1232]: time=2026-01-20T23:13:12.998Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
janv. 20 23:13:13 jarvis-server ollama[1232]: time=2026-01-20T23:13:13.000Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 46777"
janv. 20 23:13:13 jarvis-server ollama[1232]: time=2026-01-20T23:13:13.330Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41863"
janv. 20 23:13:13 jarvis-server ollama[1232]: time=2026-01-20T23:13:13.533Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
janv. 20 23:13:13 jarvis-server ollama[1232]: time=2026-01-20T23:13:13.533Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 45103"
janv. 20 23:13:13 jarvis-server ollama[1232]: time=2026-01-20T23:13:13.667Z level=INFO source=types.go:42 msg="inference compute" id=GPU-d0364f00-33d1-a9a6-d173-85839f5f872c filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3090" libdirs=ollama,cuda_v12 driver=12.8 pci_id=0000:26:00.0 type=discrete total="24.0 GiB" available="23.6 GiB"
janv. 20 23:14:13 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:14:13 | 200 |    7.321588ms |       127.0.0.1 | GET      "/api/version"
janv. 20 23:15:04 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:15:04 | 200 |      16.859µs |       127.0.0.1 | HEAD     "/"
janv. 20 23:15:05 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:15:05 | 200 |  188.717743ms |       127.0.0.1 | POST     "/api/show"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.306Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38973"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.641Z level=INFO source=server.go:245 msg="enabling flash attention"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.641Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-cb6dfd23d780fa40505f9043ae62c7da85e6ec617cfa921dcd9dbb5a8e64ec67 --port 37871"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.642Z level=INFO source=sched.go:452 msg="system memory" total="15.5 GiB" free="13.5 GiB" free_swap="16.0 GiB"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.642Z level=INFO source=sched.go:459 msg="gpu memory" id=GPU-d0364f00-33d1-a9a6-d173-85839f5f872c library=CUDA available="23.1 GiB" free="23.6 GiB" minimum="457.0 MiB" overhead="0 B"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.642Z level=INFO source=server.go:755 msg="loading model" "model layers"=48 requested=-1
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.651Z level=INFO source=runner.go:1405 msg="starting ollama engine"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.651Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:37871"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.653Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:6 GPULayers:48[ID:GPU-d0364f00-33d1-a9a6-d173-85839f5f872c Layers:48(0..47)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.698Z level=INFO source=ggml.go:136 msg="" architecture=glm4moelite file_type=Q4_K_M name="" description="" num_tensors=797 num_key_values=37
janv. 20 23:15:05 jarvis-server ollama[1232]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
janv. 20 23:15:05 jarvis-server ollama[1232]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
janv. 20 23:15:05 jarvis-server ollama[1232]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
janv. 20 23:15:05 jarvis-server ollama[1232]: ggml_cuda_init: found 1 CUDA devices:
janv. 20 23:15:05 jarvis-server ollama[1232]:   Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes, ID: GPU-d0364f00-33d1-a9a6-d173-85839f5f872c
janv. 20 23:15:05 jarvis-server ollama[1232]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
janv. 20 23:15:05 jarvis-server ollama[1232]: time=2026-01-20T23:15:05.861Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
janv. 20 23:15:06 jarvis-server ollama[1232]: time=2026-01-20T23:15:06.300Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:6 GPULayers:22[ID:GPU-d0364f00-33d1-a9a6-d173-85839f5f872c Layers:22(25..46)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
janv. 20 23:15:06 jarvis-server ollama[1232]: time=2026-01-20T23:15:06.423Z level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:6 GPULayers:22[ID:GPU-d0364f00-33d1-a9a6-d173-85839f5f872c Layers:22(25..46)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.766Z level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:6 GPULayers:22[ID:GPU-d0364f00-33d1-a9a6-d173-85839f5f872c Layers:22(25..46)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.770Z level=INFO source=ggml.go:482 msg="offloading 22 repeating layers to GPU"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.772Z level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.772Z level=INFO source=ggml.go:494 msg="offloaded 22/48 layers to GPU"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.771Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="8.4 GiB"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.774Z level=INFO source=device.go:245 msg="model weights" device=CPU size="9.0 GiB"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.774Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="13.8 GiB"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.774Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="15.6 GiB"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.774Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="850.5 MiB"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.774Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="27.5 MiB"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.774Z level=INFO source=device.go:272 msg="total memory" size="47.7 GiB"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.775Z level=INFO source=sched.go:526 msg="loaded runners" count=1
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.776Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
janv. 20 23:15:14 jarvis-server ollama[1232]: time=2026-01-20T23:15:14.788Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
janv. 20 23:15:18 jarvis-server ollama[1232]: time=2026-01-20T23:15:18.034Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server not responding"
janv. 20 23:15:18 jarvis-server ollama[1232]: time=2026-01-20T23:15:18.563Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
janv. 20 23:15:37 jarvis-server ollama[1232]: time=2026-01-20T23:15:37.862Z level=INFO source=server.go:1385 msg="llama runner started in 32.22 seconds"
janv. 20 23:15:37 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:15:37 | 200 | 32.779561288s |       127.0.0.1 | POST     "/api/generate"
janv. 20 23:15:46 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:15:46 | 200 |     971.777µs |       127.0.0.1 | HEAD     "/"
janv. 20 23:15:46 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:15:46 | 200 |    1.193025ms |       127.0.0.1 | GET      "/api/ps"
janv. 20 23:21:17 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:21:17 | 200 |       16.52µs |       127.0.0.1 | HEAD     "/"
janv. 20 23:21:17 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:21:17 | 200 |  219.033913ms |       127.0.0.1 | POST     "/api/show"
janv. 20 23:21:17 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:21:17 | 200 |  200.665714ms |       127.0.0.1 | POST     "/api/generate"
janv. 20 23:21:22 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:21:22 | 200 |       15.85µs |       127.0.0.1 | HEAD     "/"
janv. 20 23:21:22 jarvis-server ollama[1232]: [GIN] 2026/01/20 - 23:21:22 | 200 |       21.16µs |       127.0.0.1 | GET      "/api/ps"
```

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.14.3-rc3

GiteaMirror added the bug label 2026-04-22 18:40:55 -05:00

Reference: github-starred/ollama#34803