[GH-ISSUE #8471] command-7b:7b-12-2024-fp16 chat completion results in 500 error #5452

Closed
opened 2026-04-12 16:41:05 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @MarkWard0110 on GitHub (Jan 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8471

What is the issue?

GPU: Nvidia RTX 4070 Ti SUPER, 16 GB
System RAM: 96 GB

When I issue a chat completion request for the model command-7b:7b-12-2024-fp16, Ollama returns a 500 error under the following conditions:

Context: 2048
Max Predict: 2048
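
For reference, a minimal request that exercises the same path, assuming Ollama is listening on its default local endpoint; the prompt is a placeholder, while the model name and the num_ctx/num_predict values come straight from the report:

# Placeholder prompt; num_ctx/num_predict match the failing configuration.
curl http://localhost:11434/api/chat -d '{
  "model": "command-7b:7b-12-2024-fp16",
  "messages": [{ "role": "user", "content": "Hello" }],
  "options": { "num_ctx": 2048, "num_predict": 2048 },
  "stream": false
}'

Per the report below, the same request succeeds when both num_ctx and num_predict are lowered to 64.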

The following is the Ollama log:

Jan 17 16:29:56 quorra ollama[4173706]: [GIN] 2025/01/17 - 16:29:56 | 500 |  1.998027857s |      10.0.0.123 | POST     "/api/chat"
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.213Z level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.0 GiB" before.free="90.9 GiB" before.free_swap="7.8 GiB" now.total="94.0 GiB" now.free="90.9 GiB" now.free_swap="7.8 GiB"
Jan 17 16:29:56 quorra ollama[4173706]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.565.57.01
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuInit - 0x7f7c4d321ec0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDriverGetVersion - 0x7f7c4d321ee0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetCount - 0x7f7c4d321f20
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGet - 0x7f7c4d321f00
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetAttribute - 0x7f7c4d322000
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetUuid - 0x7f7c4d321f60
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetName - 0x7f7c4d321f40
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuCtxCreate_v3 - 0x7f7c4d3221e0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuMemGetInfo_v2 - 0x7f7c4d322960
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuCtxDestroy - 0x7f7c4d36e5a0
Jan 17 16:29:56 quorra ollama[4173706]: calling cuInit
Jan 17 16:29:56 quorra ollama[4173706]: calling cuDriverGetVersion
Jan 17 16:29:56 quorra ollama[4173706]: raw version 0x2f26
Jan 17 16:29:56 quorra ollama[4173706]: CUDA driver version: 12.7
Jan 17 16:29:56 quorra ollama[4173706]: calling cuDeviceGetCount
Jan 17 16:29:56 quorra ollama[4173706]: device count 1
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.404Z level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-007c9d9a-8177-bd6f-7654-45652102b937 name="NVIDIA GeForce RTX 4070 Ti SUPER" overhead="0 B" before.total="15.6 GiB" before.free="15.4 GiB" now.total="15.6 GiB" now.free="15.4 GiB" now.used="217.2 MiB"
Jan 17 16:29:56 quorra ollama[4173706]: releasing cuda driver library
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.404Z level=DEBUG source=server.go:1079 msg="stopping llama server"
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.404Z level=DEBUG source=sched.go:380 msg="runner released" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.655Z level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.0 GiB" before.free="90.9 GiB" before.free_swap="7.8 GiB" now.total="94.0 GiB" now.free="90.8 GiB" now.free_swap="7.8 GiB"
Jan 17 16:29:56 quorra ollama[4173706]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.565.57.01
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuInit - 0x7f7c4d321ec0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDriverGetVersion - 0x7f7c4d321ee0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetCount - 0x7f7c4d321f20
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGet - 0x7f7c4d321f00
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetAttribute - 0x7f7c4d322000
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetUuid - 0x7f7c4d321f60
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetName - 0x7f7c4d321f40
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuCtxCreate_v3 - 0x7f7c4d3221e0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuMemGetInfo_v2 - 0x7f7c4d322960
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuCtxDestroy - 0x7f7c4d36e5a0
Jan 17 16:29:56 quorra ollama[4173706]: calling cuInit
Jan 17 16:29:56 quorra ollama[4173706]: calling cuDriverGetVersion
Jan 17 16:29:56 quorra ollama[4173706]: raw version 0x2f26
Jan 17 16:29:56 quorra ollama[4173706]: CUDA driver version: 12.7
Jan 17 16:29:56 quorra ollama[4173706]: calling cuDeviceGetCount
Jan 17 16:29:56 quorra ollama[4173706]: device count 1
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.747Z level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-007c9d9a-8177-bd6f-7654-45652102b937 name="NVIDIA GeForce RTX 4070 Ti SUPER" overhead="0 B" before.total="15.6 GiB" before.free="15.4 GiB" now.total="15.6 GiB" now.free="15.4 GiB" now.used="217.2 MiB"
Jan 17 16:29:56 quorra ollama[4173706]: releasing cuda driver library
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.904Z level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.0 GiB" before.free="90.8 GiB" before.free_swap="7.8 GiB" now.total="94.0 GiB" now.free="90.8 GiB" now.free_swap="7.8 GiB"
Jan 17 16:29:56 quorra ollama[4173706]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.565.57.01
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuInit - 0x7f7c4d321ec0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDriverGetVersion - 0x7f7c4d321ee0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetCount - 0x7f7c4d321f20
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGet - 0x7f7c4d321f00
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetAttribute - 0x7f7c4d322000
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetUuid - 0x7f7c4d321f60
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuDeviceGetName - 0x7f7c4d321f40
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuCtxCreate_v3 - 0x7f7c4d3221e0
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuMemGetInfo_v2 - 0x7f7c4d322960
Jan 17 16:29:56 quorra ollama[4173706]: dlsym: cuCtxDestroy - 0x7f7c4d36e5a0
Jan 17 16:29:56 quorra ollama[4173706]: calling cuInit
Jan 17 16:29:56 quorra ollama[4173706]: calling cuDriverGetVersion
Jan 17 16:29:56 quorra ollama[4173706]: raw version 0x2f26
Jan 17 16:29:56 quorra ollama[4173706]: CUDA driver version: 12.7
Jan 17 16:29:56 quorra ollama[4173706]: calling cuDeviceGetCount
Jan 17 16:29:56 quorra ollama[4173706]: device count 1
Jan 17 16:29:56 quorra ollama[4173706]: time=2025-01-17T16:29:56.999Z level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-007c9d9a-8177-bd6f-7654-45652102b937 name="NVIDIA GeForce RTX 4070 Ti SUPER" overhead="0 B" before.total="15.6 GiB" before.free="15.4 GiB" now.total="15.6 GiB" now.free="15.4 GiB" now.used="217.2 MiB"
Jan 17 16:29:56 quorra ollama[4173706]: releasing cuda driver library

When I use a context size of 64 and a max predict of 64, it works: I get a response from the model.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 16:41:05 -05:00
Author
Owner

@MarkWard0110 commented on GitHub (Jan 17, 2025):

Additional log when using context 2048:

Jan 17 16:39:27 quorra ollama[4173706]: calling cuInit
Jan 17 16:39:27 quorra ollama[4173706]: calling cuDriverGetVersion
Jan 17 16:39:27 quorra ollama[4173706]: raw version 0x2f26
Jan 17 16:39:27 quorra ollama[4173706]: CUDA driver version: 12.7
Jan 17 16:39:27 quorra ollama[4173706]: calling cuDeviceGetCount
Jan 17 16:39:27 quorra ollama[4173706]: device count 1
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-007c9d9a-8177-bd6f-7654-45652102b937 name="NVIDIA GeForce RTX 4070 Ti SUPER" overhead="0 B" before.total="15.6 GiB" before.free="15.4 GiB" now.total="15.6 GiB" now.free="15.4 GiB" now.used="217.2 MiB"
Jan 17 16:39:27 quorra ollama[4173706]: releasing cuda driver library
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=INFO source=server.go:104 msg="system memory" total="94.0 GiB" free="90.7 GiB" free_swap="7.8 GiB"
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[15.4 GiB]"
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=32 layers.split="" memory.available="[15.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.2 GiB" memory.required.partial="14.3 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[14.3 GiB]" memory.weights.total="13.3 GiB" memory.weights.repeating="11.3 GiB" memory.weights.nonrepeating="2.0 GiB" memory.graph.full="170.7 MiB" memory.graph.partial="170.7 MiB"
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98 --ctx-size 2048 --batch-size 512 --n-gpu-layers 32 --verbose --threads 8 --parallel 1 --port 44685"
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/home/mark/.vscode-server/cli/servers/Stable-fabdb6a30b49f79a7aba0f2ad9df9b399473380f/server/bin/remote-cli:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/mark/.vscode-server/data/User/globalStorage/github.copilot-chat/debugCommand LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama:/usr/local/lib/ollama/runners/cuda_v12_avx CUDA_VISIBLE_DEVICES=GPU-007c9d9a-8177-bd6f-7654-45652102b937]"
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=INFO source=sched.go:449 msg="loaded runners" count=1
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.987Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Jan 17 16:39:28 quorra ollama[4173706]: time=2025-01-17T16:39:28.003Z level=INFO source=runner.go:936 msg="starting go runner"
Jan 17 16:39:28 quorra ollama[4173706]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Jan 17 16:39:28 quorra ollama[4173706]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 17 16:39:28 quorra ollama[4173706]: ggml_cuda_init: found 1 CUDA devices:
Jan 17 16:39:28 quorra ollama[4173706]:   Device 0: NVIDIA GeForce RTX 4070 Ti SUPER, compute capability 8.9, VMM: yes
Jan 17 16:39:28 quorra ollama[4173706]: time=2025-01-17T16:39:28.031Z level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=8
Jan 17 16:39:28 quorra ollama[4173706]: time=2025-01-17T16:39:28.031Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:44685"
Jan 17 16:39:28 quorra ollama[4173706]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4070 Ti SUPER) - 15752 MiB free
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: loaded meta data with 40 key-value pairs and 258 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98 (version GGUF V3 (latest))
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   0:                       general.architecture str              = cohere2
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   1:                               general.type str              = model
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   2:                               general.name str              = C4Ai Command R7B 12 2024
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   3:                            general.version str              = 12-2024
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   4:                           general.basename str              = c4ai-command-r7b
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   5:                         general.size_label str              = 8.0B
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   6:                            general.license str              = cc-by-nc-4.0
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   7:                          general.languages arr[str,23]      = ["en", "fr", "de", "es", "it", "pt", ...
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   8:                        cohere2.block_count u32              = 32
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv   9:                     cohere2.context_length u32              = 8192
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  10:                   cohere2.embedding_length u32              = 4096
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  11:                cohere2.feed_forward_length u32              = 14336
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  12:               cohere2.attention.head_count u32              = 32
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  13:            cohere2.attention.head_count_kv u32              = 8
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  14:                     cohere2.rope.freq_base f32              = 50000.000000
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  15:       cohere2.attention.layer_norm_epsilon f32              = 0.000010
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  16:               cohere2.attention.key_length u32              = 128
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  17:             cohere2.attention.value_length u32              = 128
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  18:                          general.file_type u32              = 1
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  19:                        cohere2.logit_scale f32              = 0.250000
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  20:           cohere2.attention.sliding_window u32              = 4096
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  21:                         cohere2.vocab_size u32              = 256000
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  22:               cohere2.rope.dimension_count u32              = 128
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  23:                  cohere2.rope.scaling.type str              = none
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  24:                       tokenizer.ggml.model str              = gpt2
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  25:                         tokenizer.ggml.pre str              = command-r
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  26:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  28:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  29:                tokenizer.ggml.bos_token_id u32              = 5
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  30:                tokenizer.ggml.eos_token_id u32              = 255001
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  31:            tokenizer.ggml.unknown_token_id u32              = 1
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  32:            tokenizer.ggml.padding_token_id u32              = 0
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  33:               tokenizer.ggml.add_bos_token bool             = true
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  35:           tokenizer.chat_template.tool_use str              = {%- macro document_turn(documents) -%...
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  36:                tokenizer.chat_template.rag str              = {% set tools = [] %}\n{%- macro docume...
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  37:                   tokenizer.chat_templates arr[str,2]       = ["rag", "tool_use"]
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  38:                    tokenizer.chat_template str              = {% if documents %}\n{% set tools = [] ...
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - kv  39:               general.quantization_version u32              = 2
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - type  f32:   33 tensors
Jan 17 16:39:28 quorra ollama[4173706]: llama_model_loader: - type  f16:  225 tensors
Jan 17 16:39:28 quorra ollama[4173706]: time=2025-01-17T16:39:28.237Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255032 '<|END_OF_MIDDLE_FIM_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255030 '<|BEGINNING_OF_MIDDLE_FIM_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255029 '<|BEGINNING_OF_PREFIX_FIM_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255026 '<|END_TOOL_RESULT|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255021 '<|START_RESPONSE|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255020 '<|END_THINKING|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255019 '<|START_THINKING|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255016 '<|USER_7_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255008 '<|SYSTEM_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255007 '<|CHATBOT_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255003 '<|NO_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255001 '<|END_OF_TURN_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255000 '<|START_OF_TURN_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255009 '<|USER_0_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255018 '<|USER_9_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255006 '<|USER_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255013 '<|USER_4_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255027 '<|EXTRA_8_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255005 '<|BAD_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token:      7 '<EOP_TOKEN>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token:      2 '<CLS>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255002 '<|YES_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token:      3 '<SEP>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255022 '<|END_RESPONSE|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255014 '<|USER_5_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token:      6 '<EOS_TOKEN>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255004 '<|GOOD_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token:      1 '<UNK>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token:      4 '<MASK_TOKEN>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255017 '<|USER_8_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255024 '<|END_ACTION|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255023 '<|START_ACTION|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255012 '<|USER_3_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255010 '<|USER_1_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255028 '<|NEW_FILE|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255015 '<|USER_6_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255011 '<|USER_2_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token:      5 '<BOS_TOKEN>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255025 '<|START_TOOL_RESULT|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: control token: 255031 '<|BEGINNING_OF_SUFFIX_FIM_TOKEN|>' is not marked as EOG
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: special tokens cache size = 41
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_vocab: token to piece cache size = 1.8428 MB
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: format           = GGUF V3 (latest)
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: arch             = cohere2
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: vocab type       = BPE
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_vocab          = 256000
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_merges         = 253333
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: vocab_only       = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_ctx_train      = 8192
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd           = 4096
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_layer          = 32
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_head           = 32
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_head_kv        = 8
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_rot            = 128
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_swa            = 4096
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_head_k    = 128
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_head_v    = 128
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_gqa            = 4
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_k_gqa     = 1024
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_v_gqa     = 1024
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_norm_eps       = 1.0e-05
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_logit_scale    = 2.5e-01
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_ff             = 14336
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_expert         = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_expert_used    = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: causal attn      = 1
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: pooling type     = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: rope type        = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: rope scaling     = none
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: freq_base_train  = 50000.0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: freq_scale_train = 1
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_ctx_orig_yarn  = 8192
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: rope_finetuned   = unknown
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_d_conv       = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_d_inner      = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_d_state      = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_dt_rank      = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model type       = 8B
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model ftype      = F16
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model params     = 8.03 B
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model size       = 14.95 GiB (16.00 BPW)
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: general.name     = C4Ai Command R7B 12 2024
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: BOS token        = 5 '<BOS_TOKEN>'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: UNK token        = 1 '<UNK>'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: PAD token        = 0 '<PAD>'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: LF token         = 136 'Ä'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: FIM PAD token    = 0 '<PAD>'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: EOG token        = 0 '<PAD>'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: EOG token        = 255001 '<|END_OF_TURN_TOKEN|>'
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: max token length = 1024
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: tensor 'token_embd.weight' (f16) (and 2 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: offloading 32 repeating layers to GPU
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: offloaded 32/33 layers to GPU
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors:   CPU_Mapped model buffer size = 15312.52 MiB
Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors:        CUDA0 model buffer size = 13312.50 MiB
Jan 17 16:39:28 quorra ollama[4173706]: time=2025-01-17T16:39:28.990Z level=DEBUG source=server.go:600 msg="model load progress 0.41"
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.242Z level=DEBUG source=server.go:600 msg="model load progress 0.70"
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.493Z level=DEBUG source=server.go:600 msg="model load progress 1.00"
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_seq_max     = 1
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ctx         = 2048
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ctx_per_seq = 2048
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_batch       = 512
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ubatch      = 512
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: flash_attn    = 0
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: freq_base     = 50000.0
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: freq_scale    = 1
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init:      CUDA0 KV buffer size =   256.00 MiB
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model:        CPU  output buffer size =     0.99 MiB
Jan 17 16:39:29 quorra ollama[4173706]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2508.00 MiB on device 0: cudaMalloc failed: out of memory
Jan 17 16:39:29 quorra ollama[4173706]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2629828608
Jan 17 16:39:29 quorra ollama[4173706]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2508.00 MiB on device 0: cudaMalloc failed: out of memory
Jan 17 16:39:29 quorra ollama[4173706]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2629828608
Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: failed to allocate compute buffers
Jan 17 16:39:29 quorra ollama[4173706]: panic: unable to create llama context
Jan 17 16:39:29 quorra ollama[4173706]: goroutine 18 [running]:
Jan 17 16:39:29 quorra ollama[4173706]: github.com/ollama/ollama/llama/runner.(*Server).loadModel(0xc00012c000, {0x20, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000112030, 0x0}, ...)
Jan 17 16:39:29 quorra ollama[4173706]:         github.com/ollama/ollama/llama/runner/runner.go:858 +0x39c
Jan 17 16:39:29 quorra ollama[4173706]: created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
Jan 17 16:39:29 quorra ollama[4173706]:         github.com/ollama/ollama/llama/runner/runner.go:970 +0xd0d
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.684Z level=DEBUG source=server.go:416 msg="llama runner terminated" error="exit status 2"
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory\nggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2629828608\nllama_new_context_with_model: failed to allocate compute buffers"
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=sched.go:458 msg="triggering expiration for failed load" model=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98
Jan 17 16:39:29 quorra ollama[4173706]: [GIN] 2025/01/17 - 16:39:29 | 500 |  2.507403677s |      10.0.0.123 | POST     "/api/chat"
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.0 GiB" before.free="90.7 GiB" before.free_swap="7.8 GiB" now.total="94.0 GiB" now.free="90.7 GiB" now.free_swap="7.8 GiB"
Jan 17 16:39:29 quorra ollama[4173706]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.565.57.01
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuInit - 0x7f7c4d321ec0
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDriverGetVersion - 0x7f7c4d321ee0
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetCount - 0x7f7c4d321f20
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGet - 0x7f7c4d321f00
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetAttribute - 0x7f7c4d322000
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetUuid - 0x7f7c4d321f60
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetName - 0x7f7c4d321f40
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuCtxCreate_v3 - 0x7f7c4d3221e0
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuMemGetInfo_v2 - 0x7f7c4d322960
Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuCtxDestroy - 0x7f7c4d36e5a0
Jan 17 16:39:29 quorra ollama[4173706]: calling cuInit
Jan 17 16:39:29 quorra ollama[4173706]: calling cuDriverGetVersion
Jan 17 16:39:29 quorra ollama[4173706]: raw version 0x2f26
Jan 17 16:39:29 quorra ollama[4173706]: CUDA driver version: 12.7
Jan 17 16:39:29 quorra ollama[4173706]: calling cuDeviceGetCount
Jan 17 16:39:29 quorra ollama[4173706]: device count 1
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.922Z level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-007c9d9a-8177-bd6f-7654-45652102b937 name="NVIDIA GeForce RTX 4070 Ti SUPER" overhead="0 B" before.total="15.6 GiB" before.free="15.4 GiB" now.total="15.6 GiB" now.free="15.4 GiB" now.used="217.2 MiB"
Jan 17 16:39:29 quorra ollama[4173706]: releasing cuda driver library
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.923Z level=DEBUG source=server.go:1079 msg="stopping llama server"
Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.923Z level=DEBUG source=sched.go:380 msg="runner released" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98
Jan 17 16:39:30 quorra ollama[4173706]: time=2025-01-17T16:39:30.174Z level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.0 GiB" before.free="90.7 GiB" before.free_swap="7.8 GiB" now.total="94.0 GiB" now.free="90.7 GiB" now.free_swap="7.8 GiB"
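
Rough arithmetic from the figures the runner prints above (my reading, not part of the original report): the CUDA0 model buffer takes 13312.50 MiB and the KV cache another 256.00 MiB, so the 2508.00 MiB compute buffer pushes the total to roughly 16076 MiB, beyond the 15752 MiB the device reports free. Notably, the scheduler's estimate memory.graph.partial="170.7 MiB" is far smaller than the 2508 MiB actually requested, which would explain why the allocation fails even though 32 of 33 layers fit.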
16:39:28 quorra ollama[4173706]: llm_load_print_meta: vocab_only = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_ctx_train = 8192 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd = 4096 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_layer = 32 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_head = 32 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_head_kv = 8 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_rot = 128 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_swa = 4096 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_head_k = 128 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_head_v = 128 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_gqa = 4 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_k_gqa = 1024 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_embd_v_gqa = 1024 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_norm_eps = 1.0e-05 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_norm_rms_eps = 0.0e+00 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: f_logit_scale = 2.5e-01 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_ff = 14336 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_expert = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_expert_used = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: causal attn = 1 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: pooling type = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: rope type = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: rope scaling = none Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: freq_base_train = 50000.0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: freq_scale_train = 1 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: n_ctx_orig_yarn = 8192 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: rope_finetuned = unknown Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_d_conv = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_d_inner = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_d_state = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_dt_rank = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: ssm_dt_b_c_rms = 0 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model type = 8B Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model ftype = F16 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model params = 8.03 B Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: model size = 14.95 GiB (16.00 BPW) Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: general.name = C4Ai Command R7B 12 2024 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: BOS token = 5 '<BOS_TOKEN>' Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: EOS token = 255001 '<|END_OF_TURN_TOKEN|>' Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: UNK token = 1 '<UNK>' Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: PAD token = 0 '<PAD>' Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: LF token = 136 'Ä' Jan 
17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: FIM PAD token = 0 '<PAD>' Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: EOG token = 0 '<PAD>' Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: EOG token = 255001 '<|END_OF_TURN_TOKEN|>' Jan 17 16:39:28 quorra ollama[4173706]: llm_load_print_meta: max token length = 1024 Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: tensor 'token_embd.weight' (f16) (and 2 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: offloading 32 repeating layers to GPU Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: offloaded 32/33 layers to GPU Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: CPU_Mapped model buffer size = 15312.52 MiB Jan 17 16:39:28 quorra ollama[4173706]: llm_load_tensors: CUDA0 model buffer size = 13312.50 MiB Jan 17 16:39:28 quorra ollama[4173706]: time=2025-01-17T16:39:28.990Z level=DEBUG source=server.go:600 msg="model load progress 0.41" Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.242Z level=DEBUG source=server.go:600 msg="model load progress 0.70" Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.493Z level=DEBUG source=server.go:600 msg="model load progress 1.00" Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_seq_max = 1 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ctx = 2048 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ctx_per_seq = 2048 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_batch = 512 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ubatch = 512 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: flash_attn = 0 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: freq_base = 50000.0 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: freq_scale = 1 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 
16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 Jan 17 16:39:29 quorra ollama[4173706]: llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: CPU output buffer size = 0.99 MiB Jan 17 16:39:29 quorra ollama[4173706]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2508.00 MiB on device 0: cudaMalloc failed: out of memory Jan 17 16:39:29 quorra ollama[4173706]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2629828608 Jan 17 16:39:29 quorra ollama[4173706]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2508.00 MiB on device 0: cudaMalloc failed: out of memory Jan 17 16:39:29 quorra ollama[4173706]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2629828608 Jan 17 16:39:29 quorra ollama[4173706]: llama_new_context_with_model: failed to allocate compute buffers Jan 17 16:39:29 quorra ollama[4173706]: panic: unable to create llama context Jan 17 16:39:29 quorra ollama[4173706]: goroutine 18 [running]: Jan 17 16:39:29 quorra ollama[4173706]: 
github.com/ollama/ollama/llama/runner.(*Server).loadModel(0xc00012c000, {0x20, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000112030, 0x0}, ...) Jan 17 16:39:29 quorra ollama[4173706]: github.com/ollama/ollama/llama/runner/runner.go:858 +0x39c Jan 17 16:39:29 quorra ollama[4173706]: created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1 Jan 17 16:39:29 quorra ollama[4173706]: github.com/ollama/ollama/llama/runner/runner.go:970 +0xd0d Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.684Z level=DEBUG source=server.go:416 msg="llama runner terminated" error="exit status 2" Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory\nggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2629828608\nllama_new_context_with_model: failed to allocate compute buffers" Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=sched.go:458 msg="triggering expiration for failed load" model=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98 Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98 Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98 Jan 17 16:39:29 quorra ollama[4173706]: [GIN] 2025/01/17 - 16:39:29 | 500 | 2.507403677s | 10.0.0.123 | POST "/api/chat" Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.744Z level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.0 GiB" before.free="90.7 GiB" before.free_swap="7.8 GiB" now.total="94.0 GiB" now.free="90.7 GiB" now.free_swap="7.8 GiB" Jan 17 16:39:29 quorra ollama[4173706]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.565.57.01 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuInit - 0x7f7c4d321ec0 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDriverGetVersion - 0x7f7c4d321ee0 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetCount - 0x7f7c4d321f20 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGet - 0x7f7c4d321f00 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetAttribute - 0x7f7c4d322000 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetUuid - 0x7f7c4d321f60 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuDeviceGetName - 0x7f7c4d321f40 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuCtxCreate_v3 - 0x7f7c4d3221e0 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuMemGetInfo_v2 - 0x7f7c4d322960 Jan 17 16:39:29 quorra ollama[4173706]: dlsym: cuCtxDestroy - 0x7f7c4d36e5a0 Jan 17 16:39:29 quorra ollama[4173706]: calling cuInit Jan 17 16:39:29 quorra ollama[4173706]: calling cuDriverGetVersion Jan 17 16:39:29 quorra ollama[4173706]: raw version 0x2f26 Jan 17 16:39:29 quorra ollama[4173706]: CUDA driver version: 12.7 Jan 17 16:39:29 quorra ollama[4173706]: calling cuDeviceGetCount Jan 17 16:39:29 quorra ollama[4173706]: device count 1 Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.922Z level=DEBUG source=gpu.go:456 msg="updating cuda memory data" 
gpu=GPU-007c9d9a-8177-bd6f-7654-45652102b937 name="NVIDIA GeForce RTX 4070 Ti SUPER" overhead="0 B" before.total="15.6 GiB" before.free="15.4 GiB" now.total="15.6 GiB" now.free="15.4 GiB" now.used="217.2 MiB" Jan 17 16:39:29 quorra ollama[4173706]: releasing cuda driver library Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.923Z level=DEBUG source=server.go:1079 msg="stopping llama server" Jan 17 16:39:29 quorra ollama[4173706]: time=2025-01-17T16:39:29.923Z level=DEBUG source=sched.go:380 msg="runner released" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-d565c4f8340747fb2ae26613a785bd1168d1311ad4f76ce4845cad170c7f3f98 Jan 17 16:39:30 quorra ollama[4173706]: time=2025-01-17T16:39:30.174Z level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.0 GiB" before.free="90.7 GiB" before.free_swap="7.8 GiB" now.total="94.0 GiB" now.free="90.7 GiB" now.free_swap="7.8 GiB" ```

@rick-github commented on GitHub (Jan 17, 2025):

```
Jan 17 16:39:27 quorra ollama[4173706]: time=2025-01-17T16:39:27.986Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=32 layers.split="" memory.available="[15.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.2 GiB" memory.required.partial="14.3 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[14.3 GiB]" memory.weights.total="13.3 GiB" memory.weights.repeating="11.3 GiB" memory.weights.nonrepeating="2.0 GiB" memory.graph.full="170.7 MiB" memory.graph.partial="170.7 MiB"
Jan 17 16:39:29 quorra ollama[4173706]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2508.00 MiB on device 0: cudaMalloc failed: out of memory
Jan 17 16:39:29 quorra ollama[4173706]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2629828608
```

ollama calculated that it would need 14.3G of the 15.4G available to load the model, but llama.cpp OOMed during the actual load. It would seem that ollama under-estimated how much space the model would need. command-r7b is newly added, so it's possible that ollama is not getting the sums quite right. There are mitigations (example invocations are sketched after the list):

  1. Set [`OLLAMA_GPU_OVERHEAD`](https://github.com/ollama/ollama/blob/5f8051180e3b9aeafc153f6b5056e7358a939c88/envconfig/config.go#L237) to give llama.cpp a buffer to grow into (e.g., `OLLAMA_GPU_OVERHEAD=536870912` to reserve 512M).
  2. Enable flash attention by setting [`OLLAMA_FLASH_ATTENTION=1`](https://github.com/ollama/ollama/blob/5f8051180e3b9aeafc153f6b5056e7358a939c88/envconfig/config.go#L236) in the server environment. Flash attention uses memory more efficiently and may reduce memory pressure.
  3. Reduce the number of layers that ollama thinks it can offload to the GPU, see [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650). Ollama is currently offloading 32 layers; try setting `num_gpu` to 25.
  4. Set `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1`. This allows the GPU to spill into CPU memory if VRAM is exhausted. It is only useful for small amounts of memory because there is a [performance penalty](https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900); however, when the goal is just to avoid OOMs, the amount spilled will be small and the impact minimal.
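
A minimal sketch of applying mitigations 1 and 2, assuming a standard Linux install where Ollama runs as the `ollama.service` systemd unit (which the journald log above suggests):

```
# Create a systemd override for the Ollama service:
sudo systemctl edit ollama.service

# In the override file, add:
#   [Service]
#   Environment="OLLAMA_GPU_OVERHEAD=536870912"
#   Environment="OLLAMA_FLASH_ATTENTION=1"

# Reload and restart so the new environment takes effect:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```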
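
For mitigation 3, `num_gpu` can also be passed per request through the `options` field of the chat API; a sketch against the default endpoint:

```
curl http://localhost:11434/api/chat -d '{
  "model": "command-7b:7b-12-2024-fp16",
  "messages": [{"role": "user", "content": "Hello"}],
  "options": {"num_gpu": 25}
}'
```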

@MarkWard0110 commented on GitHub (Jan 21, 2025):

Using `OLLAMA_FLASH_ATTENTION=1` did not help. It still crashed when using a context size of 2048.
I tried adding `OLLAMA_GPU_OVERHEAD=536870912` and 1073741824; that didn't work either.

I set `CUDA_VISIBLE_DEVICES=-1` and the model worked, running 100% on CPU.

This may be a model that needs more careful matching of context size to the resources available for the selected model tag.
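
(For anyone reproducing this, a quick way to verify where the model actually landed after such a change, assuming a default install:)

```
# After restarting the server with CUDA_VISIBLE_DEVICES=-1, load the model,
# then check placement; the PROCESSOR column should read "100% CPU":
ollama ps
```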


@rick-github commented on GitHub (Jan 21, 2025):

Works for me if I set the overhead to 1.5G: `OLLAMA_GPU_OVERHEAD=1610612736`.
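
(The value is just the desired reserve expressed in bytes; a quick sanity check in the shell:)

```
echo $(( 512 * 1024 * 1024 ))    # 536870912  = 0.5 GiB
echo $(( 1536 * 1024 * 1024 ))   # 1610612736 = 1.5 GiB
```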

Reference: github-starred/ollama#5452