[GH-ISSUE #9415] Extreme drop in inference speed. #68195

Closed
opened 2026-05-04 12:48:45 -05:00 by GiteaMirror · 10 comments

Originally created by @MMaturax on GitHub (Feb 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9415

What is the issue?

With an RTX 5090, in my tests using Ollama 0.5.13-rc1 with Gemma 2 9B Q4, inference is roughly 4× slower than on the previous version (about a 77% drop in throughput): the evaluation rate fell from 149 tokens/s to 35 tokens/s.
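As a quick sanity check on the magnitude of the regression, the two measured evaluation rates from the runs below work out as follows (plain arithmetic on the reported numbers, nothing Ollama-specific):

```python
# Quantify the regression between the two measured eval rates
# reported in this issue (149 tok/s on 0.5.12, 35 tok/s on 0.5.13-rc1).
old_rate = 149.0  # tokens/s on 0.5.12
new_rate = 35.0   # tokens/s on 0.5.13-rc1

slowdown_factor = old_rate / new_rate                 # how many times slower
percent_drop = (old_rate - new_rate) / old_rate * 100 # throughput lost, in %

print(f"{slowdown_factor:.1f}x slower, {percent_drop:.0f}% drop")
# -> 4.3x slower, 77% drop
```

Note that "314% lower" in the original report corresponds to comparing in the other direction (the old rate is ~325% *higher* than the new one); expressing it as a ~77% drop or a ~4.3× slowdown is unambiguous.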

0.5.13-rc1

[screenshot: benchmark output]

0.5.12

[screenshot: benchmark output]

OS: Ubuntu 24.04.2 LTS
CPU: AMD Ryzen 9 7950X3D
GPU: RTX 5090
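The DEBUG-level entries in the log below suggest the service was run with debug logging enabled. For anyone trying to reproduce a comparable capture on a systemd install, a sketch along these lines should work (assumes the standard Linux install with an `ollama.service` unit; `OLLAMA_DEBUG=1` is Ollama's documented debug-logging switch):

```shell
# Enable debug logging for the ollama systemd service, then follow its journal.
# In the editor opened by `systemctl edit`, add under [Service]:
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl edit ollama.service
sudo systemctl restart ollama.service
sudo journalctl -u ollama.service -f
```

Running a prompt in another terminal (e.g. `ollama run <model> --verbose`) will then produce both the eval-rate figures and the scheduler/GPU log lines shown here.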

Relevant log output

zemin@ai-server:~$ sudo journalctl -u ollama.service -f


Feb 28 12:43:20 ai-server ollama[854770]: calling cuDriverGetVersion
Feb 28 12:43:20 ai-server ollama[854770]: raw version 0x2f30
Feb 28 12:43:20 ai-server ollama[854770]: CUDA driver version: 12.8
Feb 28 12:43:20 ai-server ollama[854770]: calling cuDeviceGetCount
Feb 28 12:43:20 ai-server ollama[854770]: device count 1
Feb 28 12:43:20 ai-server ollama[854770]: time=2025-02-28T12:43:20.292Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="22.1 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 12:43:20 ai-server ollama[854770]: releasing cuda driver library
Feb 28 12:43:20 ai-server ollama[854770]: time=2025-02-28T12:43:20.292Z level=DEBUG source=sched.go:660 msg="gpu VRAM free memory converged after 0.52 seconds" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 12:43:20 ai-server ollama[854770]: time=2025-02-28T12:43:20.292Z level=DEBUG source=sched.go:385 msg="sending an unloaded event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 12:43:20 ai-server ollama[854770]: time=2025-02-28T12:43:20.292Z level=DEBUG source=sched.go:309 msg="ignoring unload event with no pending requests"
Feb 28 12:44:26 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:26 | 200 |       26.06µs |       127.0.0.1 | HEAD     "/"
Feb 28 12:44:26 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:26 | 200 |      66.629µs |       127.0.0.1 | GET      "/api/ps"
Feb 28 12:44:31 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:31 | 200 |      15.989µs |       127.0.0.1 | HEAD     "/"
Feb 28 12:44:31 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:31 | 200 |   20.713795ms |       127.0.0.1 | POST     "/api/show"
Feb 28 12:44:31 ai-server ollama[854770]: time=2025-02-28T12:44:31.748Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.1 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 12:44:31 ai-server ollama[854770]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuInit - 0x777263d0de00
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDriverGetVersion - 0x777263d0de20
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetCount - 0x777263d0de60
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGet - 0x777263d0de40
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetAttribute - 0x777263d0df40
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetUuid - 0x777263d0dea0
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetName - 0x777263d0de80
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuCtxCreate_v3 - 0x777263d0e120
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuMemGetInfo_v2 - 0x777263d0e8a0
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuCtxDestroy - 0x777263d6c9f0
Feb 28 12:44:31 ai-server ollama[854770]: calling cuInit
Feb 28 12:44:31 ai-server ollama[854770]: calling cuDriverGetVersion
Feb 28 12:44:31 ai-server ollama[854770]: raw version 0x2f30
Feb 28 12:44:31 ai-server ollama[854770]: CUDA driver version: 12.8
Feb 28 12:44:31 ai-server ollama[854770]: calling cuDeviceGetCount
Feb 28 12:44:31 ai-server ollama[854770]: device count 1
Feb 28 12:44:31 ai-server ollama[854770]: time=2025-02-28T12:44:31.876Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 12:44:31 ai-server ollama[854770]: releasing cuda driver library
Feb 28 12:44:31 ai-server ollama[854770]: time=2025-02-28T12:44:31.916Z level=DEBUG source=sched.go:225 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 12:44:31 ai-server ollama[854770]: time=2025-02-28T12:44:31.916Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 12:44:31 ai-server ollama[854770]: time=2025-02-28T12:44:31.916Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.0 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 12:44:31 ai-server ollama[854770]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuInit - 0x777263d0de00
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDriverGetVersion - 0x777263d0de20
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetCount - 0x777263d0de60
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGet - 0x777263d0de40
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetAttribute - 0x777263d0df40
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetUuid - 0x777263d0dea0
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuDeviceGetName - 0x777263d0de80
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuCtxCreate_v3 - 0x777263d0e120
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuMemGetInfo_v2 - 0x777263d0e8a0
Feb 28 12:44:31 ai-server ollama[854770]: dlsym: cuCtxDestroy - 0x777263d6c9f0
Feb 28 12:44:31 ai-server ollama[854770]: calling cuInit
Feb 28 12:44:31 ai-server ollama[854770]: calling cuDriverGetVersion
Feb 28 12:44:31 ai-server ollama[854770]: raw version 0x2f30
Feb 28 12:44:31 ai-server ollama[854770]: CUDA driver version: 12.8
Feb 28 12:44:31 ai-server ollama[854770]: calling cuDeviceGetCount
Feb 28 12:44:31 ai-server ollama[854770]: device count 1
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.040Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 12:44:32 ai-server ollama[854770]: releasing cuda driver library
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.040Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 parallel=4 available=33139130368 required="8.8 GiB"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.040Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.0 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 12:44:32 ai-server ollama[854770]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuInit - 0x777263d0de00
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDriverGetVersion - 0x777263d0de20
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetCount - 0x777263d0de60
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGet - 0x777263d0de40
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetAttribute - 0x777263d0df40
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetUuid - 0x777263d0dea0
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetName - 0x777263d0de80
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuCtxCreate_v3 - 0x777263d0e120
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuMemGetInfo_v2 - 0x777263d0e8a0
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuCtxDestroy - 0x777263d6c9f0
Feb 28 12:44:32 ai-server ollama[854770]: calling cuInit
Feb 28 12:44:32 ai-server ollama[854770]: calling cuDriverGetVersion
Feb 28 12:44:32 ai-server ollama[854770]: raw version 0x2f30
Feb 28 12:44:32 ai-server ollama[854770]: CUDA driver version: 12.8
Feb 28 12:44:32 ai-server ollama[854770]: calling cuDeviceGetCount
Feb 28 12:44:32 ai-server ollama[854770]: device count 1
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.168Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 12:44:32 ai-server ollama[854770]: releasing cuda driver library
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.168Z level=INFO source=server.go:97 msg="system memory" total="62.4 GiB" free="60.0 GiB" free_swap="8.0 GiB"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.168Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.168Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.0 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 12:44:32 ai-server ollama[854770]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuInit - 0x777263d0de00
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDriverGetVersion - 0x777263d0de20
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetCount - 0x777263d0de60
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGet - 0x777263d0de40
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetAttribute - 0x777263d0df40
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetUuid - 0x777263d0dea0
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuDeviceGetName - 0x777263d0de80
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuCtxCreate_v3 - 0x777263d0e120
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuMemGetInfo_v2 - 0x777263d0e8a0
Feb 28 12:44:32 ai-server ollama[854770]: dlsym: cuCtxDestroy - 0x777263d6c9f0
Feb 28 12:44:32 ai-server ollama[854770]: calling cuInit
Feb 28 12:44:32 ai-server ollama[854770]: calling cuDriverGetVersion
Feb 28 12:44:32 ai-server ollama[854770]: raw version 0x2f30
Feb 28 12:44:32 ai-server ollama[854770]: CUDA driver version: 12.8
Feb 28 12:44:32 ai-server ollama[854770]: calling cuDeviceGetCount
Feb 28 12:44:32 ai-server ollama[854770]: device count 1
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 12:44:32 ai-server ollama[854770]: releasing cuda driver library
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=43 layers.offload=43 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.8 GiB" memory.required.partial="8.8 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[8.8 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=INFO source=server.go:182 msg="enabling flash attention"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=DEBUG source=server.go:302 msg="adding gpu library" path=/usr/local/lib/ollama/cuda_v12
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[/usr/local/lib/ollama/cuda_v12]
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --n-gpu-layers 43 --verbose --threads 16 --flash-attn --kv-cache-type f16 --parallel 4 --port 33571"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin CUDA_VISIBLE_DEVICES=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 LD_LIBRARY_PATH=/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama]"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=INFO source=sched.go:450 msg="loaded runners" count=1
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.287Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.295Z level=INFO source=runner.go:931 msg="starting go runner"
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.295Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Feb 28 12:44:32 ai-server ollama[854770]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 28 12:44:32 ai-server ollama[854770]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 28 12:44:32 ai-server ollama[854770]: ggml_cuda_init: found 1 CUDA devices:
Feb 28 12:44:32 ai-server ollama[854770]:   Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Feb 28 12:44:32 ai-server ollama[854770]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.338Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Feb 28 12:44:32 ai-server ollama[854770]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-skylakex.so score: 183
Feb 28 12:44:32 ai-server ollama[854770]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-icelake.so score: 1463
Feb 28 12:44:32 ai-server ollama[854770]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-alderlake.so score: 0
Feb 28 12:44:32 ai-server ollama[854770]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-haswell.so score: 55
Feb 28 12:44:32 ai-server ollama[854770]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-sandybridge.so score: 20
Feb 28 12:44:32 ai-server ollama[854770]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.339Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1000 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=16
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.339Z level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:33571"
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 31603 MiB free
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   0:                       general.architecture str              = gemma2
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  11:                          general.file_type u32              = 2
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv  28:               general.quantization_version u32              = 2
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - type  f32:  169 tensors
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - type q4_0:  294 tensors
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - type q6_K:    1 tensors
Feb 28 12:44:32 ai-server ollama[854770]: print_info: file format = GGUF V3 (latest)
Feb 28 12:44:32 ai-server ollama[854770]: print_info: file type   = Q4_0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: file size   = 5.06 GiB (4.71 BPW)
Feb 28 12:44:32 ai-server ollama[854770]: init_tokenizer: initializing tokenizer for type 1
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.539Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     45 '<unused38>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     74 '<unused67>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     55 '<unused48>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     99 '<unused92>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:    102 '<unused95>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     44 '<unused37>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     26 '<unused19>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     42 '<unused35>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     92 '<unused85>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     90 '<unused83>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:    106 '<start_of_turn>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     88 '<unused81>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      5 '<2mass>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:    104 '<unused97>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     68 '<unused61>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     94 '<unused87>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     59 '<unused52>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      2 '<bos>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     25 '<unused18>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     93 '<unused86>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     95 '<unused88>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     76 '<unused69>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     97 '<unused90>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     56 '<unused49>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     81 '<unused74>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     13 '<unused6>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     51 '<unused44>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     47 '<unused40>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      8 '<unused1>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:    103 '<unused96>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     75 '<unused68>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     43 '<unused36>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     79 '<unused72>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     39 '<unused32>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     49 '<unused42>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     41 '<unused34>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     34 '<unused27>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      6 '[@BOS@]' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     40 '<unused33>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     33 '<unused26>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     35 '<unused28>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     32 '<unused25>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     28 '<unused21>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     19 '<unused12>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     80 '<unused73>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     86 '<unused79>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     67 '<unused60>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      9 '<unused2>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     52 '<unused45>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     16 '<unused9>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     98 '<unused91>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     71 '<unused64>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     36 '<unused29>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      0 '<pad>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     11 '<unused4>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     70 '<unused63>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     77 '<unused70>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     64 '<unused57>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     50 '<unused43>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     20 '<unused13>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     73 '<unused66>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     23 '<unused16>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     38 '<unused31>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     21 '<unused14>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     15 '<unused8>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     37 '<unused30>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     14 '<unused7>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     30 '<unused23>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     62 '<unused55>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      3 '<unk>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     18 '<unused11>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     22 '<unused15>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     66 '<unused59>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     65 '<unused58>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     10 '<unused3>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:    105 '<unused98>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     87 '<unused80>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:    100 '<unused93>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     63 '<unused56>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     31 '<unused24>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     58 '<unused51>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     84 '<unused77>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     61 '<unused54>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      1 '<eos>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     60 '<unused53>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     91 '<unused84>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     83 '<unused76>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     85 '<unused78>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     27 '<unused20>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     96 '<unused89>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     72 '<unused65>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     53 '<unused46>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     82 '<unused75>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      7 '<unused0>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:      4 '<mask>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:    101 '<unused94>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     78 '<unused71>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     89 '<unused82>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     69 '<unused62>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     54 '<unused47>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     57 '<unused50>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     12 '<unused5>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     48 '<unused41>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     17 '<unused10>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     24 '<unused17>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     46 '<unused39>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token:     29 '<unused22>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 28 12:44:32 ai-server ollama[854770]: load: special tokens cache size = 108
Feb 28 12:44:32 ai-server ollama[854770]: load: token to piece cache size = 1.6014 MB
Feb 28 12:44:32 ai-server ollama[854770]: print_info: arch             = gemma2
Feb 28 12:44:32 ai-server ollama[854770]: print_info: vocab_only       = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_ctx_train      = 8192
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd           = 3584
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_layer          = 42
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_head           = 16
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_head_kv        = 8
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_rot            = 256
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_swa            = 4096
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_head_k    = 256
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_head_v    = 256
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_gqa            = 2
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_k_gqa     = 2048
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_v_gqa     = 2048
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_norm_eps       = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_norm_rms_eps   = 1.0e-06
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_clamp_kqv      = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_max_alibi_bias = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_logit_scale    = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_ff             = 14336
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_expert         = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_expert_used    = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: causal attn      = 1
Feb 28 12:44:32 ai-server ollama[854770]: print_info: pooling type     = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: rope type        = 2
Feb 28 12:44:32 ai-server ollama[854770]: print_info: rope scaling     = linear
Feb 28 12:44:32 ai-server ollama[854770]: print_info: freq_base_train  = 10000.0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: freq_scale_train = 1
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_ctx_orig_yarn  = 8192
Feb 28 12:44:32 ai-server ollama[854770]: print_info: rope_finetuned   = unknown
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_d_conv       = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_d_inner      = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_d_state      = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_dt_rank      = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_dt_b_c_rms   = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: model type       = 9B
Feb 28 12:44:32 ai-server ollama[854770]: print_info: model params     = 9.24 B
Feb 28 12:44:32 ai-server ollama[854770]: print_info: general.name     = gemma-2-9b-it
Feb 28 12:44:32 ai-server ollama[854770]: print_info: vocab type       = SPM
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_vocab          = 256000
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_merges         = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: BOS token        = 2 '<bos>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOS token        = 1 '<eos>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOT token        = 107 '<end_of_turn>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: UNK token        = 3 '<unk>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: PAD token        = 0 '<pad>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: LF token         = 227 '<0x0A>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOG token        = 1 '<eos>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOG token        = 107 '<end_of_turn>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: max token length = 93
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   0 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   1 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   2 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   3 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   4 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   5 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   6 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   7 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   8 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer   9 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  10 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  11 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  12 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  13 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  14 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  15 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  16 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  17 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  18 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  19 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  20 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  21 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  22 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  23 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  24 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  25 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  26 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  27 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  28 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  29 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  30 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  31 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  32 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  33 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  34 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  35 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  36 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  37 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  38 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  39 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  40 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  41 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer  42 assigned to device CUDA0
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: offloading 42 repeating layers to GPU
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: offloading output layer to GPU
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: offloaded 43/43 layers to GPU
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors:   CPU_Mapped model buffer size =   717.77 MiB
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors:        CUDA0 model buffer size =  5185.21 MiB
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.041Z level=DEBUG source=server.go:602 msg="model load progress 0.96"
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_seq_max     = 4
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ctx         = 8192
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ctx_per_seq = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_batch       = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ubatch      = 512
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: flash_attn    = 1
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: freq_base     = 10000.0
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: freq_scale    = 1
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 42, can_shift = 1
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 32: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 33: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 34: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 35: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 36: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 37: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 38: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 39: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 40: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 41: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init:      CUDA0 KV buffer size =  2688.00 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: KV self size  = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model:  CUDA_Host  output buffer size =     3.96 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model:      CUDA0 compute buffer size =   507.00 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model:  CUDA_Host compute buffer size =   104.01 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: graph nodes  = 1398
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: graph splits = 86
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=INFO source=server.go:596 msg="llama runner started in 1.00 seconds"
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 12:44:33 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:33 | 200 |  1.565101416s |       127.0.0.1 | POST     "/api/generate"
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:467 msg="context for request finished"
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 12:44:36 ai-server ollama[854770]: time=2025-02-28T12:44:36.910Z level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 12:44:36 ai-server ollama[854770]: time=2025-02-28T12:44:36.911Z level=DEBUG source=routes.go:1505 msg="chat request" images=0 prompt="<start_of_turn>user\n5+8/16=?<end_of_turn>\n<start_of_turn>model\n"
Feb 28 12:44:36 ai-server ollama[854770]: time=2025-02-28T12:44:36.911Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=16 used=0 remaining=16
Feb 28 12:44:41 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:41 | 200 |  4.550076736s |       127.0.0.1 | POST     "/api/chat"
Feb 28 12:44:41 ai-server ollama[854770]: time=2025-02-28T12:44:41.436Z level=DEBUG source=sched.go:408 msg="context for request finished"
Feb 28 12:44:41 ai-server ollama[854770]: time=2025-02-28T12:44:41.436Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 12:44:41 ai-server ollama[854770]: time=2025-02-28T12:44:41.436Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 12:45:01 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:45:01 | 404 |    1.892022ms | 142.132.195.123 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.5.13-rc1

Environment

Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=f16"
Environment="CUDA_VISIBLE_DEVICES=0"
Environment="OLLAMA_CONTEXT_LENGTH=2048"
Environment="OLLAMA_DEBUG=1"
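For anyone reproducing this setup: on a systemd install, `Environment=` lines like the ones above conventionally go in a drop-in for the service. The exact file path is an assumption (the issue does not show where these lines live); `systemctl edit` manages the drop-in for you.

```shell
# Open (or create) a drop-in override for the service; paste the
# [Service] Environment= lines shown above into the editor it opens.
sudo systemctl edit ollama.service

# Apply the changes and restart the server.
sudo systemctl daemon-reload
sudo systemctl restart ollama.service

# Confirm the new settings took effect in the startup log.
sudo journalctl -u ollama.service -f
```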

u32 = 256 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 11: general.file_type u32 = 2 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 12: gemma2.attn_logit_softcapping f32 = 50.000000 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 13: gemma2.final_logit_softcapping f32 = 30.000000 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 14: gemma2.attention.sliding_window u32 = 4096 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 15: tokenizer.ggml.model str = llama Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = default Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ... Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 18: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000... Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 2 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 1 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 3 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0 Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = true Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 25: tokenizer.ggml.add_eos_token bool = false Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 26: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol... 
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = false
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - kv 28: general.quantization_version u32 = 2
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - type f32: 169 tensors
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - type q4_0: 294 tensors
Feb 28 12:44:32 ai-server ollama[854770]: llama_model_loader: - type q6_K: 1 tensors
Feb 28 12:44:32 ai-server ollama[854770]: print_info: file format = GGUF V3 (latest)
Feb 28 12:44:32 ai-server ollama[854770]: print_info: file type = Q4_0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: file size = 5.06 GiB (4.71 BPW)
Feb 28 12:44:32 ai-server ollama[854770]: init_tokenizer: initializing tokenizer for type 1
Feb 28 12:44:32 ai-server ollama[854770]: time=2025-02-28T12:44:32.539Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 45 '<unused38>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 74 '<unused67>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 55 '<unused48>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 99 '<unused92>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 102 '<unused95>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 44 '<unused37>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 26 '<unused19>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 42 '<unused35>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 92 '<unused85>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 90 '<unused83>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 106 '<start_of_turn>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 88 '<unused81>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 5 '<2mass>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 104 '<unused97>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 68 '<unused61>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 94 '<unused87>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 59 '<unused52>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 2 '<bos>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 25 '<unused18>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 93 '<unused86>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 95 '<unused88>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 76 '<unused69>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 97 '<unused90>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 56 '<unused49>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 81 '<unused74>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 13 '<unused6>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 51 '<unused44>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 47 '<unused40>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 8 '<unused1>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 103 '<unused96>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 75 '<unused68>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 43 '<unused36>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 79 '<unused72>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 39 '<unused32>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 49 '<unused42>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 41 '<unused34>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 34 '<unused27>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 6 '[@BOS@]' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 40 '<unused33>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 33 '<unused26>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 35 '<unused28>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 32 '<unused25>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 28 '<unused21>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 19 '<unused12>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 80 '<unused73>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 86 '<unused79>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 67 '<unused60>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 9 '<unused2>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 52 '<unused45>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 16 '<unused9>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 98 '<unused91>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 71 '<unused64>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 36 '<unused29>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 0 '<pad>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 11 '<unused4>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 70 '<unused63>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 77 '<unused70>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 64 '<unused57>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 50 '<unused43>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 20 '<unused13>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 73 '<unused66>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 23 '<unused16>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 38 '<unused31>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 21 '<unused14>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 15 '<unused8>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 37 '<unused30>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 14 '<unused7>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 30 '<unused23>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 62 '<unused55>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 3 '<unk>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 18 '<unused11>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 22 '<unused15>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 66 '<unused59>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 65 '<unused58>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 10 '<unused3>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 105 '<unused98>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 87 '<unused80>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 100 '<unused93>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 63 '<unused56>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 31 '<unused24>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 58 '<unused51>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 84 '<unused77>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 61 '<unused54>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 1 '<eos>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 60 '<unused53>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 91 '<unused84>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 83 '<unused76>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 85 '<unused78>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 27 '<unused20>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 96 '<unused89>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 72 '<unused65>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 53 '<unused46>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 82 '<unused75>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 7 '<unused0>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 4 '<mask>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 101 '<unused94>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 78 '<unused71>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 89 '<unused82>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 69 '<unused62>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 54 '<unused47>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 57 '<unused50>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 12 '<unused5>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 48 '<unused41>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 17 '<unused10>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 24 '<unused17>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 46 '<unused39>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: control token: 29 '<unused22>' is not marked as EOG
Feb 28 12:44:32 ai-server ollama[854770]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 28 12:44:32 ai-server ollama[854770]: load: special tokens cache size = 108
Feb 28 12:44:32 ai-server ollama[854770]: load: token to piece cache size = 1.6014 MB
Feb 28 12:44:32 ai-server ollama[854770]: print_info: arch = gemma2
Feb 28 12:44:32 ai-server ollama[854770]: print_info: vocab_only = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_ctx_train = 8192
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd = 3584
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_layer = 42
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_head = 16
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_head_kv = 8
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_rot = 256
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_swa = 4096
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_head_k = 256
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_head_v = 256
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_gqa = 2
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_k_gqa = 2048
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_embd_v_gqa = 2048
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_norm_eps = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_norm_rms_eps = 1.0e-06
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_clamp_kqv = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_max_alibi_bias = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: f_logit_scale = 0.0e+00
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_ff = 14336
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_expert = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_expert_used = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: causal attn = 1
Feb 28 12:44:32 ai-server ollama[854770]: print_info: pooling type = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: rope type = 2
Feb 28 12:44:32 ai-server ollama[854770]: print_info: rope scaling = linear
Feb 28 12:44:32 ai-server ollama[854770]: print_info: freq_base_train = 10000.0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: freq_scale_train = 1
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_ctx_orig_yarn = 8192
Feb 28 12:44:32 ai-server ollama[854770]: print_info: rope_finetuned = unknown
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_d_conv = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_d_inner = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_d_state = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_dt_rank = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: ssm_dt_b_c_rms = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: model type = 9B
Feb 28 12:44:32 ai-server ollama[854770]: print_info: model params = 9.24 B
Feb 28 12:44:32 ai-server ollama[854770]: print_info: general.name = gemma-2-9b-it
Feb 28 12:44:32 ai-server ollama[854770]: print_info: vocab type = SPM
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_vocab = 256000
Feb 28 12:44:32 ai-server ollama[854770]: print_info: n_merges = 0
Feb 28 12:44:32 ai-server ollama[854770]: print_info: BOS token = 2 '<bos>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOS token = 1 '<eos>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOT token = 107 '<end_of_turn>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: UNK token = 3 '<unk>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: PAD token = 0 '<pad>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: LF token = 227 '<0x0A>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOG token = 1 '<eos>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: EOG token = 107 '<end_of_turn>'
Feb 28 12:44:32 ai-server ollama[854770]: print_info: max token length = 93
Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: loading model tensors, this can take a while...
(mmap = true) Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 0 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 1 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 2 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 3 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 4 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 5 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 6 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 7 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 8 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 9 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 10 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 11 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 12 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 13 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 14 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 15 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 16 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 17 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 18 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 19 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 20 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 21 assigned to device CUDA0 Feb 28 12:44:32 
ai-server ollama[854770]: load_tensors: layer 22 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 23 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 24 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 25 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 26 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 27 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 28 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 29 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 30 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 31 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 32 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 33 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 34 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 35 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 36 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 37 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 38 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 39 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 40 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 41 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: layer 42 assigned to device CUDA0 Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with 
preferred buffer type CUDA_Host, using CPU instead Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: offloading 42 repeating layers to GPU Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: offloading output layer to GPU Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: offloaded 43/43 layers to GPU Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: CPU_Mapped model buffer size = 717.77 MiB Feb 28 12:44:32 ai-server ollama[854770]: load_tensors: CUDA0 model buffer size = 5185.21 MiB Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.041Z level=DEBUG source=server.go:602 msg="model load progress 0.96" Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_seq_max = 4 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ctx = 8192 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ctx_per_seq = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_batch = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ubatch = 512 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: flash_attn = 1 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: freq_base = 10000.0 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: freq_scale = 1 Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 42, can_shift = 1 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: 
llama_kv_cache_init: layer 3: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 2048, n_embd_v_gqa = 
2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 32: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 33: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 34: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 35: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 36: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 37: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 38: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 39: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 40: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: layer 41: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 12:44:33 ai-server ollama[854770]: llama_kv_cache_init: CUDA0 KV buffer size = 2688.00 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: KV self size = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: CUDA_Host output buffer size = 3.96 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: CUDA0 compute buffer size = 507.00 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: CUDA_Host compute buffer size = 104.01 MiB
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: graph nodes = 1398
Feb 28 12:44:33 ai-server ollama[854770]: llama_init_from_model: graph splits = 86
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=INFO source=server.go:596 msg="llama runner started in 1.00 seconds"
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 12:44:33 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:33 | 200 | 1.565101416s | 127.0.0.1 | POST "/api/generate"
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:467 msg="context for request finished"
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 12:44:33 ai-server ollama[854770]: time=2025-02-28T12:44:33.292Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 12:44:36 ai-server ollama[854770]: time=2025-02-28T12:44:36.910Z level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 12:44:36 ai-server ollama[854770]: time=2025-02-28T12:44:36.911Z level=DEBUG source=routes.go:1505 msg="chat request" images=0 prompt="<start_of_turn>user\n5+8/16=?<end_of_turn>\n<start_of_turn>model\n"
Feb 28 12:44:36 ai-server ollama[854770]: time=2025-02-28T12:44:36.911Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=16 used=0 remaining=16
Feb 28 12:44:41 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:44:41 | 200 | 4.550076736s | 127.0.0.1 | POST "/api/chat"
Feb 28 12:44:41 ai-server ollama[854770]: time=2025-02-28T12:44:41.436Z level=DEBUG source=sched.go:408 msg="context for request finished"
Feb 28 12:44:41 ai-server ollama[854770]: time=2025-02-28T12:44:41.436Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 12:44:41 ai-server ollama[854770]: time=2025-02-28T12:44:41.436Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 12:45:01 ai-server ollama[854770]: [GIN] 2025/02/28 - 12:45:01 | 404 | 1.892022ms | 142.132.195.123 | POST "/api/generate"
```

### OS

Linux

### GPU

Nvidia

### CPU

AMD

### Ollama version

0.5.13-rc1

### Environment

Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=f16"
Environment="CUDA_VISIBLE_DEVICES=0"
Environment="OLLAMA_CONTEXT_LENGTH=2048"
Environment="OLLAMA_DEBUG=1"
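For context on the reported numbers, the drop from 149 tokens/s on 0.5.12 to 35 tokens/s on 0.5.13-rc1 works out to roughly a 4.3× slowdown, or about a 77% reduction in evaluation rate. A quick sanity check on that arithmetic:

```python
# Eval rates reported in this issue (tokens/s)
v0512_rate = 149.0   # Ollama 0.5.12
rc1_rate = 35.0      # Ollama 0.5.13-rc1

slowdown = v0512_rate / rc1_rate               # how many times slower rc1 is
pct_drop = (1 - rc1_rate / v0512_rate) * 100   # percentage loss in eval rate

print(f"{slowdown:.2f}x slower, {pct_drop:.1f}% drop")  # prints "4.26x slower, 76.5% drop"
```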
GiteaMirror added the buildbug labels 2026-05-04 12:48:46 -05:00
Author
Owner

@rick-github commented on GitHub (Feb 28, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.


@rick-github commented on GitHub (Feb 28, 2025):

```
Feb 28 12:38:17 ai-server ollama[854770]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 28 12:38:17 ai-server ollama[854770]: time=2025-02-28T12:38:17.020Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1000 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=16
```

This may be the issue, depending on your CPU: ollama is loading a CPU backend without any vector extensions. It's not clear if both logs you have posted are from 0.5.13-rc1. If they are, can you post a log from version 0.5.12?
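One quick way to compare the two runs is to diff the feature flags each runner reports in its `system info` line. A minimal sketch of that comparison (`cpu_features` is a hypothetical helper, not part of ollama; the strings are copied from the logs in this thread):

```python
def cpu_features(system_info: str) -> set[str]:
    """Collect names of features reported as '= 1' in a ggml system info
    string, e.g. 'CPU : AVX = 1 | AVX2 = 1 | ...' -> {'AVX', 'AVX2', ...}."""
    feats = set()
    for part in system_info.split("|"):
        part = part.strip()
        if part.endswith("= 1"):
            # 'CPU : AVX = 1' -> 'AVX'
            feats.add(part.split("=")[0].split(":")[-1].strip())
    return feats

# system info strings from the 0.5.13-rc1 and 0.5.12 logs in this issue
rc1 = ("CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1000 | "
       "USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | cgo(gcc)")
v0512 = ("CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | "
         "CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | "
         "PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | "
         "F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | "
         "LLAMAFILE = 1 | cgo(gcc)")

# Vector extensions present in the 0.5.12 runner but missing from 0.5.13-rc1
print(cpu_features(v0512) - cpu_features(rc1))
```

On these two strings the difference is the full SSE/AVX/AVX-512 set, which is consistent with the rc1 runner reporting no vector extensions at all.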


@MMaturax commented on GitHub (Feb 28, 2025):

To avoid confusion, I deleted the previous log. The log from my first post belongs to version 0.5.13-rc1, while the one I just sent corresponds to version 0.5.12.

0.5.12

Image

zemin@ai-server:~$ journalctl -fu ollama
Feb 28 15:45:27 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:27 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:27 ai-server ollama[1169310]: device count 1
Feb 28 15:45:27 ai-server ollama[1169310]: time=2025-02-28T15:45:27.755Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:27 ai-server ollama[1169310]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA totalMem 32117 mb
Feb 28 15:45:27 ai-server ollama[1169310]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA freeMem 31603 mb
Feb 28 15:45:27 ai-server ollama[1169310]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] Compute Capability 12.0
Feb 28 15:45:27 ai-server ollama[1169310]: time=2025-02-28T15:45:27.887Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
Feb 28 15:45:27 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:27 ai-server ollama[1169310]: time=2025-02-28T15:45:27.887Z level=INFO source=types.go:130 msg="inference compute" id=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="30.9 GiB"
Feb 28 15:45:49 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:49 | 200 |      31.369µs |       127.0.0.1 | HEAD     "/"
Feb 28 15:45:49 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:49 | 200 |   19.940655ms |       127.0.0.1 | POST     "/api/show"
Feb 28 15:45:49 ai-server ollama[1169310]: time=2025-02-28T15:45:49.913Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.6 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.1 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:49 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:49 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:49 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:49 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:49 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:49 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:49 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.044Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.044Z level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.082Z level=DEBUG source=sched.go:225 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.082Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.082Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.1 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:50 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:50 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:50 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.209Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.209Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 parallel=4 available=33139130368 required="8.8 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.209Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.0 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.9 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:50 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:50 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:50 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.330Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.330Z level=INFO source=server.go:97 msg="system memory" total="62.4 GiB" free="59.9 GiB" free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.330Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.331Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.9 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:50 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:50 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:50 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.444Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=43 layers.offload=43 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.8 GiB" memory.required.partial="8.8 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[8.8 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:182 msg="enabling flash attention"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:302 msg="adding gpu library" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[/usr/local/lib/ollama/cuda_v12]
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --n-gpu-layers 43 --verbose --threads 16 --flash-attn --kv-cache-type f16 --parallel 4 --port 45351"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin CUDA_VISIBLE_DEVICES=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 LD_LIBRARY_PATH=/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=sched.go:450 msg="loaded runners" count=1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.457Z level=INFO source=runner.go:932 msg="starting go runner"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.457Z level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_cuda_init: found 1 CUDA devices:
Feb 28 15:45:50 ai-server ollama[1169310]:   Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Feb 28 15:45:50 ai-server ollama[1169310]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.492Z level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-skylakex.so score: 183
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-icelake.so score: 1463
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-alderlake.so score: 0
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-haswell.so score: 55
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-sandybridge.so score: 20
Feb 28 15:45:50 ai-server ollama[1169310]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.493Z level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=16
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.493Z level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:45351"
Feb 28 15:45:50 ai-server ollama[1169310]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 5090) - 31603 MiB free
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   0:                       general.architecture str              = gemma2
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  11:                          general.file_type u32              = 2
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv  28:               general.quantization_version u32              = 2
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - type  f32:  169 tensors
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - type q4_0:  294 tensors
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - type q6_K:    1 tensors
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.696Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     45 '<unused38>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     74 '<unused67>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     55 '<unused48>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     99 '<unused92>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:    102 '<unused95>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     44 '<unused37>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     26 '<unused19>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     42 '<unused35>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     92 '<unused85>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     90 '<unused83>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:    106 '<start_of_turn>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     88 '<unused81>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:      5 '<2mass>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:    104 '<unused97>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     68 '<unused61>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     94 '<unused87>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     59 '<unused52>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:      2 '<bos>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     25 '<unused18>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     93 '<unused86>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     95 '<unused88>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     76 '<unused69>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     97 '<unused90>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     56 '<unused49>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     81 '<unused74>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     13 '<unused6>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     51 '<unused44>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     47 '<unused40>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:      8 '<unused1>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:    103 '<unused96>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     75 '<unused68>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     43 '<unused36>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     79 '<unused72>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     39 '<unused32>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     49 '<unused42>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     41 '<unused34>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     34 '<unused27>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:      6 '[@BOS@]' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     40 '<unused33>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     33 '<unused26>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     35 '<unused28>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     32 '<unused25>' is not marked as EOG
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token:     28 '<unused21>' is not marked as EOG
[... 64 more "llm_load_vocab: control token ... is not marked as EOG" lines omitted for brevity ...]
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: special tokens cache size = 108
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: token to piece cache size = 1.6014 MB
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: arch             = gemma2
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: vocab type       = SPM
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_vocab          = 256000
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_merges         = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: vocab_only       = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_ctx_train      = 8192
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd           = 3584
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_layer          = 42
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_head           = 16
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_head_kv        = 8
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_rot            = 256
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_swa            = 4096
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd_head_k    = 256
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd_head_v    = 256
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_gqa            = 2
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd_k_gqa     = 2048
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd_v_gqa     = 2048
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_ff             = 14336
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_expert         = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_expert_used    = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: causal attn      = 1
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: pooling type     = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: rope type        = 2
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: rope scaling     = linear
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: freq_base_train  = 10000.0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: freq_scale_train = 1
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_ctx_orig_yarn  = 8192
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: rope_finetuned   = unknown
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_d_conv       = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_d_inner      = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_d_state      = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_dt_rank      = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model type       = 9B
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model ftype      = Q4_0
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model params     = 9.24 B
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model size       = 5.06 GiB (4.71 BPW)
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: general.name     = gemma-2-9b-it
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: BOS token        = 2 '<bos>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOS token        = 1 '<eos>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: UNK token        = 3 '<unk>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: PAD token        = 0 '<pad>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: LF token         = 227 '<0x0A>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOG token        = 1 '<eos>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOG token        = 107 '<end_of_turn>'
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: max token length = 93
Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: offloading 42 repeating layers to GPU
Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: offloading output layer to GPU
Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: offloaded 43/43 layers to GPU
Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors:        CUDA0 model buffer size =  5185.21 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors:   CPU_Mapped model buffer size =   717.77 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.198Z level=DEBUG source=server.go:602 msg="model load progress 0.45"
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_seq_max     = 4
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ctx         = 8192
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ctx_per_seq = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_batch       = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ubatch      = 512
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: flash_attn    = 1
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: freq_base     = 10000.0
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: freq_scale    = 1
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 42, can_shift = 1
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 32: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 33: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 34: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 35: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 36: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 37: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 38: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 39: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 40: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 41: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init:      CUDA0 KV buffer size =  2688.00 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: KV self size  = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model:  CUDA_Host  output buffer size =     3.96 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model:      CUDA0 compute buffer size =   507.00 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model:  CUDA_Host compute buffer size =    39.01 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: graph nodes  = 1398
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: graph splits = 2
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=INFO source=server.go:596 msg="llama runner started in 1.01 seconds"
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:45:51 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:51 | 200 |   1.55992851s |       127.0.0.1 | POST     "/api/generate"
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:467 msg="context for request finished"
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 15:45:55 ai-server ollama[1169310]: time=2025-02-28T15:45:55.576Z level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:45:55 ai-server ollama[1169310]: time=2025-02-28T15:45:55.576Z level=DEBUG source=routes.go:1480 msg="chat request" images=0 prompt="<start_of_turn>user\n5+8/16=?<end_of_turn>\n<start_of_turn>model\n"
Feb 28 15:45:55 ai-server ollama[1169310]: time=2025-02-28T15:45:55.577Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=16 used=0 remaining=16
Feb 28 15:45:56 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:56 | 200 |  592.031894ms |       127.0.0.1 | POST     "/api/chat"
Feb 28 15:45:56 ai-server ollama[1169310]: time=2025-02-28T15:45:56.146Z level=DEBUG source=sched.go:408 msg="context for request finished"
Feb 28 15:45:56 ai-server ollama[1169310]: time=2025-02-28T15:45:56.146Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:45:56 ai-server ollama[1169310]: time=2025-02-28T15:45:56.146Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
<!-- gh-comment-id:2690983994 -->
@MMaturax commented on GitHub (Feb 28, 2025):

To avoid confusion, I deleted the previous log. The log from my first post belongs to version 0.5.13-rc1, while the one I just sent corresponds to version 0.5.12.

0.5.12

![Image](https://github.com/user-attachments/assets/c18fa31e-a6f9-4024-88ca-c7bdea0cbe98)

```
zemin@ai-server:~$ journalctl -fu ollama
Feb 28 15:45:27 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:27 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:27 ai-server ollama[1169310]: device count 1
Feb 28 15:45:27 ai-server ollama[1169310]: time=2025-02-28T15:45:27.755Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:27 ai-server ollama[1169310]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA totalMem 32117 mb
Feb 28 15:45:27 ai-server ollama[1169310]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA freeMem 31603 mb
Feb 28 15:45:27 ai-server ollama[1169310]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] Compute Capability 12.0
Feb 28 15:45:27 ai-server ollama[1169310]: time=2025-02-28T15:45:27.887Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
Feb 28 15:45:27 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:27 ai-server ollama[1169310]: time=2025-02-28T15:45:27.887Z level=INFO source=types.go:130 msg="inference compute" id=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="30.9 GiB"
Feb 28 15:45:49 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:49 | 200 | 31.369µs | 127.0.0.1 | HEAD "/"
Feb 28 15:45:49 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:49 | 200 | 19.940655ms | 127.0.0.1 | POST "/api/show"
Feb 28 15:45:49 ai-server ollama[1169310]: time=2025-02-28T15:45:49.913Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.6 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.1 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:49 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:49 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:49 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:49 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:49 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:49 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:49 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:49 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.044Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.044Z level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.082Z level=DEBUG source=sched.go:225 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.082Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.082Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.1 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:50 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:50 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:50 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.209Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.209Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 parallel=4 available=33139130368 required="8.8 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.209Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.0 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.9 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:50 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:50 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:50 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.330Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.330Z level=INFO source=server.go:97 msg="system memory" total="62.4 GiB" free="59.9 GiB" free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.330Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.331Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.9 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="60.0 GiB" now.free_swap="8.0 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuInit - 0x79709fd0de00
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDriverGetVersion - 0x79709fd0de20
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetCount - 0x79709fd0de60
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGet - 0x79709fd0de40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetAttribute - 0x79709fd0df40
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetUuid - 0x79709fd0dea0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuDeviceGetName - 0x79709fd0de80
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxCreate_v3 - 0x79709fd0e120
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuMemGetInfo_v2 - 0x79709fd0e8a0
Feb 28 15:45:50 ai-server ollama[1169310]: dlsym: cuCtxDestroy - 0x79709fd6c9f0
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuInit
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDriverGetVersion
Feb 28 15:45:50 ai-server ollama[1169310]: raw version 0x2f30
Feb 28 15:45:50 ai-server ollama[1169310]: CUDA driver version: 12.8
Feb 28 15:45:50 ai-server ollama[1169310]: calling cuDeviceGetCount
Feb 28 15:45:50 ai-server ollama[1169310]: device count 1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.444Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:45:50 ai-server ollama[1169310]: releasing cuda driver library
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=43 layers.offload=43 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.8 GiB" memory.required.partial="8.8 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[8.8 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:182 msg="enabling flash attention"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:302 msg="adding gpu library" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[/usr/local/lib/ollama/cuda_v12]
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --n-gpu-layers 43 --verbose --threads 16 --flash-attn --kv-cache-type f16 --parallel 4 --port 45351"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin CUDA_VISIBLE_DEVICES=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 LD_LIBRARY_PATH=/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama]"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=sched.go:450 msg="loaded runners" count=1
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.445Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.457Z level=INFO source=runner.go:932 msg="starting go runner"
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.457Z level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_cuda_init: found 1 CUDA devices:
Feb 28 15:45:50 ai-server ollama[1169310]: Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Feb 28 15:45:50 ai-server ollama[1169310]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.492Z level=DEBUG source=ggml.go:89 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-skylakex.so score: 183
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-icelake.so score: 1463
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-alderlake.so score: 0
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-haswell.so score: 55
Feb 28 15:45:50 ai-server ollama[1169310]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-sandybridge.so score: 20
Feb 28 15:45:50 ai-server ollama[1169310]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.493Z level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=16
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.493Z level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:45351"
Feb 28 15:45:50 ai-server
ollama[1169310]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 5090) - 31603 MiB free Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest)) Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 0: general.architecture str = gemma2 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 1: general.name str = gemma-2-9b-it Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 2: gemma2.context_length u32 = 8192 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 3: gemma2.embedding_length u32 = 3584 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 4: gemma2.block_count u32 = 42 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 5: gemma2.feed_forward_length u32 = 14336 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 6: gemma2.attention.head_count u32 = 16 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 7: gemma2.attention.head_count_kv u32 = 8 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 8: gemma2.attention.layer_norm_rms_epsilon f32 = 0.000001 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 9: gemma2.attention.key_length u32 = 256 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 10: gemma2.attention.value_length u32 = 256 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 11: general.file_type u32 = 2 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 12: gemma2.attn_logit_softcapping f32 = 50.000000 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 13: 
gemma2.final_logit_softcapping f32 = 30.000000 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 14: gemma2.attention.sliding_window u32 = 4096 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 15: tokenizer.ggml.model str = llama Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = default Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ... Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 18: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000... Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 2 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 1 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 3 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = true Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 25: tokenizer.ggml.add_eos_token bool = false Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 26: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol... 
Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = false Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - kv 28: general.quantization_version u32 = 2 Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - type f32: 169 tensors Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - type q4_0: 294 tensors Feb 28 15:45:50 ai-server ollama[1169310]: llama_model_loader: - type q6_K: 1 tensors Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.696Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 45 '<unused38>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 74 '<unused67>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 55 '<unused48>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 99 '<unused92>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 102 '<unused95>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 44 '<unused37>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 26 '<unused19>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 42 '<unused35>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 92 '<unused85>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 90 '<unused83>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 106 '<start_of_turn>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 88 '<unused81>' is not 
marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 5 '<2mass>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 104 '<unused97>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 68 '<unused61>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 94 '<unused87>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 59 '<unused52>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 2 '<bos>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 25 '<unused18>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 93 '<unused86>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 95 '<unused88>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 76 '<unused69>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 97 '<unused90>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 56 '<unused49>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 81 '<unused74>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 13 '<unused6>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 51 '<unused44>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 47 '<unused40>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 8 '<unused1>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 103 '<unused96>' is not marked as 
EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 75 '<unused68>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 43 '<unused36>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 79 '<unused72>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 39 '<unused32>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 49 '<unused42>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 41 '<unused34>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 34 '<unused27>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 6 '[@BOS@]' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 40 '<unused33>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 33 '<unused26>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 35 '<unused28>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 32 '<unused25>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 28 '<unused21>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 19 '<unused12>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 80 '<unused73>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 86 '<unused79>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 67 '<unused60>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 9 '<unused2>' is not marked as EOG Feb 
28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 52 '<unused45>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 16 '<unused9>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 98 '<unused91>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 71 '<unused64>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 36 '<unused29>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 0 '<pad>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 11 '<unused4>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 70 '<unused63>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 77 '<unused70>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 64 '<unused57>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 50 '<unused43>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 20 '<unused13>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 73 '<unused66>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 23 '<unused16>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 38 '<unused31>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 21 '<unused14>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 15 '<unused8>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 37 '<unused30>' is not marked as EOG Feb 28 
15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 14 '<unused7>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 30 '<unused23>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 62 '<unused55>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 3 '<unk>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 18 '<unused11>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 22 '<unused15>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 66 '<unused59>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 65 '<unused58>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 10 '<unused3>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 105 '<unused98>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 87 '<unused80>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 100 '<unused93>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 63 '<unused56>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 31 '<unused24>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 58 '<unused51>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 84 '<unused77>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 61 '<unused54>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 1 '<eos>' is not marked as EOG Feb 28 15:45:50 
ai-server ollama[1169310]: llm_load_vocab: control token: 60 '<unused53>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 91 '<unused84>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 83 '<unused76>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 85 '<unused78>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 27 '<unused20>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 96 '<unused89>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 72 '<unused65>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 53 '<unused46>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 82 '<unused75>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 7 '<unused0>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 4 '<mask>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 101 '<unused94>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 78 '<unused71>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 89 '<unused82>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 69 '<unused62>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 54 '<unused47>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 57 '<unused50>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 12 '<unused5>' is not marked as EOG Feb 28 15:45:50 
ai-server ollama[1169310]: llm_load_vocab: control token: 48 '<unused41>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 17 '<unused10>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 24 '<unused17>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 46 '<unused39>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: control token: 29 '<unused22>' is not marked as EOG Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: special tokens cache size = 108 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_vocab: token to piece cache size = 1.6014 MB Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: format = GGUF V3 (latest) Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: arch = gemma2 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: vocab type = SPM Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_vocab = 256000 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_merges = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: vocab_only = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_ctx_train = 8192 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd = 3584 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_layer = 42 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_head = 16 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_head_kv = 8 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_rot = 256 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_swa = 4096 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd_head_k = 256 Feb 28 15:45:50 
ai-server ollama[1169310]: llm_load_print_meta: n_embd_head_v = 256 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_gqa = 2 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd_k_gqa = 2048 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_embd_v_gqa = 2048 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_norm_eps = 0.0e+00 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: f_logit_scale = 0.0e+00 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_ff = 14336 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_expert = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_expert_used = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: causal attn = 1 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: pooling type = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: rope type = 2 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: rope scaling = linear Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: freq_base_train = 10000.0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: freq_scale_train = 1 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: n_ctx_orig_yarn = 8192 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: rope_finetuned = unknown Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_d_conv = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_d_inner = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_d_state = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: ssm_dt_rank = 0 Feb 28 15:45:50 
ai-server ollama[1169310]: llm_load_print_meta: ssm_dt_b_c_rms = 0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model type = 9B Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model ftype = Q4_0 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model params = 9.24 B Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: model size = 5.06 GiB (4.71 BPW) Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: general.name = gemma-2-9b-it Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: BOS token = 2 '<bos>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOS token = 1 '<eos>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOT token = 107 '<end_of_turn>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: UNK token = 3 '<unk>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: PAD token = 0 '<pad>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: LF token = 227 '<0x0A>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOG token = 1 '<eos>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: EOG token = 107 '<end_of_turn>' Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_print_meta: max token length = 93 Feb 28 15:45:50 ai-server ollama[1169310]: llm_load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: offloading 42 repeating layers to GPU Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: offloading output layer to GPU Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: offloaded 43/43 layers to GPU Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: CUDA0 model buffer size = 5185.21 MiB Feb 28 15:45:51 ai-server ollama[1169310]: llm_load_tensors: CPU_Mapped model buffer size = 717.77 MiB Feb 28 15:45:51 ai-server 
ollama[1169310]: time=2025-02-28T15:45:51.198Z level=DEBUG source=server.go:602 msg="model load progress 0.45" Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_seq_max = 4 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ctx = 8192 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ctx_per_seq = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_batch = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ubatch = 512 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: flash_attn = 1 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: freq_base = 10000.0 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: freq_scale = 1 Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 42, can_shift = 1 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: 
llama_kv_cache_init: layer 7: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048 Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 
2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 32: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 33: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 34: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 35: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 36: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 37: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 38: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 39: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 40: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init: layer 41: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:45:51 ai-server ollama[1169310]: llama_kv_cache_init:      CUDA0 KV buffer size =  2688.00 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: KV self size  = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model:  CUDA_Host  output buffer size =     3.96 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model:      CUDA0 compute buffer size =   507.00 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model:  CUDA_Host compute buffer size =    39.01 MiB
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: graph nodes  = 1398
Feb 28 15:45:51 ai-server ollama[1169310]: llama_new_context_with_model: graph splits = 2
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=INFO source=server.go:596 msg="llama runner started in 1.01 seconds"
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:45:51 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:51 | 200 |  1.55992851s |       127.0.0.1 | POST     "/api/generate"
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:467 msg="context for request finished"
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:45:51 ai-server ollama[1169310]: time=2025-02-28T15:45:51.451Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 15:45:55 ai-server ollama[1169310]: time=2025-02-28T15:45:55.576Z level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:45:55 ai-server ollama[1169310]: time=2025-02-28T15:45:55.576Z level=DEBUG source=routes.go:1480 msg="chat request" images=0 prompt="<start_of_turn>user\n5+8/16=?<end_of_turn>\n<start_of_turn>model\n"
Feb 28 15:45:55 ai-server ollama[1169310]: time=2025-02-28T15:45:55.577Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=16 used=0 remaining=16
Feb 28 15:45:56 ai-server ollama[1169310]: [GIN] 2025/02/28 - 15:45:56 | 200 | 592.031894ms |       127.0.0.1 | POST     "/api/chat"
Feb 28 15:45:56 ai-server ollama[1169310]: time=2025-02-28T15:45:56.146Z level=DEBUG source=sched.go:408 msg="context for request finished"
Feb 28 15:45:56 ai-server ollama[1169310]: time=2025-02-28T15:45:56.146Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:45:56 ai-server ollama[1169310]: time=2025-02-28T15:45:56.146Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
```

@rick-github commented on GitHub (Feb 28, 2025):

```
Feb 28 15:45:50 ai-server ollama[1169310]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 28 15:45:50 ai-server ollama[1169310]: time=2025-02-28T15:45:50.493Z level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=16
```

Yeah, same CPU backend, but this one has vector extensions. I think this is a build issue.
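One quick way to compare two builds on this point is to pull the `CUDA : ARCHS` field out of the runner's `system info` log line and check whether it includes an architecture the GPU can actually use. A minimal sketch, using a sample line from the logs above (the parsing is illustrative; in practice you would feed it from `journalctl -u ollama | grep 'msg=system info'`):

```shell
# Sample "system info" line as logged by the slow 0.5.13-rc1 build.
line='CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1'

# Extract the comma-separated list of compiled CUDA architectures.
archs=$(printf '%s\n' "$line" | sed -n 's/.*ARCHS = \([0-9,]*\).*/\1/p')
echo "compiled archs: $archs"

# An RTX 5090 reports compute capability 12.0 (arch 1200). A build whose list
# stops at 900 has no native or forward-compatible code for it; the working
# build's list in the later log reaches 1000.
case ",$archs," in
  *,1200,*|*,1000,*) echo "build includes a Blackwell-capable arch" ;;
  *)                 echo "build lacks Blackwell (sm_120) support" ;;
esac
```

Running this against the slow build's line reports the missing architecture, which matches the "build issue" diagnosis.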


@MMaturax commented on GitHub (Feb 28, 2025):

To be sure, I reinstalled version 0.5.13-rc1 and repeated the test.

Image

zemin@ai-server:~$ journalctl -fu ollama
Feb 28 15:55:24 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:24 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:24 ai-server ollama[1170322]: device count 1
Feb 28 15:55:24 ai-server ollama[1170322]: time=2025-02-28T15:55:24.596Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:24 ai-server ollama[1170322]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA totalMem 32117 mb
Feb 28 15:55:24 ai-server ollama[1170322]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA freeMem 31603 mb
Feb 28 15:55:24 ai-server ollama[1170322]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] Compute Capability 12.0
Feb 28 15:55:24 ai-server ollama[1170322]: time=2025-02-28T15:55:24.732Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
Feb 28 15:55:24 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:24 ai-server ollama[1170322]: time=2025-02-28T15:55:24.732Z level=INFO source=types.go:130 msg="inference compute" id=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="30.9 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:40 | 200 |      34.719µs |       127.0.0.1 | HEAD     "/"
Feb 28 15:55:40 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:40 | 200 |   23.654941ms |       127.0.0.1 | POST     "/api/show"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.511Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.3 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.9 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.649Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:40 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.649Z level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.689Z level=DEBUG source=sched.go:225 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.689Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.689Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.9 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.8 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.815Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:40 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.815Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 parallel=4 available=33139130368 required="8.8 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.815Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.8 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.8 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:40 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=INFO source=server.go:97 msg="system memory" total="62.4 GiB" free="59.8 GiB" free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.8 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.8 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:41 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=43 layers.offload=43 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.8 GiB" memory.required.partial="8.8 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[8.8 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=INFO source=server.go:182 msg="enabling flash attention"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=DEBUG source=server.go:302 msg="adding gpu library" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[/usr/local/lib/ollama/cuda_v12]
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --n-gpu-layers 43 --verbose --threads 16 --flash-attn --kv-cache-type f16 --parallel 4 --port 38621"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin CUDA_VISIBLE_DEVICES=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 LD_LIBRARY_PATH=/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama]"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=sched.go:450 msg="loaded runners" count=1
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.067Z level=INFO source=runner.go:931 msg="starting go runner"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.067Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_cuda_init: found 1 CUDA devices:
Feb 28 15:55:41 ai-server ollama[1170322]:   Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Feb 28 15:55:41 ai-server ollama[1170322]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.108Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-skylakex.so score: 183
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-icelake.so score: 1463
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-alderlake.so score: 0
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-haswell.so score: 55
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-sandybridge.so score: 20
Feb 28 15:55:41 ai-server ollama[1170322]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.109Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1000 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=16
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.109Z level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:38621"
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 31603 MiB free
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   0:                       general.architecture str              = gemma2
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  11:                          general.file_type u32              = 2
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv  28:               general.quantization_version u32              = 2
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - type  f32:  169 tensors
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - type q4_0:  294 tensors
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - type q6_K:    1 tensors
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: file format = GGUF V3 (latest)
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: file type   = Q4_0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: file size   = 5.06 GiB (4.71 BPW)
Feb 28 15:55:41 ai-server ollama[1170322]: init_tokenizer: initializing tokenizer for type 1
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.310Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     45 '<unused38>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     74 '<unused67>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     55 '<unused48>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     99 '<unused92>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:    102 '<unused95>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     44 '<unused37>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     26 '<unused19>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     42 '<unused35>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     92 '<unused85>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     90 '<unused83>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:    106 '<start_of_turn>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     88 '<unused81>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      5 '<2mass>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:    104 '<unused97>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     68 '<unused61>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     94 '<unused87>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     59 '<unused52>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      2 '<bos>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     25 '<unused18>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     93 '<unused86>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     95 '<unused88>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     76 '<unused69>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     97 '<unused90>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     56 '<unused49>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     81 '<unused74>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     13 '<unused6>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     51 '<unused44>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     47 '<unused40>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      8 '<unused1>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:    103 '<unused96>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     75 '<unused68>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     43 '<unused36>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     79 '<unused72>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     39 '<unused32>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     49 '<unused42>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     41 '<unused34>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     34 '<unused27>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      6 '[@BOS@]' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     40 '<unused33>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     33 '<unused26>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     35 '<unused28>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     32 '<unused25>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     28 '<unused21>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     19 '<unused12>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     80 '<unused73>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     86 '<unused79>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     67 '<unused60>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      9 '<unused2>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     52 '<unused45>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     16 '<unused9>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     98 '<unused91>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     71 '<unused64>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     36 '<unused29>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      0 '<pad>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     11 '<unused4>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     70 '<unused63>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     77 '<unused70>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     64 '<unused57>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     50 '<unused43>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     20 '<unused13>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     73 '<unused66>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     23 '<unused16>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     38 '<unused31>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     21 '<unused14>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     15 '<unused8>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     37 '<unused30>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     14 '<unused7>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     30 '<unused23>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     62 '<unused55>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      3 '<unk>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     18 '<unused11>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     22 '<unused15>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     66 '<unused59>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     65 '<unused58>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     10 '<unused3>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:    105 '<unused98>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     87 '<unused80>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:    100 '<unused93>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     63 '<unused56>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     31 '<unused24>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     58 '<unused51>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     84 '<unused77>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     61 '<unused54>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      1 '<eos>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     60 '<unused53>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     91 '<unused84>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     83 '<unused76>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     85 '<unused78>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     27 '<unused20>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     96 '<unused89>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     72 '<unused65>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     53 '<unused46>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     82 '<unused75>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      7 '<unused0>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:      4 '<mask>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:    101 '<unused94>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     78 '<unused71>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     89 '<unused82>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     69 '<unused62>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     54 '<unused47>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     57 '<unused50>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     12 '<unused5>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     48 '<unused41>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     17 '<unused10>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     24 '<unused17>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     46 '<unused39>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: control token:     29 '<unused22>' is not marked as EOG
Feb 28 15:55:41 ai-server ollama[1170322]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 28 15:55:41 ai-server ollama[1170322]: load: special tokens cache size = 108
Feb 28 15:55:41 ai-server ollama[1170322]: load: token to piece cache size = 1.6014 MB
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: arch             = gemma2
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: vocab_only       = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_ctx_train      = 8192
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd           = 3584
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_layer          = 42
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_head           = 16
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_head_kv        = 8
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_rot            = 256
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_swa            = 4096
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_head_k    = 256
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_head_v    = 256
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_gqa            = 2
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_k_gqa     = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_v_gqa     = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_norm_eps       = 0.0e+00
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_norm_rms_eps   = 1.0e-06
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_clamp_kqv      = 0.0e+00
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_max_alibi_bias = 0.0e+00
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_logit_scale    = 0.0e+00
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_ff             = 14336
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_expert         = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_expert_used    = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: causal attn      = 1
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: pooling type     = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: rope type        = 2
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: rope scaling     = linear
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: freq_base_train  = 10000.0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: freq_scale_train = 1
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_ctx_orig_yarn  = 8192
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: rope_finetuned   = unknown
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_d_conv       = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_d_inner      = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_d_state      = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_dt_rank      = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_dt_b_c_rms   = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: model type       = 9B
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: model params     = 9.24 B
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: general.name     = gemma-2-9b-it
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: vocab type       = SPM
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_vocab          = 256000
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_merges         = 0
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: BOS token        = 2 '<bos>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOS token        = 1 '<eos>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOT token        = 107 '<end_of_turn>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: UNK token        = 3 '<unk>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: PAD token        = 0 '<pad>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: LF token         = 227 '<0x0A>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOG token        = 1 '<eos>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOG token        = 107 '<end_of_turn>'
Feb 28 15:55:41 ai-server ollama[1170322]: print_info: max token length = 93
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   0 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   1 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   2 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   3 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   4 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   5 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   6 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   7 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   8 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer   9 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  10 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  11 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  12 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  13 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  14 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  15 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  16 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  17 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  18 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  19 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  20 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  21 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  22 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  23 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  24 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  25 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  26 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  27 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  28 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  29 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  30 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  31 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  32 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  33 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  34 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  35 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  36 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  37 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  38 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  39 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  40 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  41 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer  42 assigned to device CUDA0
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: tensor 'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: offloading 42 repeating layers to GPU
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: offloading output layer to GPU
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: offloaded 43/43 layers to GPU
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors:   CPU_Mapped model buffer size =   717.77 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors:        CUDA0 model buffer size =  5185.21 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.812Z level=DEBUG source=server.go:602 msg="model load progress 0.81"
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_seq_max     = 4
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ctx         = 8192
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ctx_per_seq = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_batch       = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ubatch      = 512
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: flash_attn    = 1
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: freq_base     = 10000.0
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: freq_scale    = 1
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 42, can_shift = 1
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 32: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 33: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 34: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 35: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 36: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 37: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 38: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 39: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 40: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 41: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init:      CUDA0 KV buffer size =  2688.00 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: KV self size  = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model:  CUDA_Host  output buffer size =     3.96 MiB
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model:      CUDA0 compute buffer size =   507.00 MiB
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model:  CUDA_Host compute buffer size =   104.01 MiB
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model: graph nodes  = 1398
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model: graph splits = 86
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=INFO source=server.go:596 msg="llama runner started in 1.00 seconds"
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:55:42 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:42 | 200 |  1.572278314s |       127.0.0.1 | POST     "/api/generate"
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:467 msg="context for request finished"
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 15:55:48 ai-server ollama[1170322]: time=2025-02-28T15:55:48.490Z level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:55:48 ai-server ollama[1170322]: time=2025-02-28T15:55:48.490Z level=DEBUG source=routes.go:1505 msg="chat request" images=0 prompt="<start_of_turn>user\n5+8/161=?<end_of_turn>\n<start_of_turn>model\n"
Feb 28 15:55:48 ai-server ollama[1170322]: time=2025-02-28T15:55:48.491Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=17 used=0 remaining=17
Feb 28 15:55:53 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:53 | 200 |  5.459593361s |       127.0.0.1 | POST     "/api/chat"
Feb 28 15:55:53 ai-server ollama[1170322]: time=2025-02-28T15:55:53.925Z level=DEBUG source=sched.go:408 msg="context for request finished"
Feb 28 15:55:53 ai-server ollama[1170322]: time=2025-02-28T15:55:53.925Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:55:53 ai-server ollama[1170322]: time=2025-02-28T15:55:53.925Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 15:56:01 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:56:01 | 200 |       16.17µs |       127.0.0.1 | HEAD     "/"
Feb 28 15:56:01 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:56:01 | 200 |      56.589µs |       127.0.0.1 | GET      "/api/ps"
Feb 28 15:56:04 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:56:04 | 200 |       24.58µs |       127.0.0.1 | GET      "/api/version"
<!-- gh-comment-id:2691006192 --> @MMaturax commented on GitHub (Feb 28, 2025):

To be sure, I reinstalled version 0.5.13-rc1 and repeated the test.

![Image](https://github.com/user-attachments/assets/8190de4c-8637-45be-abe3-ebc00d0aae9a)

```
zemin@ai-server:~$ journalctl -fu ollama

Feb 28 15:55:24 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:24 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:24 ai-server ollama[1170322]: device count 1
Feb 28 15:55:24 ai-server ollama[1170322]: time=2025-02-28T15:55:24.596Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:24 ai-server ollama[1170322]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA totalMem 32117 mb
Feb 28 15:55:24 ai-server ollama[1170322]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] CUDA freeMem 31603 mb
Feb 28 15:55:24 ai-server ollama[1170322]: [GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7] Compute Capability 12.0
Feb 28 15:55:24 ai-server ollama[1170322]: time=2025-02-28T15:55:24.732Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
Feb 28 15:55:24 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:24 ai-server ollama[1170322]: time=2025-02-28T15:55:24.732Z level=INFO source=types.go:130 msg="inference compute" id=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="30.9 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:40 | 200 | 34.719µs | 127.0.0.1 | HEAD "/"
Feb 28 15:55:40 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:40 | 200 | 23.654941ms | 127.0.0.1 | POST "/api/show"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.511Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="60.3 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.9 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.649Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:40 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.649Z level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.689Z level=DEBUG source=sched.go:225 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.689Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.689Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.9 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.8 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.815Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:40 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.815Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 parallel=4 available=33139130368 required="8.8 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.815Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.8 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.8 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:40 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=INFO source=server.go:97 msg="system memory" total="62.4 GiB" free="59.8 GiB" free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[30.9 GiB]"
Feb 28 15:55:40 ai-server ollama[1170322]: time=2025-02-28T15:55:40.936Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.4 GiB" before.free="59.8 GiB" before.free_swap="8.0 GiB" now.total="62.4 GiB" now.free="59.8 GiB" now.free_swap="8.0 GiB"
Feb 28 15:55:40 ai-server ollama[1170322]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.16
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuInit - 0x7cb24bd0de00
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDriverGetVersion - 0x7cb24bd0de20
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetCount - 0x7cb24bd0de60
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGet - 0x7cb24bd0de40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetAttribute - 0x7cb24bd0df40
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetUuid - 0x7cb24bd0dea0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuDeviceGetName - 0x7cb24bd0de80
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxCreate_v3 - 0x7cb24bd0e120
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuMemGetInfo_v2 - 0x7cb24bd0e8a0
Feb 28 15:55:40 ai-server ollama[1170322]: dlsym: cuCtxDestroy - 0x7cb24bd6c9f0
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuInit
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDriverGetVersion
Feb 28 15:55:40 ai-server ollama[1170322]: raw version 0x2f30
Feb 28 15:55:40 ai-server ollama[1170322]: CUDA driver version: 12.8
Feb 28 15:55:40 ai-server ollama[1170322]: calling cuDeviceGetCount
Feb 28 15:55:40 ai-server ollama[1170322]: device count 1
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 name="NVIDIA GeForce RTX 5090" overhead="0 B" before.total="31.4 GiB" before.free="30.9 GiB" now.total="31.4 GiB" now.free="30.9 GiB" now.used="513.4 MiB"
Feb 28 15:55:41 ai-server ollama[1170322]: releasing cuda driver library
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=43 layers.offload=43 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.8 GiB" memory.required.partial="8.8 GiB" memory.required.kv="2.6 GiB" memory.required.allocations="[8.8 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=INFO source=server.go:182 msg="enabling flash attention"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.058Z level=DEBUG source=server.go:259 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=DEBUG source=server.go:302 msg="adding gpu library" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[/usr/local/lib/ollama/cuda_v12]
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --n-gpu-layers 43 --verbose --threads 16 --flash-attn --kv-cache-type f16 --parallel 4 --port 38621"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=DEBUG source=server.go:398 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin CUDA_VISIBLE_DEVICES=GPU-0aa928c0-ece6-7698-4db1-ac130bfe47b7 LD_LIBRARY_PATH=/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama/cuda_v12:/usr/local/lib/ollama]"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=sched.go:450 msg="loaded runners" count=1
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.059Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.067Z level=INFO source=runner.go:931 msg="starting go runner"
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.067Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Feb 28 15:55:41 ai-server ollama[1170322]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Feb 28 15:55:41 ai-server ollama[1170322]: ggml_cuda_init: found 1 CUDA devices: Feb 28 15:55:41 ai-server ollama[1170322]: Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes Feb 28 15:55:41 ai-server ollama[1170322]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.108Z level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/local/lib/ollama Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-skylakex.so score: 183 Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-icelake.so score: 1463 Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-alderlake.so score: 0 Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-haswell.so score: 55 Feb 28 15:55:41 ai-server ollama[1170322]: ggml_backend_load_best: /usr/local/lib/ollama/libggml-cpu-sandybridge.so score: 20 Feb 28 15:55:41 ai-server ollama[1170322]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.109Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1000 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=16 Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.109Z level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:38621" Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 31603 MiB free Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: loaded meta data with 29 key-value 
pairs and 464 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest)) Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 0: general.architecture str = gemma2 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 1: general.name str = gemma-2-9b-it Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 2: gemma2.context_length u32 = 8192 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 3: gemma2.embedding_length u32 = 3584 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 4: gemma2.block_count u32 = 42 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 5: gemma2.feed_forward_length u32 = 14336 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 6: gemma2.attention.head_count u32 = 16 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 7: gemma2.attention.head_count_kv u32 = 8 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 8: gemma2.attention.layer_norm_rms_epsilon f32 = 0.000001 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 9: gemma2.attention.key_length u32 = 256 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 10: gemma2.attention.value_length u32 = 256 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 11: general.file_type u32 = 2 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 12: gemma2.attn_logit_softcapping f32 = 50.000000 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 13: gemma2.final_logit_softcapping f32 = 30.000000 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 14: gemma2.attention.sliding_window u32 = 4096 Feb 28 15:55:41 ai-server ollama[1170322]: 
llama_model_loader: - kv 15: tokenizer.ggml.model str = llama Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = default Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ... Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 18: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000... Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 2 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 1 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 3 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = true Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 25: tokenizer.ggml.add_eos_token bool = false Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 26: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol... 
Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = false Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - kv 28: general.quantization_version u32 = 2 Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - type f32: 169 tensors Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - type q4_0: 294 tensors Feb 28 15:55:41 ai-server ollama[1170322]: llama_model_loader: - type q6_K: 1 tensors Feb 28 15:55:41 ai-server ollama[1170322]: print_info: file format = GGUF V3 (latest) Feb 28 15:55:41 ai-server ollama[1170322]: print_info: file type = Q4_0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: file size = 5.06 GiB (4.71 BPW) Feb 28 15:55:41 ai-server ollama[1170322]: init_tokenizer: initializing tokenizer for type 1 Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.310Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 45 '<unused38>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 74 '<unused67>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 55 '<unused48>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 99 '<unused92>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 102 '<unused95>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 44 '<unused37>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 26 '<unused19>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 42 '<unused35>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 92 '<unused85>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 90 '<unused83>' 
is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 106 '<start_of_turn>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 88 '<unused81>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 5 '<2mass>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 104 '<unused97>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 68 '<unused61>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 94 '<unused87>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 59 '<unused52>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 2 '<bos>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 25 '<unused18>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 93 '<unused86>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 95 '<unused88>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 76 '<unused69>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 97 '<unused90>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 56 '<unused49>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 81 '<unused74>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 13 '<unused6>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 51 '<unused44>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 47 '<unused40>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 8 '<unused1>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 
103 '<unused96>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 75 '<unused68>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 43 '<unused36>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 79 '<unused72>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 39 '<unused32>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 49 '<unused42>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 41 '<unused34>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 34 '<unused27>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 6 '[@BOS@]' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 40 '<unused33>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 33 '<unused26>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 35 '<unused28>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 32 '<unused25>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 28 '<unused21>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 19 '<unused12>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 80 '<unused73>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 86 '<unused79>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 67 '<unused60>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 9 '<unused2>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 52 '<unused45>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: 
control token: 16 '<unused9>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 98 '<unused91>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 71 '<unused64>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 36 '<unused29>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 0 '<pad>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 11 '<unused4>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 70 '<unused63>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 77 '<unused70>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 64 '<unused57>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 50 '<unused43>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 20 '<unused13>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 73 '<unused66>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 23 '<unused16>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 38 '<unused31>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 21 '<unused14>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 15 '<unused8>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 37 '<unused30>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 14 '<unused7>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 30 '<unused23>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 62 '<unused55>' is not marked as EOG Feb 28 15:55:41 ai-server 
ollama[1170322]: load: control token: 3 '<unk>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 18 '<unused11>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 22 '<unused15>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 66 '<unused59>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 65 '<unused58>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 10 '<unused3>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 105 '<unused98>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 87 '<unused80>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 100 '<unused93>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 63 '<unused56>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 31 '<unused24>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 58 '<unused51>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 84 '<unused77>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 61 '<unused54>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 1 '<eos>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 60 '<unused53>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 91 '<unused84>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 83 '<unused76>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 85 '<unused78>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 27 '<unused20>' is not marked as EOG Feb 28 15:55:41 
ai-server ollama[1170322]: load: control token: 96 '<unused89>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 72 '<unused65>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 53 '<unused46>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 82 '<unused75>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 7 '<unused0>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 4 '<mask>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 101 '<unused94>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 78 '<unused71>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 89 '<unused82>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 69 '<unused62>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 54 '<unused47>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 57 '<unused50>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 12 '<unused5>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 48 '<unused41>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 17 '<unused10>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 24 '<unused17>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 46 '<unused39>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: control token: 29 '<unused22>' is not marked as EOG Feb 28 15:55:41 ai-server ollama[1170322]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect Feb 28 15:55:41 ai-server ollama[1170322]: load: special tokens cache size 
= 108 Feb 28 15:55:41 ai-server ollama[1170322]: load: token to piece cache size = 1.6014 MB Feb 28 15:55:41 ai-server ollama[1170322]: print_info: arch = gemma2 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: vocab_only = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_ctx_train = 8192 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd = 3584 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_layer = 42 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_head = 16 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_head_kv = 8 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_rot = 256 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_swa = 4096 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_head_k = 256 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_head_v = 256 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_gqa = 2 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_k_gqa = 2048 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_embd_v_gqa = 2048 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_norm_eps = 0.0e+00 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_norm_rms_eps = 1.0e-06 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_clamp_kqv = 0.0e+00 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_max_alibi_bias = 0.0e+00 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: f_logit_scale = 0.0e+00 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_ff = 14336 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_expert = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_expert_used = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: causal attn = 1 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: pooling type = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: rope type = 2 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: rope scaling = linear Feb 28 15:55:41 ai-server 
ollama[1170322]: print_info: freq_base_train = 10000.0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: freq_scale_train = 1 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_ctx_orig_yarn = 8192 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: rope_finetuned = unknown Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_d_conv = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_d_inner = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_d_state = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_dt_rank = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: ssm_dt_b_c_rms = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: model type = 9B Feb 28 15:55:41 ai-server ollama[1170322]: print_info: model params = 9.24 B Feb 28 15:55:41 ai-server ollama[1170322]: print_info: general.name = gemma-2-9b-it Feb 28 15:55:41 ai-server ollama[1170322]: print_info: vocab type = SPM Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_vocab = 256000 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: n_merges = 0 Feb 28 15:55:41 ai-server ollama[1170322]: print_info: BOS token = 2 '<bos>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOS token = 1 '<eos>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOT token = 107 '<end_of_turn>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: UNK token = 3 '<unk>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: PAD token = 0 '<pad>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: LF token = 227 '<0x0A>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOG token = 1 '<eos>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: EOG token = 107 '<end_of_turn>' Feb 28 15:55:41 ai-server ollama[1170322]: print_info: max token length = 93 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: loading model tensors, this can take a while... 
(mmap = true) Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 0 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 1 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 2 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 3 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 4 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 5 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 6 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 7 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 8 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 9 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 10 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 11 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 12 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 13 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 14 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 15 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 16 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 17 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 18 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 19 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 20 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 21 assigned to device 
CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 22 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 23 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 24 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 25 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 26 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 27 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 28 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 29 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 30 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 31 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 32 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 33 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 34 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 35 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 36 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 37 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 38 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 39 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 40 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 41 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: layer 42 assigned to device CUDA0 Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: tensor 
'token_embd.weight' (q6_K) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: offloading 42 repeating layers to GPU
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: offloading output layer to GPU
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: offloaded 43/43 layers to GPU
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: CPU_Mapped model buffer size = 717.77 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: load_tensors: CUDA0 model buffer size = 5185.21 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: time=2025-02-28T15:55:41.812Z level=DEBUG source=server.go:602 msg="model load progress 0.81"
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_seq_max = 4
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ctx = 8192
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ctx_per_seq = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_batch = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ubatch = 512
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: flash_attn = 1
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: freq_base = 10000.0
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: freq_scale = 1
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 42, can_shift = 1
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 0: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 1: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 2: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 3: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 4: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 5: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 6: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 7: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 8: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 9: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 10: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 11: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 12: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 13: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 14: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 15: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 16: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 17: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 18: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 19: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 20: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 21: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 22: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 23: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 24: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 25: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 26: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 27: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 28: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 29: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 30: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 31: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 32: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 33: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 34: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 35: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 36: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 37: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 38: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 39: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 40: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: layer 41: n_embd_k_gqa = 2048, n_embd_v_gqa = 2048
Feb 28 15:55:41 ai-server ollama[1170322]: llama_kv_cache_init: CUDA0 KV buffer size = 2688.00 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: KV self size = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
Feb 28 15:55:41 ai-server ollama[1170322]: llama_init_from_model: CUDA_Host output buffer size = 3.96 MiB
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model: CUDA0 compute buffer size = 507.00 MiB
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model: CUDA_Host compute buffer size = 104.01 MiB
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model: graph nodes = 1398
Feb 28 15:55:42 ai-server ollama[1170322]: llama_init_from_model: graph splits = 86
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=INFO source=server.go:596 msg="llama runner started in 1.00 seconds"
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:55:42 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:42 | 200 | 1.572278314s | 127.0.0.1 | POST "/api/generate"
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:467 msg="context for request finished"
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:55:42 ai-server ollama[1170322]: time=2025-02-28T15:55:42.063Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 15:55:48 ai-server ollama[1170322]: time=2025-02-28T15:55:48.490Z level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373
Feb 28 15:55:48 ai-server ollama[1170322]: time=2025-02-28T15:55:48.490Z level=DEBUG source=routes.go:1505 msg="chat request" images=0 prompt="<start_of_turn>user\n5+8/161=?<end_of_turn>\n<start_of_turn>model\n"
Feb 28 15:55:48 ai-server ollama[1170322]: time=2025-02-28T15:55:48.491Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=17 used=0 remaining=17
Feb 28 15:55:53 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:55:53 | 200 | 5.459593361s | 127.0.0.1 | POST "/api/chat"
Feb 28 15:55:53 ai-server ollama[1170322]: time=2025-02-28T15:55:53.925Z level=DEBUG source=sched.go:408 msg="context for request finished"
Feb 28 15:55:53 ai-server ollama[1170322]: time=2025-02-28T15:55:53.925Z level=DEBUG source=sched.go:340 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 duration=5m0s
Feb 28 15:55:53 ai-server ollama[1170322]: time=2025-02-28T15:55:53.925Z level=DEBUG source=sched.go:358 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 refCount=0
Feb 28 15:56:01 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:56:01 | 200 | 16.17µs | 127.0.0.1 | HEAD "/"
Feb 28 15:56:01 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:56:01 | 200 | 56.589µs | 127.0.0.1 | GET "/api/ps"
Feb 28 15:56:04 ai-server ollama[1170322]: [GIN] 2025/02/28 - 15:56:04 | 200 | 24.58µs | 127.0.0.1 | GET "/api/version"
```

@MMaturax commented on GitHub (Feb 28, 2025):

### Observations from Logs

1. **Graph Splits**:
   - 0.5.12: `graph splits = 2`
   - 0.5.13-rc1: `graph splits = 86`
2. **Compute Buffer Size**:
   - 0.5.12: `CUDA_Host compute buffer size = 39.01 MiB`
   - 0.5.13-rc1: `CUDA_Host compute buffer size = 104.01 MiB`
3. **CUDA Architecture Support**:
   - 0.5.12: `CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900`
   - 0.5.13-rc1: `CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1000`
4. **Request Processing Time**:
   - 0.5.12: `/api/chat` request processing time: `592.031894ms`
   - 0.5.13-rc1: `/api/chat` request processing time: `5.459593361s`
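For anyone reproducing these numbers: the eval rate in the screenshots comes from `ollama run <model> --verbose`, which prints timing stats after each response. A minimal sketch for pulling out just the number so two builds can be compared side by side (the `extract_eval_rate` helper and the sample stats line are illustrative, not taken from the logs above):

```shell
# Extract the tokens/s figure from ollama's --verbose stats output.
extract_eval_rate() {
  # ollama prints a line like: "eval rate:            149.23 tokens/s"
  grep -oE 'eval rate:[[:space:]]+[0-9.]+' | awk '{print $NF}'
}

# Full A/B usage would be (requires ollama and the model locally):
#   ollama run gemma2:9b --verbose "Why is the sky blue?" 2>&1 | extract_eval_rate
# The parsing step itself, demonstrated on a sample stats line:
echo 'eval rate:            149.23 tokens/s' | extract_eval_rate
```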

@MMaturax commented on GitHub (Feb 28, 2025):

Additionally, I'm not sure if it makes a difference, but I didn't compile it from source. I installed it using the following command:

```
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=v0.5.13-rc1 sh
```
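Since the install script accepts a version pin, downgrading for an A/B comparison is the same one-liner with the previous release pinned (a sketch of the same mechanism, assuming `v0.5.12` is the tag of the prior release; needs root and network access):

```shell
# Roll back to 0.5.12 using the same install script and version pin.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=v0.5.12 sh
```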


@MMaturax commented on GitHub (Feb 28, 2025):

The same issue is present on Windows 11 as well.

0.5.12 Windows 11 RTX 4070 TI SUPER - 5900X

![Image](https://github.com/user-attachments/assets/0447b61c-faca-44b3-a5e9-4d4bd646fc83)

0.5.13-rc1

![Image](https://github.com/user-attachments/assets/78882ea1-e73a-4179-b8b8-232b36630ce5)


@mxyng commented on GitHub (Feb 28, 2025):

Thanks for reporting the issue. It appears there are some changes impacting flash attention. Can you verify this by rerunning your test prompt without flash attention?

Never mind, I found the problem. 0.5.12 was built with flash attention, which was on by default. Upstream added a flag to optionally disable flash attention, but since we don't use their default values, this effectively disabled flash attention. Re-enabling flash attention explicitly should resolve this issue.
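On a standard Linux systemd install (as in the logs above), the explicit setting can be applied as a service environment override. This is a sketch following the usual pattern for setting Ollama server environment variables, not an official fix from this thread:

```shell
# Create an override for the ollama service:
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_FLASH_ATTENTION=1"
# Then reload units and restart so the runner picks up the variable:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```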


@MMaturax commented on GitHub (Mar 1, 2025):

Thanks for the update!

Setting `OLLAMA_FLASH_ATTENTION=0` restored the speed to normal. What is the reason for this? Should we keep flash attention disabled in this release, unlike in previous versions?

![Image](https://github.com/user-attachments/assets/cda56de0-d713-4df2-9072-50488dac2815)

Reference: github-starred/ollama#68195