[GH-ISSUE #10215] load_tensors: tensor 'token_embd.weight' cannot be used with preferred buffer type CUDA_Host, using CPU instead #53214

Closed
opened 2026-04-29 02:23:35 -05:00 by GiteaMirror · 2 comments

Originally created by @AlessandroSpallina on GitHub (Apr 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10215

What is the issue?

Hi all,
I'm using Ollama v0.6.5 (but I tested all of the 0.6.x releases) on Ubuntu 24.04 with an NVIDIA A100 80GB. Whenever I query the LLM (tested Qwen2.5 72B, Qwen2.5 32B, and Command A 111B), Ollama stays blocked for 10-20 minutes on the line `load_tensors: tensor 'token_embd.weight' cannot be used with preferred buffer type CUDA_Host, using CPU instead`, and then it works fine. Full logs below.

Any idea how to avoid this 10-20 minute delay? I also tried a smaller 32B model, so I don't think this is related to low VRAM. I would like a VM that I can spin up on demand and immediately run inference with Ollama; having to wait 10-20 minutes on every start is quite annoying.
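
One possible mitigation, assuming (unconfirmed) that the delay comes from mmap page-faulting the ~62 GiB model blob in from slow disk rather than from the `CUDA_Host` message itself (which is informational): prewarm the blob into the Linux page cache before the first request, and/or disable mmap through Ollama's documented `use_mmap` option. A minimal sketch, with the blob path taken from the logs below and a placeholder model tag:

```python
# Untested workaround sketch: prewarm the model blob so the mmap'd load does
# not fault ~62 GiB in from slow disk one page at a time, then send a trivial
# request to trigger the model load. The blob path is from the logs below;
# the model tag is a placeholder for whatever `ollama list` shows locally.
import json
import urllib.request

BLOB = "/root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f"

# Sequentially read the whole file so the kernel caches it
# (equivalent to `cat "$BLOB" > /dev/null`).
with open(BLOB, "rb") as f:
    while f.read(1 << 24):  # 16 MiB chunks
        pass

# Warm-up request. `use_mmap` is a documented Ollama runner option; whether
# disabling it helps in this case is an assumption, not a confirmed fix.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "command-a",  # placeholder tag
        "prompt": "ping",
        "stream": False,
        "options": {"use_mmap": False},
    }).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read(200))
```

If prewarming alone removes the delay, storage read throughput is the likely culprit, and the `load_tensors` line is simply the last message printed before the long read begins.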

Relevant log output

2025/04/10 07:21:29 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-04-10T07:21:29.412Z level=INFO source=images.go:458 msg="total blobs: 13"
time=2025-04-10T07:21:29.413Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-10T07:21:29.418Z level=INFO source=routes.go:1298 msg="Listening on [::]:11434 (version 0.6.5)"
time=2025-04-10T07:21:29.419Z level=DEBUG source=sched.go:107 msg="starting llm scheduler"
time=2025-04-10T07:21:29.427Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-10T07:21:29.429Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-04-10T07:21:29.429Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-04-10T07:21:29.429Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-04-10T07:21:29.433Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
dlsym: cuInit - 0x77c16fd0de00
dlsym: cuDriverGetVersion - 0x77c16fd0de20
dlsym: cuDeviceGetCount - 0x77c16fd0de60
dlsym: cuDeviceGet - 0x77c16fd0de40
dlsym: cuDeviceGetAttribute - 0x77c16fd0df40
dlsym: cuDeviceGetUuid - 0x77c16fd0dea0
dlsym: cuDeviceGetName - 0x77c16fd0de80
dlsym: cuCtxCreate_v3 - 0x77c16fd0e120
dlsym: cuMemGetInfo_v2 - 0x77c16fd0e8a0
dlsym: cuCtxDestroy - 0x77c16fd6c9f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-04-10T07:21:32.107Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
[GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20] CUDA totalMem 81153 mb
[GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20] CUDA freeMem 80727 mb
[GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20] Compute Capability 8.0
time=2025-04-10T07:21:32.500Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-04-10T07:21:32.500Z level=INFO source=types.go:130 msg="inference compute" id=GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20 library=cuda variant=v12 compute=8.0 driver=12.8 name="NVIDIA A100 80GB PCIe" total="79.3 GiB" available="78.8 GiB"
time=2025-04-10T07:25:50.071Z level=WARN source=types.go:524 msg="invalid option provided" option=tfs_z
time=2025-04-10T07:25:50.071Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="216.3 GiB" before.free="213.8 GiB" before.free_swap="0 B" now.total="216.3 GiB" now.free="212.5 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
dlsym: cuInit - 0x77c16fd0de00
dlsym: cuDriverGetVersion - 0x77c16fd0de20
dlsym: cuDeviceGetCount - 0x77c16fd0de60
dlsym: cuDeviceGet - 0x77c16fd0de40
dlsym: cuDeviceGetAttribute - 0x77c16fd0df40
dlsym: cuDeviceGetUuid - 0x77c16fd0dea0
dlsym: cuDeviceGetName - 0x77c16fd0de80
dlsym: cuCtxCreate_v3 - 0x77c16fd0e120
dlsym: cuMemGetInfo_v2 - 0x77c16fd0e8a0
dlsym: cuCtxDestroy - 0x77c16fd6c9f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-04-10T07:25:50.369Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.3 GiB" before.free="78.8 GiB" now.total="79.3 GiB" now.free="78.8 GiB" now.used="426.1 MiB"
releasing cuda driver library
time=2025-04-10T07:25:50.369Z level=DEBUG source=sched.go:183 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-04-10T07:25:50.430Z level=DEBUG source=sched.go:226 msg="loading first model" model=/root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f
time=2025-04-10T07:25:50.430Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[78.8 GiB]"
time=2025-04-10T07:25:50.430Z level=WARN source=ggml.go:152 msg="key not found" key=cohere2.vision.block_count default=0
time=2025-04-10T07:25:50.431Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="216.3 GiB" before.free="212.5 GiB" before.free_swap="0 B" now.total="216.3 GiB" now.free="212.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
dlsym: cuInit - 0x77c16fd0de00
dlsym: cuDriverGetVersion - 0x77c16fd0de20
dlsym: cuDeviceGetCount - 0x77c16fd0de60
dlsym: cuDeviceGet - 0x77c16fd0de40
dlsym: cuDeviceGetAttribute - 0x77c16fd0df40
dlsym: cuDeviceGetUuid - 0x77c16fd0dea0
dlsym: cuDeviceGetName - 0x77c16fd0de80
dlsym: cuCtxCreate_v3 - 0x77c16fd0e120
dlsym: cuMemGetInfo_v2 - 0x77c16fd0e8a0
dlsym: cuCtxDestroy - 0x77c16fd6c9f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-04-10T07:25:50.725Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.3 GiB" before.free="78.8 GiB" now.total="79.3 GiB" now.free="78.8 GiB" now.used="426.1 MiB"
releasing cuda driver library
time=2025-04-10T07:25:50.725Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[78.8 GiB]"
time=2025-04-10T07:25:50.725Z level=WARN source=ggml.go:152 msg="key not found" key=cohere2.vision.block_count default=0
time=2025-04-10T07:25:50.725Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="216.3 GiB" before.free="212.4 GiB" before.free_swap="0 B" now.total="216.3 GiB" now.free="212.2 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
dlsym: cuInit - 0x77c16fd0de00
dlsym: cuDriverGetVersion - 0x77c16fd0de20
dlsym: cuDeviceGetCount - 0x77c16fd0de60
dlsym: cuDeviceGet - 0x77c16fd0de40
dlsym: cuDeviceGetAttribute - 0x77c16fd0df40
dlsym: cuDeviceGetUuid - 0x77c16fd0dea0
dlsym: cuDeviceGetName - 0x77c16fd0de80
dlsym: cuCtxCreate_v3 - 0x77c16fd0e120
dlsym: cuMemGetInfo_v2 - 0x77c16fd0e8a0
dlsym: cuCtxDestroy - 0x77c16fd6c9f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-04-10T07:25:51.018Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.3 GiB" before.free="78.8 GiB" now.total="79.3 GiB" now.free="78.8 GiB" now.used="426.1 MiB"
releasing cuda driver library
time=2025-04-10T07:25:51.019Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="216.3 GiB" before.free="212.2 GiB" before.free_swap="0 B" now.total="216.3 GiB" now.free="212.2 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
dlsym: cuInit - 0x77c16fd0de00
dlsym: cuDriverGetVersion - 0x77c16fd0de20
dlsym: cuDeviceGetCount - 0x77c16fd0de60
dlsym: cuDeviceGet - 0x77c16fd0de40
dlsym: cuDeviceGetAttribute - 0x77c16fd0df40
dlsym: cuDeviceGetUuid - 0x77c16fd0dea0
dlsym: cuDeviceGetName - 0x77c16fd0de80
dlsym: cuCtxCreate_v3 - 0x77c16fd0e120
dlsym: cuMemGetInfo_v2 - 0x77c16fd0e8a0
dlsym: cuCtxDestroy - 0x77c16fd6c9f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-04-10T07:25:51.307Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.3 GiB" before.free="78.8 GiB" now.total="79.3 GiB" now.free="78.8 GiB" now.used="426.1 MiB"
releasing cuda driver library
time=2025-04-10T07:25:51.307Z level=INFO source=server.go:105 msg="system memory" total="216.3 GiB" free="212.2 GiB" free_swap="0 B"
time=2025-04-10T07:25:51.307Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[78.8 GiB]"
time=2025-04-10T07:25:51.307Z level=WARN source=ggml.go:152 msg="key not found" key=cohere2.vision.block_count default=0
time=2025-04-10T07:25:51.309Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="216.3 GiB" before.free="212.2 GiB" before.free_swap="0 B" now.total="216.3 GiB" now.free="212.2 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.570.86.15
dlsym: cuInit - 0x77c16fd0de00
dlsym: cuDriverGetVersion - 0x77c16fd0de20
dlsym: cuDeviceGetCount - 0x77c16fd0de60
dlsym: cuDeviceGet - 0x77c16fd0de40
dlsym: cuDeviceGetAttribute - 0x77c16fd0df40
dlsym: cuDeviceGetUuid - 0x77c16fd0dea0
dlsym: cuDeviceGetName - 0x77c16fd0de80
dlsym: cuCtxCreate_v3 - 0x77c16fd0e120
dlsym: cuMemGetInfo_v2 - 0x77c16fd0e8a0
dlsym: cuCtxDestroy - 0x77c16fd6c9f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f30
CUDA driver version: 12.8
calling cuDeviceGetCount
device count 1
time=2025-04-10T07:25:51.599Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.3 GiB" before.free="78.8 GiB" now.total="79.3 GiB" now.free="78.8 GiB" now.used="426.1 MiB"
releasing cuda driver library
time=2025-04-10T07:25:51.600Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=59 layers.split="" memory.available="[78.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="86.0 GiB" memory.required.partial="78.1 GiB" memory.required.kv="7.3 GiB" memory.required.allocations="[78.1 GiB]" memory.weights.total="62.5 GiB" memory.weights.repeating="60.1 GiB" memory.weights.nonrepeating="2.4 GiB" memory.graph.full="14.6 GiB" memory.graph.partial="14.6 GiB"
time=2025-04-10T07:25:51.600Z level=INFO source=server.go:185 msg="enabling flash attention"
time=2025-04-10T07:25:51.600Z level=WARN source=server.go:193 msg="kv cache type not supported by model" type=""
time=2025-04-10T07:25:51.600Z level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
llama_model_loader: loaded meta data with 36 key-value pairs and 514 tensors from /root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = cohere2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = A Model
llama_model_loader: - kv   3:                         general.size_label str              = 111B
llama_model_loader: - kv   4:                        cohere2.block_count u32              = 64
llama_model_loader: - kv   5:                     cohere2.context_length u32              = 16384
llama_model_loader: - kv   6:                   cohere2.embedding_length u32              = 12288
llama_model_loader: - kv   7:                cohere2.feed_forward_length u32              = 36864
llama_model_loader: - kv   8:               cohere2.attention.head_count u32              = 96
llama_model_loader: - kv   9:            cohere2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                     cohere2.rope.freq_base f32              = 50000.000000
llama_model_loader: - kv  11:       cohere2.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  12:               cohere2.attention.key_length u32              = 128
llama_model_loader: - kv  13:             cohere2.attention.value_length u32              = 128
llama_model_loader: - kv  14:                        cohere2.logit_scale f32              = 0.250000
llama_model_loader: - kv  15:           cohere2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  16:                         cohere2.vocab_size u32              = 256000
llama_model_loader: - kv  17:               cohere2.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:                  cohere2.rope.scaling.type str              = none
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = command-r
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 5
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 255001
llama_model_loader: - kv  26:            tokenizer.ggml.unknown_token_id u32              = 1
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:           tokenizer.chat_template.tool_use str              = {%- macro document_turn(documents) -%...
llama_model_loader: - kv  31:                tokenizer.chat_template.rag str              = {% set tools = [] %}\n{%- macro docume...
llama_model_loader: - kv  32:                   tokenizer.chat_templates arr[str,2]       = ["tool_use", "rag"]
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% if documents %}\n{% set tools = [] ...
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - kv  35:                          general.file_type u32              = 15
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  384 tensors
llama_model_loader: - type q6_K:   65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 62.51 GiB (4.84 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 255032 '<|END_OF_MIDDLE_FIM_TOKEN|>' is not marked as EOG
load: control token: 255030 '<|BEGINNING_OF_MIDDLE_FIM_TOKEN|>' is not marked as EOG
load: control token: 255029 '<|BEGINNING_OF_PREFIX_FIM_TOKEN|>' is not marked as EOG
load: control token: 255026 '<|END_TOOL_RESULT|>' is not marked as EOG
load: control token: 255021 '<|START_RESPONSE|>' is not marked as EOG
load: control token: 255020 '<|END_THINKING|>' is not marked as EOG
load: control token: 255019 '<|START_THINKING|>' is not marked as EOG
load: control token: 255016 '<|USER_7_TOKEN|>' is not marked as EOG
load: control token: 255008 '<|SYSTEM_TOKEN|>' is not marked as EOG
load: control token: 255007 '<|CHATBOT_TOKEN|>' is not marked as EOG
load: control token: 255003 '<|NO_TOKEN|>' is not marked as EOG
load: control token: 255001 '<|END_OF_TURN_TOKEN|>' is not marked as EOG
load: control token: 255000 '<|START_OF_TURN_TOKEN|>' is not marked as EOG
load: control token: 255009 '<|USER_0_TOKEN|>' is not marked as EOG
load: control token: 255018 '<|USER_9_TOKEN|>' is not marked as EOG
load: control token: 255006 '<|USER_TOKEN|>' is not marked as EOG
load: control token: 255013 '<|USER_4_TOKEN|>' is not marked as EOG
load: control token: 255027 '<|EXTRA_8_TOKEN|>' is not marked as EOG
load: control token: 255005 '<|BAD_TOKEN|>' is not marked as EOG
load: control token:      7 '<EOP_TOKEN>' is not marked as EOG
load: control token:      2 '<CLS>' is not marked as EOG
load: control token: 255002 '<|YES_TOKEN|>' is not marked as EOG
load: control token:      3 '<SEP>' is not marked as EOG
load: control token: 255022 '<|END_RESPONSE|>' is not marked as EOG
load: control token: 255014 '<|USER_5_TOKEN|>' is not marked as EOG
load: control token:      6 '<EOS_TOKEN>' is not marked as EOG
load: control token: 255004 '<|GOOD_TOKEN|>' is not marked as EOG
load: control token:      1 '<UNK>' is not marked as EOG
load: control token:      4 '<MASK_TOKEN>' is not marked as EOG
load: control token: 255017 '<|USER_8_TOKEN|>' is not marked as EOG
load: control token: 255024 '<|END_ACTION|>' is not marked as EOG
load: control token: 255023 '<|START_ACTION|>' is not marked as EOG
load: control token: 255012 '<|USER_3_TOKEN|>' is not marked as EOG
load: control token: 255010 '<|USER_1_TOKEN|>' is not marked as EOG
load: control token: 255028 '<|NEW_FILE|>' is not marked as EOG
load: control token: 255015 '<|USER_6_TOKEN|>' is not marked as EOG
load: control token: 255011 '<|USER_2_TOKEN|>' is not marked as EOG
load: control token:      5 '<BOS_TOKEN>' is not marked as EOG
load: control token: 255025 '<|START_TOOL_RESULT|>' is not marked as EOG
load: control token: 255031 '<|BEGINNING_OF_SUFFIX_FIM_TOKEN|>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 41
load: token to piece cache size = 1.8428 MB
print_info: arch             = cohere2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 111.06 B
print_info: general.name     = A Model
print_info: vocab type       = BPE
print_info: n_vocab          = 256000
print_info: n_merges         = 253333
print_info: BOS token        = 5 '<BOS_TOKEN>'
print_info: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: UNK token        = 1 '<UNK>'
print_info: PAD token        = 0 '<PAD>'
print_info: LF token         = 206 'Ċ'
print_info: FIM PAD token    = 0 '<PAD>'
print_info: EOG token        = 0 '<PAD>'
print_info: EOG token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: max token length = 1024
llama_model_load: vocab only - skipping tensors
time=2025-04-10T07:25:52.071Z level=DEBUG source=server.go:335 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
time=2025-04-10T07:25:52.071Z level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
time=2025-04-10T07:25:52.072Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f --ctx-size 30000 --batch-size 512 --n-gpu-layers 59 --verbose --threads 24 --flash-attn --parallel 1 --port 33921"
time=2025-04-10T07:25:52.072Z level=DEBUG source=server.go:423 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama/cuda_v12:/usr/lib/ollama CUDA_VISIBLE_DEVICES=GPU-de5b7935-bf8b-f645-6fc1-3caf2ac91a20]"
time=2025-04-10T07:25:52.072Z level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-10T07:25:52.072Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-10T07:25:52.076Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-10T07:25:52.085Z level=INFO source=runner.go:853 msg="starting go runner"
time=2025-04-10T07:25:52.086Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-04-10T07:25:54.071Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib
time=2025-04-10T07:25:54.071Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib64
time=2025-04-10T07:25:54.071Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 55
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 0
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-04-10T07:25:54.159Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-04-10T07:25:54.162Z level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:33921"
time=2025-04-10T07:25:54.333Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA A100 80GB PCIe) - 80727 MiB free
llama_model_loader: loaded meta data with 36 key-value pairs and 514 tensors from /root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = cohere2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = A Model
llama_model_loader: - kv   3:                         general.size_label str              = 111B
llama_model_loader: - kv   4:                        cohere2.block_count u32              = 64
llama_model_loader: - kv   5:                     cohere2.context_length u32              = 16384
llama_model_loader: - kv   6:                   cohere2.embedding_length u32              = 12288
llama_model_loader: - kv   7:                cohere2.feed_forward_length u32              = 36864
llama_model_loader: - kv   8:               cohere2.attention.head_count u32              = 96
llama_model_loader: - kv   9:            cohere2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:                     cohere2.rope.freq_base f32              = 50000.000000
llama_model_loader: - kv  11:       cohere2.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  12:               cohere2.attention.key_length u32              = 128
llama_model_loader: - kv  13:             cohere2.attention.value_length u32              = 128
llama_model_loader: - kv  14:                        cohere2.logit_scale f32              = 0.250000
llama_model_loader: - kv  15:           cohere2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  16:                         cohere2.vocab_size u32              = 256000
llama_model_loader: - kv  17:               cohere2.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:                  cohere2.rope.scaling.type str              = none
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = command-r
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 5
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 255001
llama_model_loader: - kv  26:            tokenizer.ggml.unknown_token_id u32              = 1
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  28:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:           tokenizer.chat_template.tool_use str              = {%- macro document_turn(documents) -%...
llama_model_loader: - kv  31:                tokenizer.chat_template.rag str              = {% set tools = [] %}\n{%- macro docume...
llama_model_loader: - kv  32:                   tokenizer.chat_templates arr[str,2]       = ["tool_use", "rag"]
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {% if documents %}\n{% set tools = [] ...
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - kv  35:                          general.file_type u32              = 15
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  384 tensors
llama_model_loader: - type q6_K:   65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 62.51 GiB (4.84 BPW)
init_tokenizer: initializing tokenizer for type 2
load: control token: 255032 '<|END_OF_MIDDLE_FIM_TOKEN|>' is not marked as EOG
load: control token: 255030 '<|BEGINNING_OF_MIDDLE_FIM_TOKEN|>' is not marked as EOG
load: control token: 255029 '<|BEGINNING_OF_PREFIX_FIM_TOKEN|>' is not marked as EOG
load: control token: 255026 '<|END_TOOL_RESULT|>' is not marked as EOG
load: control token: 255021 '<|START_RESPONSE|>' is not marked as EOG
load: control token: 255020 '<|END_THINKING|>' is not marked as EOG
load: control token: 255019 '<|START_THINKING|>' is not marked as EOG
load: control token: 255016 '<|USER_7_TOKEN|>' is not marked as EOG
load: control token: 255008 '<|SYSTEM_TOKEN|>' is not marked as EOG
load: control token: 255007 '<|CHATBOT_TOKEN|>' is not marked as EOG
load: control token: 255003 '<|NO_TOKEN|>' is not marked as EOG
load: control token: 255001 '<|END_OF_TURN_TOKEN|>' is not marked as EOG
load: control token: 255000 '<|START_OF_TURN_TOKEN|>' is not marked as EOG
load: control token: 255009 '<|USER_0_TOKEN|>' is not marked as EOG
load: control token: 255018 '<|USER_9_TOKEN|>' is not marked as EOG
load: control token: 255006 '<|USER_TOKEN|>' is not marked as EOG
load: control token: 255013 '<|USER_4_TOKEN|>' is not marked as EOG
load: control token: 255027 '<|EXTRA_8_TOKEN|>' is not marked as EOG
load: control token: 255005 '<|BAD_TOKEN|>' is not marked as EOG
load: control token:      7 '<EOP_TOKEN>' is not marked as EOG
load: control token:      2 '<CLS>' is not marked as EOG
load: control token: 255002 '<|YES_TOKEN|>' is not marked as EOG
load: control token:      3 '<SEP>' is not marked as EOG
load: control token: 255022 '<|END_RESPONSE|>' is not marked as EOG
load: control token: 255014 '<|USER_5_TOKEN|>' is not marked as EOG
load: control token:      6 '<EOS_TOKEN>' is not marked as EOG
load: control token: 255004 '<|GOOD_TOKEN|>' is not marked as EOG
load: control token:      1 '<UNK>' is not marked as EOG
load: control token:      4 '<MASK_TOKEN>' is not marked as EOG
load: control token: 255017 '<|USER_8_TOKEN|>' is not marked as EOG
load: control token: 255024 '<|END_ACTION|>' is not marked as EOG
load: control token: 255023 '<|START_ACTION|>' is not marked as EOG
load: control token: 255012 '<|USER_3_TOKEN|>' is not marked as EOG
load: control token: 255010 '<|USER_1_TOKEN|>' is not marked as EOG
load: control token: 255028 '<|NEW_FILE|>' is not marked as EOG
load: control token: 255015 '<|USER_6_TOKEN|>' is not marked as EOG
load: control token: 255011 '<|USER_2_TOKEN|>' is not marked as EOG
load: control token:      5 '<BOS_TOKEN>' is not marked as EOG
load: control token: 255025 '<|START_TOOL_RESULT|>' is not marked as EOG
load: control token: 255031 '<|BEGINNING_OF_SUFFIX_FIM_TOKEN|>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 41
load: token to piece cache size = 1.8428 MB
print_info: arch             = cohere2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 16384
print_info: n_embd           = 12288
print_info: n_layer          = 64
print_info: n_head           = 96
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 4096
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 12
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 1.0e-05
print_info: f_norm_rms_eps   = 0.0e+00
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 2.5e-01
print_info: n_ff             = 36864
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = none
print_info: freq_base_train  = 50000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 16384
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = ?B
print_info: model params     = 111.06 B
print_info: general.name     = A Model
print_info: vocab type       = BPE
print_info: n_vocab          = 256000
print_info: n_merges         = 253333
print_info: BOS token        = 5 '<BOS_TOKEN>'
print_info: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: UNK token        = 1 '<UNK>'
print_info: PAD token        = 0 '<PAD>'
print_info: LF token         = 206 'Ċ'
print_info: FIM PAD token    = 0 '<PAD>'
print_info: EOG token        = 0 '<PAD>'
print_info: EOG token        = 255001 '<|END_OF_TURN_TOKEN|>'
print_info: max token length = 1024
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device CPU
load_tensors: layer   1 assigned to device CPU
load_tensors: layer   2 assigned to device CPU
load_tensors: layer   3 assigned to device CPU
load_tensors: layer   4 assigned to device CPU
load_tensors: layer   5 assigned to device CUDA0
load_tensors: layer   6 assigned to device CUDA0
load_tensors: layer   7 assigned to device CUDA0
load_tensors: layer   8 assigned to device CUDA0
load_tensors: layer   9 assigned to device CUDA0
load_tensors: layer  10 assigned to device CUDA0
load_tensors: layer  11 assigned to device CUDA0
load_tensors: layer  12 assigned to device CUDA0
load_tensors: layer  13 assigned to device CUDA0
load_tensors: layer  14 assigned to device CUDA0
load_tensors: layer  15 assigned to device CUDA0
load_tensors: layer  16 assigned to device CUDA0
load_tensors: layer  17 assigned to device CUDA0
load_tensors: layer  18 assigned to device CUDA0
load_tensors: layer  19 assigned to device CUDA0
load_tensors: layer  20 assigned to device CUDA0
load_tensors: layer  21 assigned to device CUDA0
load_tensors: layer  22 assigned to device CUDA0
load_tensors: layer  23 assigned to device CUDA0
load_tensors: layer  24 assigned to device CUDA0
load_tensors: layer  25 assigned to device CUDA0
load_tensors: layer  26 assigned to device CUDA0
load_tensors: layer  27 assigned to device CUDA0
load_tensors: layer  28 assigned to device CUDA0
load_tensors: layer  29 assigned to device CUDA0
load_tensors: layer  30 assigned to device CUDA0
load_tensors: layer  31 assigned to device CUDA0
load_tensors: layer  32 assigned to device CUDA0
load_tensors: layer  33 assigned to device CUDA0
load_tensors: layer  34 assigned to device CUDA0
load_tensors: layer  35 assigned to device CUDA0
load_tensors: layer  36 assigned to device CUDA0
load_tensors: layer  37 assigned to device CUDA0
load_tensors: layer  38 assigned to device CUDA0
load_tensors: layer  39 assigned to device CUDA0
load_tensors: layer  40 assigned to device CUDA0
load_tensors: layer  41 assigned to device CUDA0
load_tensors: layer  42 assigned to device CUDA0
load_tensors: layer  43 assigned to device CUDA0
load_tensors: layer  44 assigned to device CUDA0
load_tensors: layer  45 assigned to device CUDA0
load_tensors: layer  46 assigned to device CUDA0
load_tensors: layer  47 assigned to device CUDA0
load_tensors: layer  48 assigned to device CUDA0
load_tensors: layer  49 assigned to device CUDA0
load_tensors: layer  50 assigned to device CUDA0
load_tensors: layer  51 assigned to device CUDA0
load_tensors: layer  52 assigned to device CUDA0
load_tensors: layer  53 assigned to device CUDA0
load_tensors: layer  54 assigned to device CUDA0
load_tensors: layer  55 assigned to device CUDA0
load_tensors: layer  56 assigned to device CUDA0
load_tensors: layer  57 assigned to device CUDA0
load_tensors: layer  58 assigned to device CUDA0
load_tensors: layer  59 assigned to device CUDA0
load_tensors: layer  60 assigned to device CUDA0
load_tensors: layer  61 assigned to device CUDA0
load_tensors: layer  62 assigned to device CUDA0
load_tensors: layer  63 assigned to device CUDA0
load_tensors: layer  64 assigned to device CPU
load_tensors: tensor 'token_embd.weight' (q6_K) (and 42 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead

OS

Ubuntu 24.04.2 LTS

GPU

NVIDIA A100

CPU

AMD EPYC 7V13 64-Core Processor

Ollama version

0.6.5

token to piece cache size = 1.8428 MB print_info: arch = cohere2 print_info: vocab_only = 0 print_info: n_ctx_train = 16384 print_info: n_embd = 12288 print_info: n_layer = 64 print_info: n_head = 96 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 4096 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 12 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 1.0e-05 print_info: f_norm_rms_eps = 0.0e+00 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 2.5e-01 print_info: n_ff = 36864 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = none print_info: freq_base_train = 50000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 16384 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = ?B print_info: model params = 111.06 B print_info: general.name = A Model print_info: vocab type = BPE print_info: n_vocab = 256000 print_info: n_merges = 253333 print_info: BOS token = 5 '<BOS_TOKEN>' print_info: EOS token = 255001 '<|END_OF_TURN_TOKEN|>' print_info: UNK token = 1 '<UNK>' print_info: PAD token = 0 '<PAD>' print_info: LF token = 206 'Ċ' print_info: FIM PAD token = 0 '<PAD>' print_info: EOG token = 0 '<PAD>' print_info: EOG token = 255001 '<|END_OF_TURN_TOKEN|>' print_info: max token length = 1024 load_tensors: loading model tensors, this can take a while... (mmap = true) load_tensors: layer 0 assigned to device CPU load_tensors: layer 1 assigned to device CPU load_tensors: layer 2 assigned to device CPU load_tensors: layer 3 assigned to device CPU load_tensors: layer 4 assigned to device CPU load_tensors: layer 5 assigned to device CUDA0 load_tensors: layer 6 assigned to device CUDA0 load_tensors: layer 7 assigned to device CUDA0 load_tensors: layer 8 assigned to device CUDA0 load_tensors: layer 9 assigned to device CUDA0 load_tensors: layer 10 assigned to device CUDA0 load_tensors: layer 11 assigned to device CUDA0 load_tensors: layer 12 assigned to device CUDA0 load_tensors: layer 13 assigned to device CUDA0 load_tensors: layer 14 assigned to device CUDA0 load_tensors: layer 15 assigned to device CUDA0 load_tensors: layer 16 assigned to device CUDA0 load_tensors: layer 17 assigned to device CUDA0 load_tensors: layer 18 assigned to device CUDA0 load_tensors: layer 19 assigned to device CUDA0 load_tensors: layer 20 assigned to device CUDA0 load_tensors: layer 21 assigned to device CUDA0 load_tensors: layer 22 assigned to device CUDA0 load_tensors: layer 23 assigned to device CUDA0 load_tensors: layer 24 assigned to device CUDA0 load_tensors: layer 25 assigned to device CUDA0 load_tensors: layer 26 assigned to device CUDA0 load_tensors: layer 27 assigned to device CUDA0 load_tensors: layer 28 assigned to device CUDA0 load_tensors: layer 29 assigned to device CUDA0 load_tensors: layer 30 assigned to device CUDA0 load_tensors: layer 31 assigned to device CUDA0 load_tensors: layer 32 assigned to device CUDA0 load_tensors: layer 33 assigned to device CUDA0 load_tensors: layer 34 assigned to device CUDA0 load_tensors: layer 35 assigned to device CUDA0 load_tensors: layer 36 assigned to device CUDA0 load_tensors: layer 37 assigned to device CUDA0 load_tensors: layer 38 assigned to device 
CUDA0 load_tensors: layer 39 assigned to device CUDA0 load_tensors: layer 40 assigned to device CUDA0 load_tensors: layer 41 assigned to device CUDA0 load_tensors: layer 42 assigned to device CUDA0 load_tensors: layer 43 assigned to device CUDA0 load_tensors: layer 44 assigned to device CUDA0 load_tensors: layer 45 assigned to device CUDA0 load_tensors: layer 46 assigned to device CUDA0 load_tensors: layer 47 assigned to device CUDA0 load_tensors: layer 48 assigned to device CUDA0 load_tensors: layer 49 assigned to device CUDA0 load_tensors: layer 50 assigned to device CUDA0 load_tensors: layer 51 assigned to device CUDA0 load_tensors: layer 52 assigned to device CUDA0 load_tensors: layer 53 assigned to device CUDA0 load_tensors: layer 54 assigned to device CUDA0 load_tensors: layer 55 assigned to device CUDA0 load_tensors: layer 56 assigned to device CUDA0 load_tensors: layer 57 assigned to device CUDA0 load_tensors: layer 58 assigned to device CUDA0 load_tensors: layer 59 assigned to device CUDA0 load_tensors: layer 60 assigned to device CUDA0 load_tensors: layer 61 assigned to device CUDA0 load_tensors: layer 62 assigned to device CUDA0 load_tensors: layer 63 assigned to device CUDA0 load_tensors: layer 64 assigned to device CPU load_tensors: tensor 'token_embd.weight' (q6_K) (and 42 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead ``` ### OS Ubuntu 24.04.2 LTS ### GPU NVIDA A100 ### CPU AMD EPYC 7V13 64-Core Processor ### Ollama version 0.6.5
GiteaMirror added the bug and needs more info labels 2026-04-29 02:23:35 -05:00
@aniruddh622003 commented on GitHub (Apr 10, 2025):

This might have something to do with `num_ctx` causing the model to overflow GPU VRAM. The model size here is ~64 GB, so based on your configuration (A100 80 GB) the model itself should fit. Try reducing the context window and see if that works.
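
A minimal sketch of two ways to lower the context window, assuming a local Ollama server on the default port; the model tag `command-a` is a placeholder for whatever you pulled:

```
# Per-request: pass num_ctx in the options object of the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "command-a",
  "prompt": "Hello",
  "options": { "num_ctx": 8192 }
}'

# Server-wide default: set the environment variable before starting ollama serve
export OLLAMA_CONTEXT_LENGTH=8192
```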

@rick-github commented on GitHub (Apr 13, 2025):

```
load_tensors: tensor 'token_embd.weight' (q6_K) (and 42 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
```

This is not blocking; it just indicates that some tensors can't run on the GPU and will run on the CPU instead. If the model is taking a long time to become ready, it's either because disk reads are slow or VRAM writes are slow (or both). What's the output of

```
dd if=/root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f of=/dev/null
```
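
As a rough yardstick (an addition, assuming GNU coreutils `dd`): adding `bs=1M status=progress` reports sustained read throughput while the blob is read, which makes the arithmetic easy to check against the observed load time:

```
# Same blob as above, with throughput reporting (GNU dd flags assumed available)
dd if=/root/.ollama/models/blobs/sha256-ffd0081a97182da52ef3c58dcafde851cbd436ce82f71fc5ed9973828bf78a8f \
   of=/dev/null bs=1M status=progress

# At ~100 MB/s, a cold read of the 62.51 GiB file takes roughly
# 67,000 MB / 100 MB/s ≈ 670 s ≈ 11 minutes, which alone would
# explain a 10-20 minute startup.
```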
Reference: github-starred/ollama#53214