[GH-ISSUE #8254] ollama does not use the GPU: with an NVIDIA GPU, it detects the amdgpu driver and then uses the CPU to compute #5275

Closed
opened 2026-04-12 16:26:37 -05:00 by GiteaMirror · 4 comments

Originally created by @Roc136 on GitHub (Dec 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8254

What is the issue?

I'm running Ollama on a machine with an NVIDIA A100 80GB GPU and an Intel(R) Xeon(R) Gold 5320 CPU. I built Ollama with make CUSTOM_CPU_FLAGS="", started it with ollama serve, and ran ollama run llama2 to load the Llama 2 model.
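
For completeness, the full sequence was roughly the following (the same commands as above, with OLLAMA_DEBUG=1 set so the GPU discovery messages show up in the log):

# build without AVX, as described above
$ make CUSTOM_CPU_FLAGS=""

# start the server with debug logging enabled
$ OLLAMA_DEBUG=1 ollama serve

# in another shell, load the model
$ ollama run llama2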

Problem:
Ollama is running on the CPU instead of the GPU.

I checked the logs by setting OLLAMA_DEBUG=1 and found the following lines:

time=2024-12-27T13:46:02.212 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library

It seems that Ollama is attempting to use the AMD driver. Is this expected, and if not, why can't it use the GPU?

Some information about the GPU:

$ nvidia-smi
Fri Dec 27 13:53:29 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.03             Driver Version: 535.216.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100 80GB PCIe          Off | 00000000:00:0A.0 Off |                    0 |
| N/A   50C    P0              71W / 300W |      3MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Full logs, in case you need them:

2024/12/27 13:46:01 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-27T13:46:01.756 level=INFO source=images.go:757 msg="total blobs: 11"
time=2024-12-27T13:46:01.757 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-12-27T13:46:01.757 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4-11-g023e4bc)"
time=2024-12-27T13:46:01.758 level=DEBUG source=common.go:85 msg="no dynamic runners detected, using only built-in"
time=2024-12-27T13:46:01.758 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners=[cpu]
time=2024-12-27T13:46:01.758 level=DEBUG source=routes.go:1340 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-12-27T13:46:01.758 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-12-27T13:46:01.758 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2024-12-27T13:46:01.762 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-12-27T13:46:01.762 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2024-12-27T13:46:01.762 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-12-27T13:46:01.766 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
dlsym: cuInit - 0x7fa8f88134b0
dlsym: cuDriverGetVersion - 0x7fa8f88134d0
dlsym: cuDeviceGetCount - 0x7fa8f8813510
dlsym: cuDeviceGet - 0x7fa8f88134f0
dlsym: cuDeviceGetAttribute - 0x7fa8f88135f0
dlsym: cuDeviceGetUuid - 0x7fa8f8813550
dlsym: cuDeviceGetName - 0x7fa8f8813530
dlsym: cuCtxCreate_v3 - 0x7fa8f881b1b0
dlsym: cuMemGetInfo_v2 - 0x7fa8f8826680
dlsym: cuCtxDestroy - 0x7fa8f8875680
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-12-27T13:46:01.781 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
[GPU-f6468958-9e61-d8c3-ca75-787ea65d2617] CUDA totalMem 81050 mb
[GPU-f6468958-9e61-d8c3-ca75-787ea65d2617] CUDA freeMem 80627 mb
[GPU-f6468958-9e61-d8c3-ca75-787ea65d2617] Compute Capability 8.0
time=2024-12-27T13:46:02.212 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-12-27T13:46:02.212 level=INFO source=types.go:131 msg="inference compute" id=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A100 80GB PCIe" total="79.2 GiB" available="78.7 GiB"
[GIN] 2024/12/27 - 13:46:40 | 200 |     112.806µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/12/27 - 13:46:40 | 200 |   10.851251ms |       127.0.0.1 | POST     "/api/show"
time=2024-12-27T13:46:40.950 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.2 GiB" before.free="88.9 GiB" before.free_swap="119.2 GiB" now.total="94.2 GiB" now.free="88.8 GiB" now.free_swap="119.2 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
dlsym: cuInit - 0x7fa8f88134b0
dlsym: cuDriverGetVersion - 0x7fa8f88134d0
dlsym: cuDeviceGetCount - 0x7fa8f8813510
dlsym: cuDeviceGet - 0x7fa8f88134f0
dlsym: cuDeviceGetAttribute - 0x7fa8f88135f0
dlsym: cuDeviceGetUuid - 0x7fa8f8813550
dlsym: cuDeviceGetName - 0x7fa8f8813530
dlsym: cuCtxCreate_v3 - 0x7fa8f881b1b0
dlsym: cuMemGetInfo_v2 - 0x7fa8f8826680
dlsym: cuCtxDestroy - 0x7fa8f8875680
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-12-27T13:46:41.140 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="422.9 MiB"
releasing cuda driver library
time=2024-12-27T13:46:41.140 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x55ab4f7b39a0 gpu_count=1
time=2024-12-27T13:46:41.156 level=DEBUG source=sched.go:224 msg="loading first model" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
time=2024-12-27T13:46:41.156 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2024-12-27T13:46:41.156 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 gpu=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 parallel=4 available=84544258048 required="8.7 GiB"
time=2024-12-27T13:46:41.157 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="94.2 GiB" before.free="88.8 GiB" before.free_swap="119.2 GiB" now.total="94.2 GiB" now.free="88.8 GiB" now.free_swap="119.2 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03
dlsym: cuInit - 0x7fa8f88134b0
dlsym: cuDriverGetVersion - 0x7fa8f88134d0
dlsym: cuDeviceGetCount - 0x7fa8f8813510
dlsym: cuDeviceGet - 0x7fa8f88134f0
dlsym: cuDeviceGetAttribute - 0x7fa8f88135f0
dlsym: cuDeviceGetUuid - 0x7fa8f8813550
dlsym: cuDeviceGetName - 0x7fa8f8813530
dlsym: cuCtxCreate_v3 - 0x7fa8f881b1b0
dlsym: cuMemGetInfo_v2 - 0x7fa8f8826680
dlsym: cuCtxDestroy - 0x7fa8f8875680
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-12-27T13:46:41.342 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-f6468958-9e61-d8c3-ca75-787ea65d2617 name="NVIDIA A100 80GB PCIe" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="422.9 MiB"
releasing cuda driver library
time=2024-12-27T13:46:41.342 level=INFO source=server.go:104 msg="system memory" total="94.2 GiB" free="88.8 GiB" free_swap="119.2 GiB"
time=2024-12-27T13:46:41.342 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2024-12-27T13:46:41.343 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.7 GiB" memory.required.partial="8.7 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[8.7 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="7.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="681.0 MiB"
time=2024-12-27T13:46:41.344 level=DEBUG source=gpu.go:714 msg="no filter required for library cpu"
time=2024-12-27T13:46:41.344 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --verbose --threads 64 --parallel 4 --port 44134"
time=2024-12-27T13:46:41.344 level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=.:/usr/bin]"
time=2024-12-27T13:46:41.345 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-27T13:46:41.345 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2024-12-27T13:46:41.345 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-27T13:46:41.364 level=INFO source=runner.go:938 msg="starting go runner"
time=2024-12-27T13:46:41.364 level=INFO source=runner.go:939 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=64
time=2024-12-27T13:46:41.364 level=INFO source=runner.go:997 msg="Server listening on 127.0.0.1:44134"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: control token:      2 '</s>' is not marked as EOG
llm_load_vocab: control token:      1 '<s>' is not marked as EOG
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors: tensor 'token_embd.weight' (q4_0) (and 290 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
time=2024-12-27T13:46:41.597 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors:   CPU_Mapped model buffer size =  3647.87 MiB
llama_new_context_with_model: n_seq_max     = 4
llama_new_context_with_model: n_ctx         = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 10000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (4096) -- the full capacity of the model will not be utilized
time=2024-12-27T13:46:41.847 level=DEBUG source=server.go:600 msg="model load progress 1.00"
time=2024-12-27T13:46:42.098 level=DEBUG source=server.go:603 msg="model load completed, waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:        CPU KV buffer size =  4096.00 MiB
llama_new_context_with_model: KV self size  = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.55 MiB
llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
time=2024-12-27T13:46:44.356 level=INFO source=server.go:594 msg="llama runner started in 3.01 seconds"
time=2024-12-27T13:46:44.356 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
[GIN] 2024/12/27 - 13:46:44 | 200 |  3.416664942s |       127.0.0.1 | POST     "/api/generate"
time=2024-12-27T13:46:44.357 level=DEBUG source=sched.go:466 msg="context for request finished"
time=2024-12-27T13:46:44.357 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 duration=5m0s
time=2024-12-27T13:46:44.357 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 refCount=0

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4-11-g023e4bc

GiteaMirror added the bug label 2026-04-12 16:26:37 -05:00

@rick-github commented on GitHub (Dec 27, 2024):

time=2024-12-27T13:46:01.758 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners=[cpu]

You've built Ollama without GPU runners. It's much easier to just install Ollama with curl -fsSL https://ollama.com/install.sh | sh or to run the Docker image with docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama.
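
A quick way to confirm that the build you end up with actually has GPU support is to check the startup log and the runner status once a model is loaded, for example:

# in one shell: the startup log should list a cuda runner, not only runners=[cpu]
$ ollama serve 2>&1 | grep "Dynamic LLM libraries"

# in another shell: after a request, ollama ps reports where the model is resident
$ ollama run llama2 "hello" > /dev/null
$ ollama ps
# the PROCESSOR column should show something like "100% GPU" rather than "100% CPU"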


@Roc136 commented on GitHub (Dec 27, 2024):

Thank you for your reply. There is no AVX instruction set on my device (possibly because of the virtual machine or some BIOS setting, but I can't change those configurations right now), so I need to add CUSTOM_CPU_FLAGS="" when compiling. I am using Docker with Ubuntu 24.04; the command to start the container is docker run -d --gpus=all --name ollama ubuntu:24.04, and the build command is make CUSTOM_CPU_FLAGS="". According to the documentation (https://github.com/ollama/ollama/blob/main/docs/development.md#advanced-cpu-vector-settings), this command builds Ollama without AVX. If possible, I would like to know why it builds Ollama without GPU runners, and how I can build Ollama with GPU runners but without AVX.
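
(As a side note, whether the virtual CPU advertises AVX at all can be checked directly from the flags the kernel reports; this is a generic Linux check, not specific to Ollama:)

# prints avx / avx2 / avx512... variants if available; empty output means no AVX
$ grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u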


@rick-github commented on GitHub (Dec 27, 2024):

The build system is supposed to build GPU runners if it detects CUDA libraries. Have you installed the NVIDIA CUDA development and runtime packages? Are they in a non-standard location? Does the output of the build system indicate problems with finding or compiling against the CUDA libraries?
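
For example, inside the container where make runs, something along these lines should succeed if the toolkit and driver libraries are visible (the /usr/local/cuda prefix is just the default location; adjust for a custom install):

# compiler from the CUDA development package, needed to build the GPU runners
$ nvcc --version

# runtime library from the toolkit, and the driver library that GPU discovery loads at runtime
$ ls /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/libcuda.so*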


@Roc136 commented on GitHub (Dec 27, 2024):

I have installed the NVIDIA CUDA development and runtime packages and the NVIDIA Container Toolkit, and I can run nvidia-smi in the container.
I can run nvcc --version on the host machine, but I can't run it inside the container; does that matter?

And the CUDA packages are indeed installed in a non-standard location, but I have set LD_LIBRARY_PATH to the CUDA directory and added it to PATH.
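
(For illustration, with a non-standard prefix the exports would look something like the following; /path/to/cuda is a placeholder for the actual install directory:)

# placeholder for the real, non-standard CUDA install directory
$ export CUDA_DIR=/path/to/cuda
$ export PATH="$CUDA_DIR/bin:$PATH"
$ export LD_LIBRARY_PATH="$CUDA_DIR/lib64:$LD_LIBRARY_PATH"

# nvcc has to resolve inside the same container where make is run
$ which nvcc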

I have now recompiled Ollama, and the output seems fine:

# make clean
rm -rf ./llama/build/linux-amd64 ./dist/linux-amd64/lib/ollama ./ollama ./dist/linux-amd64/bin/ollama
go clean -cache
# make CUSTOM_CPU_FLAGS="" -j 10
make[1]: Nothing to be done for 'cpu'.
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.4-11-g023e4bc\"  " -trimpath  -o ollama .
# 
Reference: github-starred/ollama#5275