[GH-ISSUE #10848] Ollama detect my GPU but model not running on it #7122

Closed
opened 2026-04-12 19:07:37 -05:00 by GiteaMirror · 14 comments

Originally created by @Li-Wentao on GitHub (May 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10848

What is the issue?

OS: archlinux
CPU: AMD 9800X3D
GPU: NVIDIA RTX 5080
NVIDIA Driver: nvidia-open-dkms

I'm having trouble getting the deepseek-r1:1.5b model to run on the GPU. As you can see from the log, Ollama detects my 5080, but when I run the model it falls back to the CPU. Can someone help me with this?

Relevant log output

======================== ollama launching log ==================================
time=2025-05-24T15:57:23.082-05:00 level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/usr/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=images.go:463 msg="total blobs: 5"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=routes.go:1258 msg="Listening on 127.0.0.1:11434 (version 0.7.1)"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-24T15:57:23.373-05:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-05-24T15:57:23.375-05:00 level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2025-05-24T15:57:23.375-05:00 level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
time=2025-05-24T15:57:23.375-05:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-5ef89995-11e7-5689-7628-8e8735b9df54 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5080" total="15.5 GiB" available="14.5 GiB"


========================= ollama model running log ================================
[GIN] 2025/05/24 - 16:06:55 | 200 |      51.808µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/05/24 - 16:07:59 | 200 |      18.936µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/24 - 16:07:59 | 200 |   16.920679ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-24T16:08:00.059-05:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/home/usr/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-5ef89995-11e7-5689-7628-8e8735b9df54 parallel=2 available=15606022144 required="1.9 GiB"
time=2025-05-24T16:08:00.171-05:00 level=INFO source=server.go:135 msg="system memory" total="60.5 GiB" free="55.6 GiB" free_swap="4.0 GiB"
time=2025-05-24T16:08:00.171-05:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[14.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.9 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="934.7 MiB" memory.weights.repeating="752.1 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/usr/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.7.1
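
A quick way to see where a loaded model actually runs is to compare the PROCESSOR column of ollama ps with nvidia-smi while a request is in flight. A minimal sketch, using the model from this report:

```shell
ollama run deepseek-r1:1.5b "hello" > /dev/null   # load the model and answer one prompt
ollama ps     # PROCESSOR shows e.g. "100% GPU", or a split such as "59%/41% CPU/GPU"
nvidia-smi    # a GPU-resident runner shows up here as an ollama process holding VRAM
```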

GiteaMirror added the bug label 2026-04-12 19:07:37 -05:00

@rick-github commented on GitHub (May 24, 2025):

Full log required.

@stubkan commented on GitHub (May 24, 2025):

Have the same issue, so I'm posting here rather than making a new post. I imported a 4 GB model (qwen3-4b) and can see it taking up GPU VRAM, but inference just uses the CPU. I can see all the CPU core usage going up and down in the system monitor. Not good.

Checking ollama ps shows this:

ollama ps
NAME               ID              SIZE     PROCESSOR          UNTIL
qwen3-4b:latest    fe7c4d51aadb    13 GB    59%/41% CPU/GPU

I'm not sure how a 4 GB model ends up taking 13 GB of memory, but that may be why it's slow and using the CPU?

ollama list
NAME               ID              SIZE      MODIFIED
qwen3-4b:latest    fe7c4d51aadb    4.3 GB    16 minutes ago

Is this normal behaviour?

Logs:


May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  26:               general.quantization_version u32              = 2
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  27:                          general.file_type u32              = 7
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  28:                      quantize.imatrix.file str              = Qwen3-4B-GGUF/imatrix_unsloth.dat
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  29:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen3-4B.txt
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  30:             quantize.imatrix.entries_count i32              = 252
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  31:              quantize.imatrix.chunks_count i32              = 685
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - type  f32:  145 tensors
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - type q8_0:  253 tensors
May 24 22:58:22 pleiades ollama[1391]: print_info: file format = GGUF V3 (latest)
May 24 22:58:22 pleiades ollama[1391]: print_info: file type   = Q8_0
May 24 22:58:22 pleiades ollama[1391]: print_info: file size   = 3.98 GiB (8.50 BPW)
May 24 22:58:22 pleiades ollama[1391]: load: special tokens cache size = 26
May 24 22:58:22 pleiades ollama[1391]: load: token to piece cache size = 0.9311 MB
May 24 22:58:22 pleiades ollama[1391]: print_info: arch             = qwen3
May 24 22:58:22 pleiades ollama[1391]: print_info: vocab_only       = 1
May 24 22:58:22 pleiades ollama[1391]: print_info: model type       = ?B
May 24 22:58:22 pleiades ollama[1391]: print_info: model params     = 4.02 B
May 24 22:58:22 pleiades ollama[1391]: print_info: general.name     = Qwen3-4B
May 24 22:58:22 pleiades ollama[1391]: print_info: vocab type       = BPE
May 24 22:58:22 pleiades ollama[1391]: print_info: n_vocab          = 151936
May 24 22:58:22 pleiades ollama[1391]: print_info: n_merges         = 151387
May 24 22:58:22 pleiades ollama[1391]: print_info: BOS token        = 11 ','
May 24 22:58:22 pleiades ollama[1391]: print_info: EOS token        = 151645 '<|im_end|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: EOT token        = 151645 '<|im_end|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: PAD token        = 151654 '<|vision_pad|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: LF token         = 198 'Ċ'
May 24 22:58:22 pleiades ollama[1391]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: FIM REP token    = 151663 '<|repo_name|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: EOG token        = 151643 '<|endoftext|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: EOG token        = 151645 '<|im_end|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: EOG token        = 151662 '<|fim_pad|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: EOG token        = 151663 '<|repo_name|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: EOG token        = 151664 '<|file_sep|>'
May 24 22:58:22 pleiades ollama[1391]: print_info: max token length = 256
May 24 22:58:22 pleiades ollama[1391]: llama_model_load: vocab only - skipping tensors
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.484+01:00 level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-eed555233267a33c7e8ee31682762cc7751b3f6d224039086e0e846f05fffa5d --ctx-size 32768 --batch-size 512 --n-gpu-layers 5 --threads 4 --parallel 1 --port 35167"
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.484+01:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.484+01:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.484+01:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.491+01:00 level=INFO source=runner.go:815 msg="starting go runner"
May 24 22:58:22 pleiades ollama[1391]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
May 24 22:58:22 pleiades ollama[1391]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
May 24 22:58:22 pleiades ollama[1391]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
May 24 22:58:22 pleiades ollama[1391]: ggml_cuda_init: found 1 CUDA devices:
May 24 22:58:22 pleiades ollama[1391]:   Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
May 24 22:58:22 pleiades ollama[1391]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.761+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.762+01:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:35167"
May 24 22:58:22 pleiades ollama[1391]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5289 MiB free
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: loaded meta data with 32 key-value pairs and 398 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-eed555233267a33c7e8ee31682762cc7751b3f6d224039086e0e846f05fffa5d (version GGUF V3 (latest))
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   1:                               general.type str              = model
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   2:                               general.name str              = Qwen3-4B
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   3:                           general.basename str              = Qwen3-4B
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   4:                       general.quantized_by str              = Unsloth
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   5:                         general.size_label str              = 4B
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   6:                           general.repo_url str              = https://huggingface.co/unsloth
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   7:                          qwen3.block_count u32              = 36
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   8:                       qwen3.context_length u32              = 40960
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv   9:                     qwen3.embedding_length u32              = 2560
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  10:                  qwen3.feed_forward_length u32              = 9728
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  11:                 qwen3.attention.head_count u32              = 32
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  12:              qwen3.attention.head_count_kv u32              = 8
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  13:                       qwen3.rope.freq_base f32              = 1000000.000000
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  14:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  15:                 qwen3.attention.key_length u32              = 128
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  16:               qwen3.attention.value_length u32              = 128
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = qwen2
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  22:                tokenizer.ggml.eos_token_id u32              = 151645
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 151654
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  26:               general.quantization_version u32              = 2
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  27:                          general.file_type u32              = 7
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  28:                      quantize.imatrix.file str              = Qwen3-4B-GGUF/imatrix_unsloth.dat
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  29:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen3-4B.txt
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  30:             quantize.imatrix.entries_count i32              = 252
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - kv  31:              quantize.imatrix.chunks_count i32              = 685
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - type  f32:  145 tensors
May 24 22:58:22 pleiades ollama[1391]: llama_model_loader: - type q8_0:  253 tensors
May 24 22:58:22 pleiades ollama[1391]: print_info: file format = GGUF V3 (latest)
May 24 22:58:22 pleiades ollama[1391]: print_info: file type   = Q8_0
May 24 22:58:22 pleiades ollama[1391]: print_info: file size   = 3.98 GiB (8.50 BPW)
May 24 22:58:22 pleiades ollama[1391]: time=2025-05-24T22:58:22.986+01:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
May 24 22:58:23 pleiades ollama[1391]: load: special tokens cache size = 26
May 24 22:58:23 pleiades ollama[1391]: load: token to piece cache size = 0.9311 MB
May 24 22:58:23 pleiades ollama[1391]: print_info: arch             = qwen3
May 24 22:58:23 pleiades ollama[1391]: print_info: vocab_only       = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: n_ctx_train      = 40960
May 24 22:58:23 pleiades ollama[1391]: print_info: n_embd           = 2560
May 24 22:58:23 pleiades ollama[1391]: print_info: n_layer          = 36
May 24 22:58:23 pleiades ollama[1391]: print_info: n_head           = 32
May 24 22:58:23 pleiades ollama[1391]: print_info: n_head_kv        = 8
May 24 22:58:23 pleiades ollama[1391]: print_info: n_rot            = 128
May 24 22:58:23 pleiades ollama[1391]: print_info: n_swa            = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: n_swa_pattern    = 1
May 24 22:58:23 pleiades ollama[1391]: print_info: n_embd_head_k    = 128
May 24 22:58:23 pleiades ollama[1391]: print_info: n_embd_head_v    = 128
May 24 22:58:23 pleiades ollama[1391]: print_info: n_gqa            = 4
May 24 22:58:23 pleiades ollama[1391]: print_info: n_embd_k_gqa     = 1024
May 24 22:58:23 pleiades ollama[1391]: print_info: n_embd_v_gqa     = 1024
May 24 22:58:23 pleiades ollama[1391]: print_info: f_norm_eps       = 0.0e+00
May 24 22:58:23 pleiades ollama[1391]: print_info: f_norm_rms_eps   = 1.0e-06
May 24 22:58:23 pleiades ollama[1391]: print_info: f_clamp_kqv      = 0.0e+00
May 24 22:58:23 pleiades ollama[1391]: print_info: f_max_alibi_bias = 0.0e+00
May 24 22:58:23 pleiades ollama[1391]: print_info: f_logit_scale    = 0.0e+00
May 24 22:58:23 pleiades ollama[1391]: print_info: f_attn_scale     = 0.0e+00
May 24 22:58:23 pleiades ollama[1391]: print_info: n_ff             = 9728
May 24 22:58:23 pleiades ollama[1391]: print_info: n_expert         = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: n_expert_used    = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: causal attn      = 1
May 24 22:58:23 pleiades ollama[1391]: print_info: pooling type     = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: rope type        = 2
May 24 22:58:23 pleiades ollama[1391]: print_info: rope scaling     = linear
May 24 22:58:23 pleiades ollama[1391]: print_info: freq_base_train  = 1000000.0
May 24 22:58:23 pleiades ollama[1391]: print_info: freq_scale_train = 1
May 24 22:58:23 pleiades ollama[1391]: print_info: n_ctx_orig_yarn  = 40960
May 24 22:58:23 pleiades ollama[1391]: print_info: rope_finetuned   = unknown
May 24 22:58:23 pleiades ollama[1391]: print_info: ssm_d_conv       = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: ssm_d_inner      = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: ssm_d_state      = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: ssm_dt_rank      = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: ssm_dt_b_c_rms   = 0
May 24 22:58:23 pleiades ollama[1391]: print_info: model type       = 4B
May 24 22:58:23 pleiades ollama[1391]: print_info: model params     = 4.02 B
May 24 22:58:23 pleiades ollama[1391]: print_info: general.name     = Qwen3-4B
May 24 22:58:23 pleiades ollama[1391]: print_info: vocab type       = BPE
May 24 22:58:23 pleiades ollama[1391]: print_info: n_vocab          = 151936
May 24 22:58:23 pleiades ollama[1391]: print_info: n_merges         = 151387
May 24 22:58:23 pleiades ollama[1391]: print_info: BOS token        = 11 ','
May 24 22:58:23 pleiades ollama[1391]: print_info: EOS token        = 151645 '<|im_end|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: EOT token        = 151645 '<|im_end|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: PAD token        = 151654 '<|vision_pad|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: LF token         = 198 'Ċ'
May 24 22:58:23 pleiades ollama[1391]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: FIM REP token    = 151663 '<|repo_name|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: EOG token        = 151643 '<|endoftext|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: EOG token        = 151645 '<|im_end|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: EOG token        = 151662 '<|fim_pad|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: EOG token        = 151663 '<|repo_name|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: EOG token        = 151664 '<|file_sep|>'
May 24 22:58:23 pleiades ollama[1391]: print_info: max token length = 256
May 24 22:58:23 pleiades ollama[1391]: load_tensors: loading model tensors, this can take a while... (mmap = true)
May 24 22:58:23 pleiades ollama[1391]: load_tensors: offloading 5 repeating layers to GPU
May 24 22:58:23 pleiades ollama[1391]: load_tensors: offloaded 5/37 layers to GPU
May 24 22:58:23 pleiades ollama[1391]: load_tensors:        CUDA0 model buffer size =   511.43 MiB
May 24 22:58:23 pleiades ollama[1391]: load_tensors:   CPU_Mapped model buffer size =  3565.00 MiB
May 24 22:58:23 pleiades ollama[1391]: llama_context: constructing llama_context
May 24 22:58:23 pleiades ollama[1391]: llama_context: n_seq_max     = 1
May 24 22:58:23 pleiades ollama[1391]: llama_context: n_ctx         = 32768
May 24 22:58:23 pleiades ollama[1391]: llama_context: n_ctx_per_seq = 32768
May 24 22:58:23 pleiades ollama[1391]: llama_context: n_batch       = 512
May 24 22:58:23 pleiades ollama[1391]: llama_context: n_ubatch      = 512
May 24 22:58:23 pleiades ollama[1391]: llama_context: causal_attn   = 1
May 24 22:58:23 pleiades ollama[1391]: llama_context: flash_attn    = 0
May 24 22:58:23 pleiades ollama[1391]: llama_context: freq_base     = 1000000.0
May 24 22:58:23 pleiades ollama[1391]: llama_context: freq_scale    = 1
May 24 22:58:23 pleiades ollama[1391]: llama_context: n_ctx_per_seq (32768) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
May 24 22:58:23 pleiades ollama[1391]: llama_context:        CPU  output buffer size =     0.59 MiB
May 24 22:58:23 pleiades ollama[1391]: llama_kv_cache_unified: kv_size = 32768, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32
May 24 22:58:23 pleiades ollama[1391]: llama_kv_cache_unified:      CUDA0 KV buffer size =   640.00 MiB
May 24 22:58:24 pleiades ollama[1391]: llama_kv_cache_unified:        CPU KV buffer size =  3968.00 MiB
May 24 22:58:24 pleiades ollama[1391]: llama_kv_cache_unified: KV self size  = 4608.00 MiB, K (f16): 2304.00 MiB, V (f16): 2304.00 MiB
May 24 22:58:24 pleiades ollama[1391]: llama_context:      CUDA0 compute buffer size =  2322.00 MiB
May 24 22:58:24 pleiades ollama[1391]: llama_context:  CUDA_Host compute buffer size =    69.01 MiB
May 24 22:58:24 pleiades ollama[1391]: llama_context: graph nodes  = 1374
May 24 22:58:24 pleiades ollama[1391]: llama_context: graph splits = 407 (with bs=512), 65 (with bs=1)
May 24 22:58:24 pleiades ollama[1391]: time=2025-05-24T22:58:24.492+01:00 level=INFO source=server.go:630 msg="llama runner started in 2.01 seconds"
May 24 22:58:32 pleiades ollama[1391]: [GIN] 2025/05/24 - 22:58:32 | 200 | 10.480719552s |       127.0.0.1 | POST     "/api/chat"
May 24 23:01:55 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:01:55 | 200 |          1m8s |       127.0.0.1 | POST     "/api/chat"
May 24 23:05:23 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:05:23 | 200 |      15.649µs |       127.0.0.1 | HEAD     "/"
May 24 23:05:23 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:05:23 | 200 |   25.963275ms |       127.0.0.1 | POST     "/api/show"
May 24 23:05:23 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:05:23 | 200 |   14.118912ms |       127.0.0.1 | POST     "/api/generate"
May 24 23:05:26 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:05:26 | 200 |   67.204728ms |       127.0.0.1 | POST     "/api/show"
May 24 23:05:44 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:05:44 | 200 |  7.356454887s |       127.0.0.1 | POST     "/api/chat"
May 24 23:05:57 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:05:57 | 200 |   31.040694ms |       127.0.0.1 | POST     "/api/show"
May 24 23:08:47 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:08:47 | 200 |      36.266µs |       127.0.0.1 | GET      "/api/version"
May 24 23:08:54 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:08:54 | 200 |       22.67µs |       127.0.0.1 | HEAD     "/"
May 24 23:08:54 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:08:54 | 200 |    1.371102ms |       127.0.0.1 | GET      "/api/ps"
May 24 23:10:29 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:10:29 | 200 |      55.611µs |       127.0.0.1 | HEAD     "/"
May 24 23:10:29 pleiades ollama[1391]: [GIN] 2025/05/24 - 23:10:29 | 200 |    1.432656ms |       127.0.0.1 | GET      "/api/tags"

@rick-github commented on GitHub (May 24, 2025):

It's taking up a lot of VRAM because you have a context of 32768 tokens. In the bit of the log that you didn't include it will show the memory estimation, but the upshot is that ollama can only load 5 of the 37 layers of the model into VRAM. This means that 32 layers are loaded into system RAM where the CPU does the inference. Because the CPU is much slower than the GPU at doing the matrix operations required for inference, most of the time is spent waiting for the CPU to finish its calculations. This shows as high utilization for the CPU and low utilization for the GPU. This is normal behaviour.
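
That estimate can be sanity-checked from numbers already in the log (n_layer = 36, n_ctx = 32768, n_embd_k_gqa = n_embd_v_gqa = 1024, and 2 bytes per f16 value). A quick shell-arithmetic sketch:

```shell
# Per-direction KV cache: n_layer * n_ctx * n_embd_gqa * 2 bytes (f16)
echo $((36 * 32768 * 1024 * 2 / 1024 / 1024))      # 2304 MiB for K, and the same again for V
echo $((2 * 36 * 32768 * 1024 * 2 / 1024 / 1024))  # 4608 MiB total, matching "KV self size" in the log
```

Halving num_ctx halves this cache, which is why lowering the context frees several GiB. It can be set per session with /set parameter num_ctx inside ollama run, or server-wide with OLLAMA_CONTEXT_LENGTH.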

@stubkan commented on GitHub (May 24, 2025):

Thank you for that. I put the context length back down and it does free up a lot of RAM.

I had increased it because Qwen3's thinking easily hits the Ollama token limit before it can start working on the prompt. KoboldCpp can resume generating when the limit is hit, but I couldn't figure out how to do that with Ollama. Is there a way to do that, rather than sacrificing RAM?

@rick-github commented on GitHub (May 24, 2025):

You can experiment with flash attention (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-enable-flash-attention) and k/v cache quantization (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache) to reduce the VRAM footprint. I don't use KoboldCpp, so I don't know the feature you mention or whether it has an equivalent in ollama. In ollama, when the context limit is reached, the runner shifts the current context to make room for new tokens.
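
Both are plain environment variables on the server. A minimal sketch, assuming a manually started ollama serve (on a systemd install, the same variables go into the unit via systemctl edit ollama.service):

```shell
# KV cache quantization requires flash attention to be enabled.
# q8_0 roughly halves the f16 KV cache; q4_0 shrinks it further at some quality cost.
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```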

@rick-github commented on GitHub (May 24, 2025):

Also note that you can disable qwen3 thinking with /nothink.
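
The switch is appended to the prompt itself. One caveat: the Qwen3 model card spells the soft switch /no_think, so try that form if /nothink has no effect:

```shell
ollama run qwen3-4b:latest "What is the capital of France? /no_think"
```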

@Li-Wentao commented on GitHub (May 26, 2025):

> Full log required.

time=2025-05-24T15:57:23.082-05:00 level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/usr/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=images.go:463 msg="total blobs: 5"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=routes.go:1258 msg="Listening on 127.0.0.1:11434 (version 0.7.1)"
time=2025-05-24T15:57:23.082-05:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-24T15:57:23.373-05:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-05-24T15:57:23.375-05:00 level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2025-05-24T15:57:23.375-05:00 level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
time=2025-05-24T15:57:23.375-05:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-5ef89995-11e7-5689-7628-8e8735b9df54 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5080" total="15.5 GiB" available="14.5 GiB"
[GIN] 2025/05/24 - 16:06:55 | 200 |      51.808µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/05/24 - 16:07:59 | 200 |      18.936µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/24 - 16:07:59 | 200 |   16.920679ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-24T16:08:00.059-05:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/home/usr/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-5ef89995-11e7-5689-7628-8e8735b9df54 parallel=2 available=15606022144 required="1.9 GiB"
time=2025-05-24T16:08:00.171-05:00 level=INFO source=server.go:135 msg="system memory" total="60.5 GiB" free="55.6 GiB" free_swap="4.0 GiB"
time=2025-05-24T16:08:00.171-05:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[14.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.9 GiB" memory.required.partial="1.9 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[1.9 GiB]" memory.weights.total="934.7 MiB" memory.weights.repeating="752.1 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/usr/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 1.5B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 1.5B
llama_model_loader: - kv 5: qwen2.block_count u32 = 28
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 1536
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 8960
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 12
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 1.04 GiB (5.00 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 1.78 B
print_info: general.name = DeepSeek R1 Distill Qwen 1.5B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token = 151643 '<|end▁of▁sentence|>'
print_info: EOT token = 151643 '<|end▁of▁sentence|>'
print_info: PAD token = 151643 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|end▁of▁sentence|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-24T16:08:00.275-05:00 level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /home/usr/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --parallel 2 --port 34845"
time=2025-05-24T16:08:00.275-05:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-05-24T16:08:00.275-05:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-24T16:08:00.275-05:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-24T16:08:00.279-05:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-05-24T16:08:00.279-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-05-24T16:08:00.280-05:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:34845"
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/usr/.ollama/models/blobs/sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 1.5B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv 4: general.size_label str = 1.5B
llama_model_loader: - kv 5: qwen2.block_count u32 = 28
llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
llama_model_loader: - kv 7: qwen2.embedding_length u32 = 1536
llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 8960
llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 12
llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 1.04 GiB (5.00 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 1536
print_info: n_layer = 28
print_info: n_head = 12
print_info: n_head_kv = 2
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 6
print_info: n_embd_k_gqa = 256
print_info: n_embd_v_gqa = 256
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 8960
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = -1
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 1.5B
print_info: model params = 1.78 B
print_info: general.name = DeepSeek R1 Distill Qwen 1.5B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token = 151643 '<|end▁of▁sentence|>'
print_info: EOT token = 151643 '<|end▁of▁sentence|>'
print_info: PAD token = 151643 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|end▁of▁sentence|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: CPU_Mapped model buffer size = 1059.89 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 2
llama_context: n_ctx = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 1024
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 10000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 1.17 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1, padding = 32
llama_kv_cache_unified: CPU KV buffer size = 224.00 MiB
llama_kv_cache_unified: KV self size = 224.00 MiB, K (f16): 112.00 MiB, V (f16): 112.00 MiB
llama_context: CPU compute buffer size = 302.75 MiB
llama_context: graph nodes = 1042
llama_context: graph splits = 1
time=2025-05-24T16:08:00.526-05:00 level=INFO source=server.go:630 msg="llama runner started in 0.25 seconds"
[GIN] 2025/05/24 - 16:08:00 | 200 | 620.371379ms | 127.0.0.1 | POST "/api/generate"
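
For readers skimming this log: the scheduler picked the GPU ("new model will fit in available VRAM in single GPU"), yet every buffer at the end is allocated on the CPU (`CPU_Mapped model buffer`, `CPU KV buffer`, `CPU compute buffer`), which is the CPU fallback diagnosed below. A hypothetical one-liner to pull the telling lines out of the systemd journal (pattern list is illustrative):

```
# Assumes ollama runs as a systemd service.
# A GPU load would report CUDA buffers here instead of CPU ones.
journalctl -u ollama --no-pager | grep -E 'inference compute|model buffer|KV buffer'
```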


@rick-github commented on GitHub (May 26, 2025):

```
time=2025-05-24T16:08:00.279-05:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-05-24T16:08:00.279-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
```

No GPU or CPU backends found in `/usr/local/lib/ollama`. How did you install ollama?


@Li-Wentao commented on GitHub (May 27, 2025):

> ```
> time=2025-05-24T16:08:00.279-05:00 level=INFO source=runner.go:815 msg="starting go runner"
> time=2025-05-24T16:08:00.279-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
> ```
>
> No GPU or CPU backends found in `/usr/local/lib/ollama`. How did you install ollama?

Thanks for helping. I used pacman to install ollama and ollama-cuda.


@rick-github commented on GitHub (May 27, 2025):

What's the output of

```
ls -lR /usr/local/lib/ollama
```

@Li-Wentao commented on GitHub (May 27, 2025):

> What's the output of
>
> ```
> ls -lR /usr/local/lib/ollama
> ```

```
/usr/local/lib/ollama:
total 4
drwxr-xr-x 2 root root 4096 May 24 01:09 cuda_v11

/usr/local/lib/ollama/cuda_v11:
total 76280
lrwxrwxrwx 1 root root       25 May 23 14:10 libcublasLt.so.11 -> libcublasLt.so.11.5.1.109
-rwxr-xr-x 1 root root 78107648 May 24 01:09 libcublas.so.11.5.1.109
```


@rick-github commented on GitHub (May 27, 2025):

This is what it should look like:

```
/usr/local/lib/ollama:
total 4772
drwxr-xr-x 2 root root   4096 Mai 23 21:10 cuda_v11
drwxr-xr-x 2 root root   4096 Mai 23 21:12 cuda_v12
-rwxr-xr-x 1 root root 595648 Mai 23 20:59 libggml-base.so
-rwxr-xr-x 1 root root 619280 Mai 23 20:59 libggml-cpu-alderlake.so
-rwxr-xr-x 1 root root 619280 Mai 23 20:59 libggml-cpu-haswell.so
-rwxr-xr-x 1 root root 725776 Mai 23 20:59 libggml-cpu-icelake.so
-rwxr-xr-x 1 root root 606992 Mai 23 20:59 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 root root 729872 Mai 23 20:59 libggml-cpu-skylakex.so
-rwxr-xr-x 1 root root 480048 Mai 23 20:59 libggml-cpu-sse42.so
-rwxr-xr-x 1 root root 475952 Mai 23 20:59 libggml-cpu-x64.so

/usr/local/lib/ollama/cuda_v11:
total 1158364
lrwxrwxrwx 1 root root        25 Mai 23 21:10 libcublasLt.so.11 -> libcublasLt.so.11.5.1.109
-rwxr-xr-x 1 root root 263770264 Mai  4  2021 libcublasLt.so.11.5.1.109
lrwxrwxrwx 1 root root        23 Mai 23 21:10 libcublas.so.11 -> libcublas.so.11.5.1.109
-rwxr-xr-x 1 root root 121866104 Mai  4  2021 libcublas.so.11.5.1.109
lrwxrwxrwx 1 root root        21 Mai 23 21:10 libcudart.so.11.0 -> libcudart.so.11.3.109
-rwxr-xr-x 1 root root    619192 Mai  4  2021 libcudart.so.11.3.109
-rwxr-xr-x 1 root root 799883944 Mai 23 21:10 libggml-cuda.so

/usr/local/lib/ollama/cuda_v12:
total 2104184
lrwxrwxrwx 1 root root         23 Mai 23 21:12 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
-rwxr-xr-x 1 root root  751771728 Jul  8  2015 libcublasLt.so.12.8.4.1
lrwxrwxrwx 1 root root         21 Mai 23 21:12 libcublas.so.12 -> libcublas.so.12.8.4.1
-rwxr-xr-x 1 root root  116388640 Jul  8  2015 libcublas.so.12.8.4.1
lrwxrwxrwx 1 root root         20 Mai 23 21:12 libcudart.so.12 -> libcudart.so.12.8.90
-rwxr-xr-x 1 root root     728800 Jul  8  2015 libcudart.so.12.8.90
```

Either try reinstalling `ollama` and `ollama-cuda`, or use the official install method shown [here](https://ollama.com/download/linux).
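
A concrete sketch of both suggested routes (the script URL is the one behind the download page linked above; adjust for your setup):

```
# Route 1: reinstall the Arch packages the reporter used.
sudo pacman -S ollama ollama-cuda

# Route 2: the official Linux install script.
curl -fsSL https://ollama.com/install.sh | sh

# Afterwards the backend libraries should be present
# (path used by the official script; the Arch packages may use another prefix):
ls -l /usr/local/lib/ollama
```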


@Li-Wentao commented on GitHub (May 27, 2025):

Okay, thanks again. Let me try to reinstall with the official method.


@Li-Wentao commented on GitHub (May 27, 2025):

> This is what it should look like:
>
> ```
> /usr/local/lib/ollama:
> total 4772
> drwxr-xr-x 2 root root   4096 Mai 23 21:10 cuda_v11
> drwxr-xr-x 2 root root   4096 Mai 23 21:12 cuda_v12
> -rwxr-xr-x 1 root root 595648 Mai 23 20:59 libggml-base.so
> -rwxr-xr-x 1 root root 619280 Mai 23 20:59 libggml-cpu-alderlake.so
> -rwxr-xr-x 1 root root 619280 Mai 23 20:59 libggml-cpu-haswell.so
> -rwxr-xr-x 1 root root 725776 Mai 23 20:59 libggml-cpu-icelake.so
> -rwxr-xr-x 1 root root 606992 Mai 23 20:59 libggml-cpu-sandybridge.so
> -rwxr-xr-x 1 root root 729872 Mai 23 20:59 libggml-cpu-skylakex.so
> -rwxr-xr-x 1 root root 480048 Mai 23 20:59 libggml-cpu-sse42.so
> -rwxr-xr-x 1 root root 475952 Mai 23 20:59 libggml-cpu-x64.so
>
> /usr/local/lib/ollama/cuda_v11:
> total 1158364
> lrwxrwxrwx 1 root root        25 Mai 23 21:10 libcublasLt.so.11 -> libcublasLt.so.11.5.1.109
> -rwxr-xr-x 1 root root 263770264 Mai  4  2021 libcublasLt.so.11.5.1.109
> lrwxrwxrwx 1 root root        23 Mai 23 21:10 libcublas.so.11 -> libcublas.so.11.5.1.109
> -rwxr-xr-x 1 root root 121866104 Mai  4  2021 libcublas.so.11.5.1.109
> lrwxrwxrwx 1 root root        21 Mai 23 21:10 libcudart.so.11.0 -> libcudart.so.11.3.109
> -rwxr-xr-x 1 root root    619192 Mai  4  2021 libcudart.so.11.3.109
> -rwxr-xr-x 1 root root 799883944 Mai 23 21:10 libggml-cuda.so
>
> /usr/local/lib/ollama/cuda_v12:
> total 2104184
> lrwxrwxrwx 1 root root         23 Mai 23 21:12 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
> -rwxr-xr-x 1 root root  751771728 Jul  8  2015 libcublasLt.so.12.8.4.1
> lrwxrwxrwx 1 root root         21 Mai 23 21:12 libcublas.so.12 -> libcublas.so.12.8.4.1
> -rwxr-xr-x 1 root root  116388640 Jul  8  2015 libcublas.so.12.8.4.1
> lrwxrwxrwx 1 root root         20 Mai 23 21:12 libcudart.so.12 -> libcudart.so.12.8.90
> -rwxr-xr-x 1 root root     728800 Jul  8  2015 libcudart.so.12.8.90
> ```
>
> Either try reinstalling `ollama` and `ollama-cuda`, or use the official install method shown [here](https://ollama.com/download/linux).

Thank you so much! After reinstalling from the official source, I can see that Ollama is running on my GPU now. It turns out the pacman packages do not ship the GPU and CPU backends.

![Image](https://github.com/user-attachments/assets/e582e31c-bf8c-4e1f-abc4-275107dc64c8)
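
For anyone verifying the same fix, `ollama ps` reports where a loaded model resides (a sketch; output layout may vary by version):

```
# Load the model, then ask the server where it was placed.
ollama run deepseek-r1:1.5b "hi" >/dev/null
ollama ps   # the PROCESSOR column should read "100% GPU" for a full offload
```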

Reference: github-starred/ollama#7122