[GH-ISSUE #11168] Ollama is not using available resources CPU + GPU #7366

Closed
opened 2026-04-12 19:25:39 -05:00 by GiteaMirror · 1 comment

Originally created by @faci2000 on GitHub (Jun 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11168

What is the issue?

I'm trying to run ollama on an AWS EC2 g4dn.12xlarge instance, but it is not reaching its full potential.

Using the deepseek-r1:32b model, GPU usage reaches only around 30%:
![Image](https://github.com/user-attachments/assets/1ad113cf-f31e-4290-88a6-254beb37fee1)

Average performance looks like this:
![Image](https://github.com/user-attachments/assets/1b72eca8-255d-496a-99b1-8ce0ff8b3382)
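
For anyone trying to reproduce the measurement, per-GPU utilization can be sampled while a request is in flight. A minimal sketch using standard nvidia-smi query flags (not part of the original report):

```shell
# Print index, GPU/memory utilization, and memory usage for each of the
# four T4s once per second while a chat request is running.
nvidia-smi --query-gpu=index,utilization.gpu,utilization.memory,memory.used,memory.total \
           --format=csv -l 1
```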

I've noticed that during the response, ollama fully uses only one core of the CPU (the container configuration is attached as [docker-inspect.txt](https://github.com/user-attachments/files/20864617/docker-inspect.txt)):
![Image](https://github.com/user-attachments/assets/9a7bbba2-4e68-4afd-b492-94a8a607adab)

Inside the Docker container:
![Image](https://github.com/user-attachments/assets/e7617e1b-acf8-4884-9a63-65832ee3adcf)

Outside:
![Image](https://github.com/user-attachments/assets/1777f1d9-a6ac-44fa-b050-59673d4561f5)

Nevertheless, the Docker container is not restricted in any way and is able to use the whole CPU:
![Image](https://github.com/user-attachments/assets/abe4ffd8-e8a2-4473-aa64-f419d422b4af)
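
For completeness, the absence of CPU limits can be verified from the host; a minimal sketch, assuming the container is named `ollama` (substitute the actual container name):

```shell
# All three fields should be 0 or empty when the container is unrestricted:
# NanoCpus (--cpus), CpuQuota (--cpu-quota), and CpusetCpus (--cpuset-cpus).
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.CpuQuota}} {{.HostConfig.CpusetCpus}}' ollama
```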

I've tried playing with parameters like num_thread, and tried setting the environment variables related to multicore usage, but without success.
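
For reference, one way num_thread can be passed is per request through the API's options field; a minimal sketch (the value 24 mirrors the --threads 24 visible in the runner command in the log below):

```shell
# num_thread controls the CPU threads of the runner; with all 65 layers
# offloaded to the GPUs it is not expected to raise GPU utilization.
curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:32b",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false,
  "options": {"num_thread": 24}
}'
```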

[logs.log](https://github.com/user-attachments/files/20864618/logs.log)
[lscpu.txt](https://github.com/user-attachments/files/20864619/lscpu.txt)
[nvidia-smi.txt](https://github.com/user-attachments/files/20864616/nvidia-smi.txt)

Relevant log output

```shell
time=2025-06-18T13:06:18.358Z level=INFO source=server.go:630 msg="llama runner started in 6.78 seconds"
[GIN] 2025/06/18 - 13:07:41 | 200 |         1m35s |      172.20.0.2 | POST     "/api/chat"
[GIN] 2025/06/18 - 13:08:03 | 200 |        14m21s |      172.20.0.2 | POST     "/api/chat"
[GIN] 2025/06/18 - 13:08:06 | 200 | 24.671084422s |      172.20.0.2 | POST     "/api/chat"
[GIN] 2025/06/18 - 13:08:08 | 200 |      35.566µs |      172.20.0.2 | GET      "/api/version"
[GIN] 2025/06/18 - 13:08:13 | 200 |      35.206µs |      172.20.0.2 | GET      "/api/version"
[GIN] 2025/06/18 - 13:08:15 | 200 |      33.833µs |      172.20.0.2 | GET      "/api/version"
[GIN] 2025/06/18 - 13:08:41 | 200 |      35.448µs |      172.20.0.2 | GET      "/api/version"
[GIN] 2025/06/18 - 13:08:55 | 200 |     877.804µs |      172.20.0.2 | GET      "/api/tags"
[GIN] 2025/06/18 - 13:08:55 | 200 |      29.454µs |      172.20.0.2 | GET      "/api/ps"
[GIN] 2025/06/18 - 13:08:59 | 500 | 52.956108807s |      172.20.0.2 | POST     "/api/chat"
time=2025-06-18T13:09:00.836Z level=INFO source=routes.go:1206 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-06-18T13:09:00.839Z level=INFO source=images.go:463 msg="total blobs: 34"
time=2025-06-18T13:09:00.839Z level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-06-18T13:09:00.840Z level=INFO source=routes.go:1259 msg="Listening on [::]:11434 (version 0.8.0)"
time=2025-06-18T13:09:00.840Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-06-18T13:09:01.833Z level=INFO source=types.go:130 msg="inference compute" id=GPU-76077695-d38e-6877-07c4-37549f5b7862 library=cuda variant=v12 compute=7.5 driver=12.2 name="Tesla T4" total="14.6 GiB" available="14.5 GiB"
time=2025-06-18T13:09:01.833Z level=INFO source=types.go:130 msg="inference compute" id=GPU-7fa2e651-fc04-8ece-2a8e-65348cc2a35b library=cuda variant=v12 compute=7.5 driver=12.2 name="Tesla T4" total="14.6 GiB" available="14.5 GiB"
time=2025-06-18T13:09:01.833Z level=INFO source=types.go:130 msg="inference compute" id=GPU-cd9b52f1-7931-d509-4ad7-fd1f543682b1 library=cuda variant=v12 compute=7.5 driver=12.2 name="Tesla T4" total="14.6 GiB" available="14.5 GiB"
time=2025-06-18T13:09:01.833Z level=INFO source=types.go:130 msg="inference compute" id=GPU-9321216a-55b1-30c4-eb0b-c8bad3a0b84c library=cuda variant=v12 compute=7.5 driver=12.2 name="Tesla T4" total="14.6 GiB" available="14.5 GiB"
[GIN] 2025/06/18 - 13:09:40 | 200 |     1.21188ms |      172.20.0.2 | GET      "/api/tags"
[GIN] 2025/06/18 - 13:09:40 | 200 |     103.587µs |      172.20.0.2 | GET      "/api/ps"
[GIN] 2025/06/18 - 13:09:41 | 200 |        65.1µs |      172.20.0.2 | GET      "/api/version"
time=2025-06-18T13:10:44.654Z level=INFO source=sched.go:804 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 library=cuda parallel=2 required="26.7 GiB"
time=2025-06-18T13:10:45.709Z level=INFO source=server.go:135 msg="system memory" total="186.7 GiB" free="183.0 GiB" free_swap="0 B"
time=2025-06-18T13:10:45.709Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=65 layers.split=17,16,16,16 memory.available="[14.5 GiB 14.5 GiB 14.5 GiB 14.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="26.7 GiB" memory.required.partial="26.7 GiB" memory.required.kv="2.0 GiB" memory.required.allocations="[7.1 GiB 6.5 GiB 6.5 GiB 6.5 GiB]" memory.weights.total="18.1 GiB" memory.weights.repeating="17.5 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="916.1 MiB" memory.graph.partial="916.1 MiB"
llama_model_loader: loaded meta data with 26 key-value pairs and 771 tensors from /root/.ollama/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 32B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 27648
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = deepseek-r1-qwen
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  321 tensors
llama_model_loader: - type q4_K:  385 tensors
llama_model_loader: - type q6_K:   65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 18.48 GiB (4.85 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 32.76 B
print_info: general.name     = DeepSeek R1 Distill Qwen 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151646 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151643 '<|end▁of▁sentence|>'
print_info: EOT token        = 151643 '<|end▁of▁sentence|>'
print_info: PAD token        = 151643 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-06-18T13:10:45.987Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 --ctx-size 8192 --batch-size 512 --n-gpu-layers 65 --threads 24 --parallel 2 --tensor-split 17,16,16,16 --port 41815"
time=2025-06-18T13:10:45.987Z level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-18T13:10:45.987Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-18T13:10:45.988Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-06-18T13:10:46.001Z level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: Tesla T4, compute capability 7.5, VMM: yes
  Device 1: Tesla T4, compute capability 7.5, VMM: yes
  Device 2: Tesla T4, compute capability 7.5, VMM: yes
  Device 3: Tesla T4, compute capability 7.5, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-06-18T13:10:46.665Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-06-18T13:10:46.665Z level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:41815"
time=2025-06-18T13:10:46.740Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (Tesla T4) - 14825 MiB free
llama_model_load_from_file_impl: using device CUDA1 (Tesla T4) - 14825 MiB free
llama_model_load_from_file_impl: using device CUDA2 (Tesla T4) - 14825 MiB free
llama_model_load_from_file_impl: using device CUDA3 (Tesla T4) - 14825 MiB free
llama_model_loader: loaded meta data with 26 key-value pairs and 771 tensors from /root/.ollama/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 32B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 27648
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = deepseek-r1-qwen
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  321 tensors
llama_model_loader: - type q4_K:  385 tensors
llama_model_loader: - type q6_K:   65 tensors
```

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.8.0

GiteaMirror added the bug label 2026-04-12 19:25:39 -05:00

@rick-github commented on GitHub (Jun 23, 2025):

https://github.com/ollama/ollama/issues/7648#issuecomment-2473561990


Reference: github-starred/ollama#7366