[GH-ISSUE #7765] Not using GPU after timeout unload of models with Docker image #67016

Closed
opened 2026-05-04 09:14:51 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @brauliobo on GitHub (Nov 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7765

What is the issue?

If I run sudo systemctl restart containerd docker and then run

 docker exec -it ollama ollama run llama3.2

Then it uses the GPU, as can be seen from nvidia-smi, the low CPU usage, and the prompt speed.

The container was created with --gpus all, and Docker is configured with nvidia-container-toolkit.

But after a little while, the model is unloaded. When I then run docker exec -it ollama ollama run llama3.2 again, it runs on the CPU.
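
For clarity, the reproduction sequence is roughly the following (the container name ollama and checking with nvidia-smi are from my setup):

```
# restart the container runtimes, then load the model once
sudo systemctl restart containerd docker
docker exec -it ollama ollama run llama3.2   # fast; GPU usage visible in nvidia-smi

# wait for the idle timeout to unload the model, then run again
docker exec -it ollama ollama run llama3.2   # slow; high CPU, no GPU usage
```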

OS

Linux, Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.4.2 from official docker image

GiteaMirror added the bug label 2026-05-04 09:14:51 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 20, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md) will aid in debugging.

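For a Docker install, something like the following should capture them (container name ollama assumed):

```
# docker logs carries both stdout and stderr from the ollama server
docker logs ollama 2>&1 | tee ollama-server.log
```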
Author
Owner

@brauliobo commented on GitHub (Nov 20, 2024):

Here it is:

2024/11/20 19:11:44 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-20T19:11:44.932Z level=INFO source=images.go:755 msg="total blobs: 0"
time=2024-11-20T19:11:44.932Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-20T19:11:44.932Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.2)"
time=2024-11-20T19:11:44.932Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-11-20T19:11:44.932Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-20T19:11:45.416Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-11-20T19:11:45.417Z level=INFO source=amd_linux.go:399 msg="no compatible amdgpu devices detected"
time=2024-11-20T19:11:45.417Z level=INFO source=types.go:123 msg="inference compute" id=GPU-467f0325-4701-7246-da8e-ea5f9e822dbf library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3060" total="11.7 GiB" available="7.1 GiB"
time=2024-11-20T19:16:03.897Z level=INFO source=download.go:175 msg="downloading dde5aa3fc5ff in 16 126 MB part(s)"
time=2024-11-20T19:25:47.063Z level=INFO source=download.go:175 msg="downloading 966de95ca8a6 in 1 1.4 KB part(s)"
time=2024-11-20T19:25:49.406Z level=INFO source=download.go:175 msg="downloading fcc5a6bec9da in 1 7.7 KB part(s)"
time=2024-11-20T19:25:51.786Z level=INFO source=download.go:175 msg="downloading a70ff7e570d9 in 1 6.0 KB part(s)"
time=2024-11-20T19:25:54.015Z level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)"
time=2024-11-20T19:25:56.674Z level=INFO source=download.go:175 msg="downloading 34bb5ab01051 in 1 561 B part(s)"
cuda driver library failed to get device context 800time=2024-11-20T19:25:58.956Z level=WARN source=gpu.go:441 msg="error looking up nvidia GPU memory"
time=2024-11-20T19:25:58.979Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-467f0325-4701-7246-da8e-ea5f9e822dbf parallel=4 available=7593525248 required="3.7 GiB"
cuda driver library failed to get device context 800time=2024-11-20T19:25:58.994Z level=WARN source=gpu.go:441 msg="error looking up nvidia GPU memory"
time=2024-11-20T19:25:58.994Z level=INFO source=server.go:105 msg="system memory" total="94.2 GiB" free="70.4 GiB" free_swap="0 B"
time=2024-11-20T19:25:58.994Z level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[7.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
time=2024-11-20T19:25:58.995Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 6 --parallel 4 --port 45637"
time=2024-11-20T19:25:58.998Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-20T19:25:58.998Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-20T19:25:58.999Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-20T19:25:59.083Z level=INFO source=runner.go:883 msg="starting go runner"
time=2024-11-20T19:25:59.083Z level=INFO source=runner.go:884 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=6
time=2024-11-20T19:25:59.083Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:45637"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
time=2024-11-20T19:25:59.250Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 24
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 3
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.21 B
llm_load_print_meta: model size       = 1.87 GiB (5.01 BPW) 
llm_load_print_meta: general.name     = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected
llm_load_tensors: ggml ctx size =    0.12 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =  1918.35 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_cuda_host_malloc: failed to allocate 896.00 MiB of pinned memory: no CUDA-capable device is detected
llama_kv_cache_init:        CPU KV buffer size =   896.00 MiB
llama_new_context_with_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
ggml_cuda_host_malloc: failed to allocate 2.00 MiB of pinned memory: no CUDA-capable device is detected
llama_new_context_with_model:        CPU  output buffer size =     2.00 MiB
ggml_cuda_host_malloc: failed to allocate 424.01 MiB of pinned memory: no CUDA-capable device is detected
llama_new_context_with_model:  CUDA_Host compute buffer size =   424.01 MiB
llama_new_context_with_model: graph nodes  = 902
llama_new_context_with_model: graph splits = 1
time=2024-11-20T19:25:59.751Z level=INFO source=server.go:601 msg="llama runner started in 0.75 seconds"
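
The telltale line above is ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected: the runner process starts, but CUDA initialization inside the container fails, so all buffers fall back to CPU. A quick way to check whether the container can still see the GPU at that point (a sketch, assuming the container is named ollama):

```
# works on the host, but may fail inside the container if GPU access
# was lost after the docker/containerd restart
docker exec -it ollama nvidia-smi
```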

After a stop/start of the container it works; logs below:

braulio @ bhavapower ➜  ~  docker logs ollama                                                                                                                                                                                                                                                                                                                                                      
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.                                                                                                                                                                                                                                                                                                                              
Your new public key is:                                                                                                                                                                                                                                                                                                                                                                            
                                                                                                                                                                                                                                                                                                                                                                                                   
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK4bI2uih7gjydskG+t/p1JuosWg/1eVsh25Z5D9qH1A                                                                                                                                                                                                                                                                                                                   
                                                                                                                                                                                                                                                                                                                                                                                                   
2024/11/20 19:11:44 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-20T19:11:44.932Z level=INFO source=images.go:755 msg="total blobs: 0"                                                                                                                                                                                                                                                                                                                 
time=2024-11-20T19:11:44.932Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"                                                                                                                                                                                                                                                                                                  
time=2024-11-20T19:11:44.932Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.2)"                                                                                                                                                                                                                                                                                       
time=2024-11-20T19:11:44.932Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"                                                                                                                                                                                                                                                        
time=2024-11-20T19:11:44.932Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"                                                                                                                                                                                                                                                                                                       
time=2024-11-20T19:11:45.416Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"                                                                                                                        
time=2024-11-20T19:11:45.417Z level=INFO source=amd_linux.go:399 msg="no compatible amdgpu devices detected"                                                                                                                                                                                                                                                                                       
time=2024-11-20T19:11:45.417Z level=INFO source=types.go:123 msg="inference compute" id=GPU-467f0325-4701-7246-da8e-ea5f9e822dbf library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3060" total="11.7 GiB" available="7.1 GiB"                                                                                                                                              
[GIN] 2024/11/20 - 19:15:59 | 200 |      19.255µs |       127.0.0.1 | HEAD     "/"                                                                                                                                                                                                                                                                                                                 
[GIN] 2024/11/20 - 19:15:59 | 404 |      84.878µs |       127.0.0.1 | POST     "/api/show"                                                                                                                                                                                                                                                                                                         
time=2024-11-20T19:16:03.897Z level=INFO source=download.go:175 msg="downloading dde5aa3fc5ff in 16 126 MB part(s)"                                                                                                                                                                                                                                                                                
time=2024-11-20T19:25:47.063Z level=INFO source=download.go:175 msg="downloading 966de95ca8a6 in 1 1.4 KB part(s)"                                                                                                                                                                                                                                                                                 
time=2024-11-20T19:25:49.406Z level=INFO source=download.go:175 msg="downloading fcc5a6bec9da in 1 7.7 KB part(s)"                                                                                                                                                                                                                                                                                 
time=2024-11-20T19:25:51.786Z level=INFO source=download.go:175 msg="downloading a70ff7e570d9 in 1 6.0 KB part(s)"                                                                                                                                                                                                                                                                                 
time=2024-11-20T19:25:54.015Z level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)"                                                                                                                                                                                                                                                                                   
time=2024-11-20T19:25:56.674Z level=INFO source=download.go:175 msg="downloading 34bb5ab01051 in 1 561 B part(s)"                                                                                                                                                                                                                                                                                  
[GIN] 2024/11/20 - 19:25:58 | 200 |         9m59s |       127.0.0.1 | POST     "/api/pull"                                                                                                                                                                                                                                                                                                         
[GIN] 2024/11/20 - 19:25:58 | 200 |   12.366763ms |       127.0.0.1 | POST     "/api/show"                                                                                                                                                                                                                                                                                                         
cuda driver library failed to get device context 800time=2024-11-20T19:25:58.956Z level=WARN source=gpu.go:441 msg="error looking up nvidia GPU memory"                                                                                                                                                                                                                                            
time=2024-11-20T19:25:58.979Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-467f0325-4701-7246-da8e-ea5f9e822dbf parallel=4 available=7593525248 required="3.7 GiB"                                                            
cuda driver library failed to get device context 800time=2024-11-20T19:25:58.994Z level=WARN source=gpu.go:441 msg="error looking up nvidia GPU memory"                                                                                                                                                                                                                                            
time=2024-11-20T19:25:58.994Z level=INFO source=server.go:105 msg="system memory" total="94.2 GiB" free="70.4 GiB" free_swap="0 B"                                                                                                                                                                                                                                                                 
time=2024-11-20T19:25:58.994Z level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[7.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
time=2024-11-20T19:25:58.995Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 6 --parallel 4 --port 45637"                                           
time=2024-11-20T19:25:58.998Z level=INFO source=sched.go:449 msg="loaded runners" count=1                                                                                                                                                                                                                                                                                                          
time=2024-11-20T19:25:58.998Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"                                                                                                                                                                                                                                                                                   
time=2024-11-20T19:25:58.999Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"                                                                                                                                                                                                                                                               
time=2024-11-20T19:25:59.083Z level=INFO source=runner.go:883 msg="starting go runner"                                                                                                                                                                                                                                                                                                             
time=2024-11-20T19:25:59.083Z level=INFO source=runner.go:884 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=6     
time=2024-11-20T19:25:59.083Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:45637"                                                                                                                                                                                                                                                                                                      
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))                                                                                                                                                                                    
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.                                                                                                                                                                                                                                                                                                  
llama_model_loader: - kv   0:                       general.architecture str              = llama                                                                                                                                                                                                                                                                                                  
llama_model_loader: - kv   1:                               general.type str              = model                                                                                                                                                                                                                                                                                                  
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct                                                                                                                                                                                                                                                                                  
llama_model_loader: - kv   3:                           general.finetune str              = Instruct                                                                                                                                                                                                                                                                                               
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2                                                                                                                                                                                                                                                                                              
llama_model_loader: - kv   5:                         general.size_label str              = 3B                                                                                                                                                                                                                                                                                                     
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...                                                                                                                                                                                                                                                               
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...                                                                                                                                                                                                                                                               
llama_model_loader: - kv   8:                          llama.block_count u32              = 28                                                                                                                                                                                                                                                                                                     
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072                                                                                                                                                                                                                                                                                                 
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072                                                                                                                                                                                                                                                                                                   
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192                                                                                                                                                                                                                                                                                                   
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24                                                                                                                                                                                                                                                                                                     
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8                                                                                                                                                                                                                                                                                                      
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000                                                                                                                                                                                                                                                                                          
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010                                                                                                                                                                                                                                                                                               
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128                                                                                                                                                                                                                                                                                                    
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128                                                                                                                                                                                                                                                                                                    
llama_model_loader: - kv  18:                          general.file_type u32              = 15                                                                                                                                                                                                                                                                                                     
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256                                                                                                                                                                                                                                                                                                 
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128                                                                                                                                                                                                                                                                                                    
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2                                                                                                                                                                                                                                                                                                   
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe                                                                                                                                                                                                                                                                                              
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...                                                                                                                                                                                                                                                               
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...                                                                                                                                                                                                                                                               
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...                                                                                                                                                                                                                                                                         
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000                                                                                                                                                                                                                                                                                                 
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009                                                                                                                                                                                                                                                                                                 
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...                                                                                                                                                                                                                                                              
llama_model_loader: - kv  29:               general.quantization_version u32              = 2                                                                                                                                                                                                                                                                                                      
llama_model_loader: - type  f32:   58 tensors                                                                                                                                                                                                                                                                                                                                                      
llama_model_loader: - type q4_K:  168 tensors                                                                                                                                                                                                                                                                                                                                                      
llama_model_loader: - type q6_K:   29 tensors                                                                                                                                                                                                                                                                                                                                                      
time=2024-11-20T19:25:59.250Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"                                                                                                                                                                                                                                                       
llm_load_vocab: special tokens cache size = 256                                                                                                                                                                                                                                                                                                                                                    
llm_load_vocab: token to piece cache size = 0.7999 MB                                                                                                                                                                                                                                                                                                                                              
llm_load_print_meta: format           = GGUF V3 (latest)                                                                                                                                                                                                                                                                                                                                           
llm_load_print_meta: arch             = llama                                                                                                                                                                                                                                                                                                                                                      
llm_load_print_meta: vocab type       = BPE                                                                                                                                                                                                                                                                                                                                                        
llm_load_print_meta: n_vocab          = 128256                                                                                                                                                                                                                                                                                                                                                     
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 24
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 3
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.21 B
llm_load_print_meta: model size       = 1.87 GiB (5.01 BPW) 
llm_load_print_meta: general.name     = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected
llm_load_tensors: ggml ctx size =    0.12 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =  1918.35 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_cuda_host_malloc: failed to allocate 896.00 MiB of pinned memory: no CUDA-capable device is detected
llama_kv_cache_init:        CPU KV buffer size =   896.00 MiB
llama_new_context_with_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
ggml_cuda_host_malloc: failed to allocate 2.00 MiB of pinned memory: no CUDA-capable device is detected
llama_new_context_with_model:        CPU  output buffer size =     2.00 MiB
ggml_cuda_host_malloc: failed to allocate 424.01 MiB of pinned memory: no CUDA-capable device is detected
llama_new_context_with_model:  CUDA_Host compute buffer size =   424.01 MiB
llama_new_context_with_model: graph nodes  = 902
llama_new_context_with_model: graph splits = 1
time=2024-11-20T19:25:59.751Z level=INFO source=server.go:601 msg="llama runner started in 0.75 seconds"
[GIN] 2024/11/20 - 19:25:59 | 200 |  823.467751ms |       127.0.0.1 | POST     "/api/generate"
2024/11/20 19:28:41 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-20T19:28:41.527Z level=INFO source=images.go:755 msg="total blobs: 6"
time=2024-11-20T19:28:41.527Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-20T19:28:41.527Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.2)"
time=2024-11-20T19:28:41.527Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 cpu cpu_avx]"
time=2024-11-20T19:28:41.527Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-20T19:28:42.011Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-11-20T19:28:42.012Z level=INFO source=amd_linux.go:399 msg="no compatible amdgpu devices detected"
time=2024-11-20T19:28:42.012Z level=INFO source=types.go:123 msg="inference compute" id=GPU-467f0325-4701-7246-da8e-ea5f9e822dbf library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3060" total="11.7 GiB" available="7.1 GiB"
[GIN] 2024/11/20 - 19:28:47 | 200 |      19.856µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/20 - 19:28:47 | 200 |   12.291472ms |       127.0.0.1 | POST     "/api/show"
time=2024-11-20T19:28:47.988Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-467f0325-4701-7246-da8e-ea5f9e822dbf parallel=4 available=7593525248 required="3.7 GiB"
time=2024-11-20T19:28:48.443Z level=INFO source=server.go:105 msg="system memory" total="94.2 GiB" free="69.9 GiB" free_swap="0 B"
time=2024-11-20T19:28:48.443Z level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[7.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
time=2024-11-20T19:28:48.444Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 6 --parallel 4 --port 39057"
time=2024-11-20T19:28:48.444Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-20T19:28:48.444Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-20T19:28:48.444Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-20T19:28:48.457Z level=INFO source=runner.go:883 msg="starting go runner"
time=2024-11-20T19:28:48.457Z level=INFO source=runner.go:884 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=6
time=2024-11-20T19:28:48.457Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:39057"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
time=2024-11-20T19:28:48.695Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 24
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 3
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.21 B
llm_load_print_meta: model size       = 1.87 GiB (5.01 BPW) 
llm_load_print_meta: general.name     = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.24 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =   308.23 MiB
llm_load_tensors:      CUDA0 buffer size =  1918.36 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   896.00 MiB
llama_new_context_with_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     2.00 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   424.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    22.01 MiB
llama_new_context_with_model: graph nodes  = 902
llama_new_context_with_model: graph splits = 2
time=2024-11-20T19:28:50.201Z level=INFO source=server.go:601 msg="llama runner started in 1.76 seconds"
[GIN] 2024/11/20 - 19:28:50 | 200 |   2.70827177s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2024/11/20 - 19:28:52 | 200 |  1.219114813s |       127.0.0.1 | POST     "/api/chat"
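
For anyone comparing the two runs: whether the loaded runner actually landed on the GPU can be checked from inside the container (assuming the container is named `ollama`, as in the commands above):

```
# in the working case, the ollama_llama_server process shows up here
docker exec -it ollama nvidia-smi
```

In the failing case no runner process appears in `nvidia-smi`, and CPU usage spikes during generation instead.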

Author
Owner

@rick-github commented on GitHub (Nov 20, 2024):

cuda driver library failed to get device context 800
time=2024-11-20T19:25:58.956Z level=WARN source=gpu.go:441 msg="error looking up nvidia GPU memory"

Try the workaround in https://github.com/ollama/ollama/pull/7519. It's technically a workaround for AMD GPUs, but it sounds a lot like what you are experiencing.
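
A minimal sketch of that workaround as applied in the next comment, assuming `/etc/docker/daemon.json` doesn't already contain other settings (merge by hand if it does):

```
# switch Docker's cgroup driver to cgroupfs (the change suggested in ollama/ollama#7519)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
sudo systemctl restart docker
```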

Author
Owner

@brauliobo commented on GitHub (Nov 20, 2024):

Thanks! I've just put `"exec-opts": ["native.cgroupdriver=cgroupfs"]` in `/etc/docker/daemon.json`. I've also used `-e OLLAMA_KEEP_ALIVE=-1` to avoid restarts. Closing for now.
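
Put together, a container started with both settings could look like this (a sketch; the volume and port mappings follow the official image's documented defaults):

```
# OLLAMA_KEEP_ALIVE=-1 keeps models loaded indefinitely after a completion
docker run -d --gpus all \
  -e OLLAMA_KEEP_ALIVE=-1 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```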

Author
Owner

@rick-github commented on GitHub (Nov 20, 2024):

Note that `OLLAMA_KEEP_ALIVE=1` means keep the model loaded for 1 second after a completion is finished. If you want the model to be always loaded (which is what I think you mean by "avoid restarts"), you want `OLLAMA_KEEP_ALIVE=-1`.
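
The same setting can also be applied per request rather than server-wide; a sketch, assuming the server is reachable on its default port 11434:

```
# an empty /api/generate request loads the model; keep_alive=-1 pins it in memory
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "keep_alive": -1}'
# verify what is loaded and for how long
docker exec -it ollama ollama ps
```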

Reference: github-starred/ollama#67016