[GH-ISSUE #9562] CUDA error: out of memory on Windows when I use AnythingLLM and Ollama #6236

Closed
opened 2026-04-12 17:39:19 -05:00 by GiteaMirror · 14 comments

Originally created by @Steelzy on GitHub (Mar 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9562

What is the issue?

I have updated both Ollama and AnythingLLM to the latest versions, and my driver is also the latest, 572.70. I still get this CUDA out-of-memory error at runtime, even though, as far as I can tell, neither my VRAM nor my system RAM is exhausted. (Intel i9-14900HX, RTX 4070)

Relevant log output

time=2025-03-07T10:10:42.193+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 33 key-value pairs and 771 tensors from D:\Ollama\blobs\sha256-c62ccde5630c20c8a9cf601861d31977d07450cad6dfdf1c661aab307107bddb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = QwQ 32B
llama_model_loader: - kv   3:                           general.basename str              = QwQ
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                       general.license.link str              = https://huggingface.co/Qwen/QWQ-32B/b...
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Qwen2.5 32B
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-32B
llama_model_loader: - kv  11:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  12:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  13:                          qwen2.block_count u32              = 64
llama_model_loader: - kv  14:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv  15:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  16:                  qwen2.feed_forward_length u32              = 27648
llama_model_loader: - kv  17:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  18:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  20:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  27:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  29:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  30:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  31:               general.quantization_version u32              = 2
llama_model_loader: - kv  32:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  321 tensors
llama_model_loader: - type q4_K:  385 tensors
llama_model_loader: - type q6_K:   65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 18.48 GiB (4.85 BPW) 
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 5120
print_info: n_layer          = 64
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 27648
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 32B
print_info: model params     = 32.76 B
print_info: general.name     = QwQ 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
warning: SetProcessWorkingSetSize failed: Insufficient system resources exist to complete the requested service.

load_tensors: offloading 14 repeating layers to GPU
load_tensors: offloaded 14/65 layers to GPU
load_tensors:    CUDA_Host model buffer size = 14484.61 MiB
load_tensors:        CUDA0 model buffer size =  4023.74 MiB
load_tensors:          CPU model buffer size =   417.66 MiB
CUDA error: out of memory
  current device: 0, in function ggml_backend_cuda_device_get_memory at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:2898
  cudaMemGetInfo(free, total)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:73: CUDA error
time=2025-03-07T10:10:55.475+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server not responding"
time=2025-03-07T10:10:56.214+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-07T10:10:57.059+08:00 level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 0xc0000409"
time=2025-03-07T10:10:57.219+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: CUDA error"
[GIN] 2025/03/07 - 10:10:57 | 500 |   15.5583504s |       127.0.0.1 | POST     "/api/chat"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.13

GiteaMirror added the bug label 2026-04-12 17:39:19 -05:00

@infinitymask8 commented on GitHub (Mar 7, 2025):

Same problem, but my GPU is an RTX 2060.


@Steelzy commented on GitHub (Mar 7, 2025):

> Same problem, but my GPU is an RTX 2060.

Did you find any solution? I found that when I don't use Lenovo's Game Mode, it works fine, but when I choose Game Mode or Super Mode, this error occurs.


@rick-github commented on GitHub (Mar 7, 2025):

Some general steps for dealing with OOMs [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288).

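The linked comment is not reproduced in this thread. As a rough sketch of the kind of knobs it points at, here are environment variables that also appear in the server logs later in this thread; the values are illustrative assumptions, not settings taken from the linked comment (run in PowerShell, then restart the Ollama app so they take effect):

```shell
# Reserve extra VRAM headroom that Ollama will leave unallocated (in bytes);
# 536870912 = 512 MiB is an example value, not a recommendation.
setx OLLAMA_GPU_OVERHEAD 536870912

# Shrink the KV cache's VRAM footprint by quantizing it;
# quantized cache types require flash attention to be enabled.
setx OLLAMA_FLASH_ATTENTION 1
setx OLLAMA_KV_CACHE_TYPE q8_0
```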

@Steelzy commented on GitHub (Mar 7, 2025):

> Some general steps for dealing with OOMs [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288).

Thanks for your advice! I will try it.


@Steelzy commented on GitHub (Mar 7, 2025):

> Some general steps for dealing with OOMs [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288).

I have configured the settings according to the link you provided, but I'm still encountering the error "llama runner process has terminated: CUDA error".


@Steelzy commented on GitHub (Mar 7, 2025):

> Some general steps for dealing with OOMs [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288).

time=2025-03-07T20:17:06.990+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-07T20:17:06.991+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-07T20:17:06.991+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-07T20:17:06.991+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-07T20:17:07.021+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-07T20:17:07.021+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-07T20:17:07.021+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-07T20:17:07.021+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-07T20:17:07.044+08:00 level=INFO source=server.go:97 msg="system memory" total="31.7 GiB" free="16.1 GiB" free_swap="11.4 GiB"
time=2025-03-07T20:17:07.067+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-07T20:17:07.067+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-07T20:17:07.067+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-07T20:17:07.067+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-07T20:17:07.068+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=65 layers.offload=11 layers.split="" memory.available="[5.6 GiB]" memory.gpu_overhead="512.0 MiB" memory.required.full="21.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="512.0 MiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="18.0 GiB" memory.weights.repeating="17.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="307.0 MiB" memory.graph.partial="916.1 MiB"
time=2025-03-07T20:17:07.068+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-07T20:17:07.068+08:00 level=WARN source=ggml.go:136 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-07T20:17:07.068+08:00 level=INFO source=server.go:182 msg="enabling flash attention"
time=2025-03-07T20:17:07.068+08:00 level=WARN source=server.go:190 msg="kv cache type not supported by model" type=""
time=2025-03-07T20:17:07.101+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\eng89\AppData\Local\Programs\Ollama\ollama.exe runner --model D:\Ollama\blobs\sha256-c62ccde5630c20c8a9cf601861d31977d07450cad6dfdf1c661aab307107bddb --ctx-size 2048 --batch-size 512 --n-gpu-layers 11 --threads 8 --flash-attn --no-mmap --mlock --parallel 1 --port 60689"
time=2025-03-07T20:17:07.116+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-07T20:17:07.116+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
time=2025-03-07T20:17:07.117+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-07T20:17:07.166+08:00 level=INFO source=runner.go:931 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Laptop GPU, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from C:\Users\eng89\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from C:\Users\eng89\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
time=2025-03-07T20:17:07.487+08:00 level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(clang)" threads=8
time=2025-03-07T20:17:07.488+08:00 level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:60689"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4070 Laptop GPU) - 7056 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 771 tensors from D:\Ollama\blobs\sha256-c62ccde5630c20c8a9cf601861d31977d07450cad6dfdf1c661aab307107bddb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = QwQ 32B
llama_model_loader: - kv 3: general.basename str = QwQ
llama_model_loader: - kv 4: general.size_label str = 32B
llama_model_loader: - kv 5: general.license str = apache-2.0
llama_model_loader: - kv 6: general.license.link str = https://huggingface.co/Qwen/QWQ-32B/b...
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = Qwen2.5 32B
llama_model_loader: - kv 9: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-32B
llama_model_loader: - kv 11: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 12: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 13: qwen2.block_count u32 = 64
llama_model_loader: - kv 14: qwen2.context_length u32 = 131072
llama_model_loader: - kv 15: qwen2.embedding_length u32 = 5120
llama_model_loader: - kv 16: qwen2.feed_forward_length u32 = 27648
llama_model_loader: - kv 17: qwen2.attention.head_count u32 = 40
llama_model_loader: - kv 18: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 19: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 20: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
time=2025-03-07T20:17:07.622+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 29: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 30: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 31: general.quantization_version u32 = 2
llama_model_loader: - kv 32: general.file_type u32 = 15
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type q4_K: 385 tensors
llama_model_loader: - type q6_K: 65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 18.48 GiB (4.85 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen2
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 5120
print_info: n_layer = 64
print_info: n_head = 40
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 5
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 27648
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 32B
print_info: model params = 32.76 B
print_info: general.name = QwQ 32B
print_info: vocab type = BPE
print_info: n_vocab = 152064
print_info: n_merges = 151387
print_info: BOS token = 151643 '<|endoftext|>'
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151643 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
warning: failed to VirtualLock 16049074176-byte buffer (after previously locking 0 bytes): Insufficient system resources exist to complete the requested service.

CUDA error: out of memory
current device: 0, in function ggml_backend_cuda_device_get_memory at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:2898
cudaMemGetInfo(free, total)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:73: CUDA error
time=2025-03-07T20:17:11.836+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server not responding"
time=2025-03-07T20:17:14.794+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-07T20:17:15.277+08:00 level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 0xc0000409"
time=2025-03-07T20:17:15.295+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: CUDA error"
[GIN] 2025/03/07 - 20:17:15 | 500 | 8.5488182s | 127.0.0.1 | POST "/api/chat"


@rick-github commented on GitHub (Mar 7, 2025):

Try a bigger overhead.
Try reducing `num_gpu`.

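`num_gpu` is a model option rather than an `ollama` command-line flag. A minimal sketch of two ways to set it, assuming the model tag `qwq` and an illustrative value of 8 (the scheduler picked 11 layers in the log above, so a smaller value leaves more VRAM free):

```shell
# Per session, inside the interactive CLI:
#   ollama run qwq
#   >>> /set parameter num_gpu 8

# Or persisted as a derived model. With a Modelfile containing:
#   FROM qwq
#   PARAMETER num_gpu 8
# create and use the new tag:
ollama create qwq-lowvram -f Modelfile
ollama run qwq-lowvram
```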

@Steelzy commented on GitHub (Mar 7, 2025):

> Try a bigger overhead. Try reducing `num_gpu`.

I'm very sorry, but is the `num_gpu` parameter you mentioned set on the command line? I'm using AnythingLLM to call models from Ollama, so how should I configure it? The other parameters you mentioned, like OLLAMA_GPU_OVERHEAD, I added to the Windows environment variables. Let me put it this way: when my computer is not in performance mode, AnythingLLM can call the qwq model from Ollama without any CUDA OOM error. After enabling performance mode, the first call to a large model through AnythingLLM works fine, but if I try again after about ten minutes, I get an OOM error. I don't think this is an issue with my computer, is it? Could there be something wrong with Ollama's logic, or is something else going on?


@rick-github commented on GitHub (Mar 7, 2025):

https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650

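The linked comment itself is not quoted here. For context, one way a client such as AnythingLLM can end up with a specific `num_gpu` is to pass it per request through Ollama's REST API `options` object; a hedged sketch (the model tag and value are illustrative):

```shell
# Sketch: /api/generate accepts per-request options, including num_gpu.
curl http://localhost:11434/api/generate -d '{
  "model": "qwq",
  "prompt": "hello",
  "options": { "num_gpu": 8 }
}'
```

If the client does not expose request options, baking `num_gpu` into a derived model (as in the Modelfile sketch above) is an alternative.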

@Steelzy commented on GitHub (Mar 7, 2025):

> [#6950 (comment)](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650)

Thanks for your help, I will try it.


@infinitymask8 commented on GitHub (Mar 7, 2025):

2025-03-06 22:55:20 2025/03/07 04:55:20 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-03-06 22:55:20 time=2025-03-07T04:55:20.782Z level=INFO source=images.go:432 msg="total blobs: 20"
2025-03-06 22:55:20 time=2025-03-07T04:55:20.782Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2025-03-06 22:55:20 time=2025-03-07T04:55:20.784Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
2025-03-06 22:55:20 time=2025-03-07T04:55:20.784Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-03-06 22:55:21 time=2025-03-07T04:55:21.301Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB"
2025-03-06 22:58:35 time=2025-03-07T04:58:35.077Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.078Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.079Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.079Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 22:55:24 [GIN] 2025/03/07 - 04:55:24 | 200 | 69.702µs | 172.18.0.7 | HEAD "/"
2025-03-06 22:55:25 [GIN] 2025/03/07 - 04:55:25 | 200 | 1.351738055s | 172.18.0.7 | POST "/api/pull"
2025-03-06 22:55:25 [GIN] 2025/03/07 - 04:55:25 | 200 | 24.446µs | 172.18.0.7 | HEAD "/"
2025-03-06 22:55:26 [GIN] 2025/03/07 - 04:55:26 | 200 | 642.743017ms | 172.18.0.7 | POST "/api/pull"
2025-03-06 22:58:24 [GIN] 2025/03/07 - 04:58:24 | 200 | 3.059855ms | 172.18.0.1 | GET "/api/tags"
2025-03-06 22:58:24 [GIN] 2025/03/07 - 04:58:24 | 200 | 126.731µs | 172.18.0.1 | GET "/api/version"
2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.1 GiB" free_swap="3.0 GiB"
2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 22:58:35 time=2025-03-07T04:58:35.254Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 34845"
2025-03-06 22:58:35 time=2025-03-07T04:58:35.255Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 22:58:35 time=2025-03-07T04:58:35.255Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 22:58:35 time=2025-03-07T04:58:35.255Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 22:58:35 time=2025-03-07T04:58:35.276Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 22:58:35 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 22:58:35 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 22:58:35 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 22:58:35 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 22:58:35 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 22:58:35 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 22:58:35 time=2025-03-07T04:58:35.889Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 22:58:35 time=2025-03-07T04:58:35.905Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:34845"
2025-03-06 22:58:36 time=2025-03-07T04:58:36.009Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 22:58:36 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 22:58:36 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 22:58:36 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 22:58:36 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 22:58:36 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 22:58:36 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 22:58:36 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 22:58:36 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 22:58:36 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 22:58:36 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 22:58:36 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 22:58:36 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 22:58:36 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 22:58:36 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 22:58:36 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 22:58:36 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 22:58:36 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 22:58:36 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 22:58:36 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 22:58:36 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 22:58:36 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 22:58:36 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 22:58:36 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 22:58:36 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 22:58:36 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 22:58:36 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 22:58:36 llama_model_loader: - type f32: 65 tensors
2025-03-06 22:58:36 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 22:58:36 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 22:58:36 print_info: file format = GGUF V3 (latest)
2025-03-06 22:58:36 print_info: file type = Q4_0
2025-03-06 22:58:36 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 22:58:36 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 22:58:36 load: special tokens cache size = 3
2025-03-06 22:58:36 load: token to piece cache size = 0.1684 MB
2025-03-06 22:58:36 print_info: arch = llama
2025-03-06 22:58:36 print_info: vocab_only = 0
2025-03-06 22:58:36 print_info: n_ctx_train = 4096
2025-03-06 22:58:36 print_info: n_embd = 4096
2025-03-06 22:58:36 print_info: n_layer = 32
2025-03-06 22:58:36 print_info: n_head = 32
2025-03-06 22:58:36 print_info: n_head_kv = 32
2025-03-06 22:58:36 print_info: n_rot = 128
2025-03-06 22:58:36 print_info: n_swa = 0
2025-03-06 22:58:36 print_info: n_embd_head_k = 128
2025-03-06 22:58:36 print_info: n_embd_head_v = 128
2025-03-06 22:58:36 print_info: n_gqa = 1
2025-03-06 22:58:36 print_info: n_embd_k_gqa = 4096
2025-03-06 22:58:36 print_info: n_embd_v_gqa = 4096
2025-03-06 22:58:36 print_info: f_norm_eps = 0.0e+00
2025-03-06 22:58:36 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 22:58:36 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 22:58:36 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 22:58:36 print_info: f_logit_scale = 0.0e+00
2025-03-06 22:58:36 print_info: n_ff = 11008
2025-03-06 22:58:36 print_info: n_expert = 0
2025-03-06 22:58:36 print_info: n_expert_used = 0
2025-03-06 22:58:36 print_info: causal attn = 1
2025-03-06 22:58:36 print_info: pooling type = 0
2025-03-06 22:58:36 print_info: rope type = 0
2025-03-06 22:58:36 print_info: rope scaling = linear
2025-03-06 22:58:36 print_info: freq_base_train = 10000.0
2025-03-06 22:58:36 print_info: freq_scale_train = 1
2025-03-06 22:58:36 print_info: n_ctx_orig_yarn = 4096
2025-03-06 22:58:36 print_info: rope_finetuned = unknown
2025-03-06 22:58:36 print_info: ssm_d_conv = 0
2025-03-06 22:58:36 print_info: ssm_d_inner = 0
2025-03-06 22:58:36 print_info: ssm_d_state = 0
2025-03-06 22:58:36 print_info: ssm_dt_rank = 0
2025-03-06 22:58:36 print_info: ssm_dt_b_c_rms = 0
2025-03-06 22:58:36 print_info: model type = 7B
2025-03-06 22:58:36 print_info: model params = 6.74 B
2025-03-06 22:58:36 print_info: general.name = LLaMA v2
2025-03-06 22:58:36 print_info: vocab type = SPM
2025-03-06 22:58:36 print_info: n_vocab = 32000
2025-03-06 22:58:36 print_info: n_merges = 0
2025-03-06 22:58:36 print_info: BOS token = 1 '<s>'
2025-03-06 22:58:36 print_info: EOS token = 2 '</s>'
2025-03-06 22:58:36 print_info: UNK token = 0 '<unk>'
2025-03-06 22:58:36 print_info: LF token = 13 '<0x0A>'
2025-03-06 22:58:36 print_info: EOG token = 2 '</s>'
2025-03-06 22:58:36 print_info: max token length = 48
2025-03-06 22:58:36 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 22:58:48 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 22:58:48 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 22:58:48 llama_model_load_from_file_impl: failed to load model
2025-03-06 22:58:48 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 22:58:48
2025-03-06 22:58:48 goroutine 23 [running]:
2025-03-06 22:58:48 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc0005983a0, 0x0}, ...)
2025-03-06 22:58:48 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 22:58:48 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 22:58:48 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 22:58:48 time=2025-03-07T04:58:48.719Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 22:58:48 time=2025-03-07T04:58:48.816Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 22:58:54 time=2025-03-07T04:58:54.014Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.19787795 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 22:58:54 time=2025-03-07T04:58:54.265Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.448804272 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 22:58:54 time=2025-03-07T04:58:54.515Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.698363967 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:08:55 2025/03/07 05:08:55 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-03-06 23:08:55 time=2025-03-07T05:08:55.042Z level=INFO source=images.go:432 msg="total blobs: 20"
2025-03-06 23:08:55 time=2025-03-07T05:08:55.043Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2025-03-06 23:08:55 time=2025-03-07T05:08:55.045Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
2025-03-06 23:08:55 time=2025-03-07T05:08:55.047Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-03-06 23:08:55 time=2025-03-07T05:08:55.615Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB"
2025-03-06 23:09:33 time=2025-03-07T05:09:33.611Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 22:58:48 [GIN] 2025/03/07 - 04:58:48 | 500 | 13.95591217s | 172.18.0.1 | POST "/api/chat"
2025-03-06 23:08:57 [GIN] 2025/03/07 - 05:08:57 | 200 | 4.297383ms | 172.18.0.1 | GET "/"
2025-03-06 23:08:58 [GIN] 2025/03/07 - 05:08:58 | 404 | 6.432µs | 172.18.0.1 | GET "/favicon.ico"
2025-03-06 23:08:58 [GIN] 2025/03/07 - 05:08:58 | 200 | 42.16µs | 172.18.0.7 | HEAD "/"
2025-03-06 23:09:00 [GIN] 2025/03/07 - 05:09:00 | 200 | 2.575771145s | 172.18.0.7 | POST "/api/pull"
2025-03-06 23:09:00 [GIN] 2025/03/07 - 05:09:00 | 200 | 25.017µs | 172.18.0.7 | HEAD "/"
2025-03-06 23:09:01 [GIN] 2025/03/07 - 05:09:01 | 200 | 434.123739ms | 172.18.0.7 | POST "/api/pull"
2025-03-06 23:09:33 [GIN] 2025/03/07 - 05:09:33 | 200 | 2.553231ms | 172.18.0.1 | GET "/api/tags"
2025-03-06 23:09:47 [GIN] 2025/03/07 - 05:09:47 | 500 | 14.257638542s | 172.18.0.1 | POST "/api/chat"
2025-03-06 23:12:35 [GIN] 2025/03/07 - 05:12:35 | 200 | 44.434µs | 127.0.0.1 | GET "/api/version"
2025-03-06 23:12:47 [GIN] 2025/03/07 - 05:12:47 | 200 | 22.713µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:12:47 [GIN] 2025/03/07 - 05:12:47 | 200 | 820.322548ms | 127.0.0.1 | POST "/api/pull"
2025-03-06 23:14:46 [GIN] 2025/03/07 - 05:14:46 | 200 | 51.698µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:14:46 [GIN] 2025/03/07 - 05:14:46 | 200 | 9.700237ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:14:48 [GIN] 2025/03/07 - 05:14:48 | 500 | 1.734769224s | 127.0.0.1 | POST "/api/generate"
2025-03-06 23:17:24 [GIN] 2025/03/07 - 05:17:24 | 200 | 33.013µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:17:24 [GIN] 2025/03/07 - 05:17:24 | 200 | 9.672037ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:17:25 [GIN] 2025/03/07 - 05:17:25 | 500 | 1.68572623s | 127.0.0.1 | POST "/api/generate"
2025-03-06 23:18:11 [GIN] 2025/03/07 - 05:18:11 | 200 | 30.618µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:18:11 [GIN] 2025/03/07 - 05:18:11 | 200 | 1.646124ms | 127.0.0.1 | POST "/api/generate"
2025-03-06 23:19:37 [GIN] 2025/03/07 - 05:19:37 | 200 | 79.491µs | 172.18.0.7 | HEAD "/"
2025-03-06 23:19:40 [GIN] 2025/03/07 - 05:19:40 | 200 | 3.448854831s | 172.18.0.7 | POST "/api/pull"
2025-03-06 23:19:40 [GIN] 2025/03/07 - 05:19:40 | 200 | 23.885µs | 172.18.0.7 | HEAD "/"
2025-03-06 23:19:41 [GIN] 2025/03/07 - 05:19:41 | 200 | 350.179263ms | 172.18.0.7 | POST "/api/pull"
2025-03-06 23:19:56 [GIN] 2025/03/07 - 05:19:56 | 200 | 650.89µs | 172.18.0.1 | GET "/api/tags"
2025-03-06 23:19:56 [GIN] 2025/03/07 - 05:19:56 | 200 | 49.995µs | 172.18.0.1 | GET "/api/version"
2025-03-06 23:20:15 [GIN] 2025/03/07 - 05:20:15 | 200 | 29.506µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:33:07 [GIN] 2025/03/07 - 05:33:07 | 200 | 12m51s | 127.0.0.1 | POST "/api/pull"
2025-03-06 23:33:18 [GIN] 2025/03/07 - 05:33:18 | 200 | 26.801µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:33:18 [GIN] 2025/03/07 - 05:33:18 | 200 | 9.816206ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:33:20 [GIN] 2025/03/07 - 05:33:20 | 500 | 1.963073355s | 127.0.0.1 | POST "/api/generate"
2025-03-06 23:33:28 [GIN] 2025/03/07 - 05:33:28 | 200 | 24.096µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:33:28 [GIN] 2025/03/07 - 05:33:28 | 200 | 11.885626ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:09:33 time=2025-03-07T05:09:33.614Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.614Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB"
2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:09:33 time=2025-03-07T05:09:33.803Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 34771"
2025-03-06 23:09:33 time=2025-03-07T05:09:33.804Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:09:33 time=2025-03-07T05:09:33.804Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:09:33 time=2025-03-07T05:09:33.805Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:09:33 time=2025-03-07T05:09:33.826Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:09:34 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:09:34 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:09:34 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:09:34 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:09:34 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:09:34 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:09:34 time=2025-03-07T05:09:34.441Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:09:34 time=2025-03-07T05:09:34.459Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:34771"
2025-03-06 23:09:34 time=2025-03-07T05:09:34.559Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:09:34 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:09:34 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:09:34 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:09:34 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:09:34 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:09:34 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:09:34 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:09:34 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:09:34 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:09:34 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:09:34 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:09:34 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:09:34 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:09:34 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:09:34 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:09:34 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:09:34 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:09:34 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:09:34 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:09:34 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:09:34 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:09:34 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:09:34 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:09:34 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:09:34 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:09:34 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:09:34 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:09:34 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:09:34 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:09:34 print_info: file format = GGUF V3 (latest)
2025-03-06 23:09:34 print_info: file type = Q4_0
2025-03-06 23:09:34 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:09:34 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:09:34 load: special tokens cache size = 3
2025-03-06 23:09:34 load: token to piece cache size = 0.1684 MB
2025-03-06 23:09:34 print_info: arch = llama
2025-03-06 23:09:34 print_info: vocab_only = 0
2025-03-06 23:09:34 print_info: n_ctx_train = 4096
2025-03-06 23:09:34 print_info: n_embd = 4096
2025-03-06 23:09:34 print_info: n_layer = 32
2025-03-06 23:09:34 print_info: n_head = 32
2025-03-06 23:09:34 print_info: n_head_kv = 32
2025-03-06 23:09:34 print_info: n_rot = 128
2025-03-06 23:09:34 print_info: n_swa = 0
2025-03-06 23:09:34 print_info: n_embd_head_k = 128
2025-03-06 23:09:34 print_info: n_embd_head_v = 128
2025-03-06 23:09:34 print_info: n_gqa = 1
2025-03-06 23:09:34 print_info: n_embd_k_gqa = 4096
2025-03-06 23:09:34 print_info: n_embd_v_gqa = 4096
2025-03-06 23:09:34 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:09:34 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:09:34 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:09:34 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:09:34 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:09:34 print_info: n_ff = 11008
2025-03-06 23:09:34 print_info: n_expert = 0
2025-03-06 23:09:34 print_info: n_expert_used = 0
2025-03-06 23:09:34 print_info: causal attn = 1
2025-03-06 23:09:34 print_info: pooling type = 0
2025-03-06 23:09:34 print_info: rope type = 0
2025-03-06 23:09:34 print_info: rope scaling = linear
2025-03-06 23:09:34 print_info: freq_base_train = 10000.0
2025-03-06 23:09:34 print_info: freq_scale_train = 1
2025-03-06 23:09:34 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:09:34 print_info: rope_finetuned = unknown
2025-03-06 23:09:34 print_info: ssm_d_conv = 0
2025-03-06 23:09:34 print_info: ssm_d_inner = 0
2025-03-06 23:09:34 print_info: ssm_d_state = 0
2025-03-06 23:09:34 print_info: ssm_dt_rank = 0
2025-03-06 23:09:34 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:09:34 print_info: model type = 7B
2025-03-06 23:09:34 print_info: model params = 6.74 B
2025-03-06 23:09:34 print_info: general.name = LLaMA v2
2025-03-06 23:09:34 print_info: vocab type = SPM
2025-03-06 23:09:34 print_info: n_vocab = 32000
2025-03-06 23:09:34 print_info: n_merges = 0
2025-03-06 23:09:34 print_info: BOS token = 1 '<s>'
2025-03-06 23:09:34 print_info: EOS token = 2 '</s>'
2025-03-06 23:09:34 print_info: UNK token = 0 '<unk>'
2025-03-06 23:09:34 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:09:34 print_info: EOG token = 2 '</s>'
2025-03-06 23:09:34 print_info: max token length = 48
2025-03-06 23:09:34 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:09:46 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:09:47 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:09:47 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:09:47 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:09:47
2025-03-06 23:09:47 goroutine 50 [running]:
2025-03-06 23:09:47 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc00062e1f0, 0x0}, ...)
2025-03-06 23:09:47 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:09:47 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:09:47 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:09:47 time=2025-03-07T05:09:47.467Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:09:47 time=2025-03-07T05:09:47.620Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 23:09:52 time=2025-03-07T05:09:52.826Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.206297755 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:09:53 time=2025-03-07T05:09:53.076Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.45587692 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:09:53 time=2025-03-07T05:09:53.326Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.706447279 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:14:26 2025/03/07 05:14:26 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-03-06 23:14:26 time=2025-03-07T05:14:26.659Z level=INFO source=images.go:432 msg="total blobs: 20"
2025-03-06 23:14:26 time=2025-03-07T05:14:26.659Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2025-03-06 23:14:26 time=2025-03-07T05:14:26.659Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
2025-03-06 23:14:26 time=2025-03-07T05:14:26.660Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-03-06 23:14:27 time=2025-03-07T05:14:27.077Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB"
2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.586Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.586Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.587Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.587Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.780Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="10.4 GiB" free_swap="3.0 GiB"
2025-03-06 23:14:46 time=2025-03-07T05:14:46.780Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.780Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:14:46 time=2025-03-07T05:14:46.781Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:14:46 time=2025-03-07T05:14:46.781Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 44319"
2025-03-06 23:14:46 time=2025-03-07T05:14:46.782Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:14:46 time=2025-03-07T05:14:46.783Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:14:46 time=2025-03-07T05:14:46.785Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:14:46 time=2025-03-07T05:14:46.804Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:14:46 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:14:46 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:14:46 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:14:46 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:14:46 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:14:46 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:14:46 time=2025-03-07T05:14:46.924Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:14:46 time=2025-03-07T05:14:46.942Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:44319"
2025-03-06 23:14:47 time=2025-03-07T05:14:47.036Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:14:47 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:14:47 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:14:47 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:14:47 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:14:47 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:14:47 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:14:47 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:14:47 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:14:47 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:14:47 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:14:47 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:14:47 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:14:47 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:14:47 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:14:47 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:14:47 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:14:47 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:14:47 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:14:47 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:14:47 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:14:47 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:14:47 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:14:47 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:14:47 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:14:47 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:14:47 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:14:47 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:14:47 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:14:47 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:14:47 print_info: file format = GGUF V3 (latest)
2025-03-06 23:14:47 print_info: file type = Q4_0
2025-03-06 23:14:47 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:14:47 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:14:47 load: special tokens cache size = 3
2025-03-06 23:14:47 load: token to piece cache size = 0.1684 MB
2025-03-06 23:14:47 print_info: arch = llama
2025-03-06 23:14:47 print_info: vocab_only = 0
2025-03-06 23:14:47 print_info: n_ctx_train = 4096
2025-03-06 23:14:47 print_info: n_embd = 4096
2025-03-06 23:14:47 print_info: n_layer = 32
2025-03-06 23:14:47 print_info: n_head = 32
2025-03-06 23:14:47 print_info: n_head_kv = 32
2025-03-06 23:14:47 print_info: n_rot = 128
2025-03-06 23:14:47 print_info: n_swa = 0
2025-03-06 23:14:47 print_info: n_embd_head_k = 128
2025-03-06 23:14:47 print_info: n_embd_head_v = 128
2025-03-06 23:14:47 print_info: n_gqa = 1
2025-03-06 23:14:47 print_info: n_embd_k_gqa = 4096
2025-03-06 23:14:47 print_info: n_embd_v_gqa = 4096
2025-03-06 23:14:47 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:14:47 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:14:47 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:14:47 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:14:47 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:14:47 print_info: n_ff = 11008
2025-03-06 23:14:47 print_info: n_expert = 0
2025-03-06 23:14:47 print_info: n_expert_used = 0
2025-03-06 23:14:47 print_info: causal attn = 1
2025-03-06 23:14:47 print_info: pooling type = 0
2025-03-06 23:14:47 print_info: rope type = 0
2025-03-06 23:14:47 print_info: rope scaling = linear
2025-03-06 23:14:47 print_info: freq_base_train = 10000.0
2025-03-06 23:14:47 print_info: freq_scale_train = 1
2025-03-06 23:14:47 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:14:47 print_info: rope_finetuned = unknown
2025-03-06 23:14:47 print_info: ssm_d_conv = 0
2025-03-06 23:14:47 print_info: ssm_d_inner = 0
2025-03-06 23:14:47 print_info: ssm_d_state = 0
2025-03-06 23:14:47 print_info: ssm_dt_rank = 0
2025-03-06 23:14:47 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:14:47 print_info: model type = 7B
2025-03-06 23:14:47 print_info: model params = 6.74 B
2025-03-06 23:14:47 print_info: general.name = LLaMA v2
2025-03-06 23:14:47 print_info: vocab type = SPM
2025-03-06 23:14:47 print_info: n_vocab = 32000
2025-03-06 23:14:47 print_info: n_merges = 0
2025-03-06 23:14:47 print_info: BOS token = 1 '<s>'
2025-03-06 23:14:47 print_info: EOS token = 2 '</s>'
2025-03-06 23:14:47 print_info: UNK token = 0 '<unk>'
2025-03-06 23:14:47 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:14:47 print_info: EOG token = 2 '</s>'
2025-03-06 23:14:47 print_info: max token length = 48
2025-03-06 23:14:47 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:14:47 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:14:47 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:14:47 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:14:47 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:14:47
2025-03-06 23:14:47 goroutine 66 [running]:
2025-03-06 23:14:47 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000502020, 0x0}, ...)
2025-03-06 23:14:47 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:14:47 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:14:47 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:14:47 time=2025-03-07T05:14:47.791Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:14:47 time=2025-03-07T05:14:47.817Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:14:48 time=2025-03-07T05:14:48.041Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 23:14:53 time=2025-03-07T05:14:53.216Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.174206187 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:14:53 time=2025-03-07T05:14:53.466Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.423891673 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:14:53 time=2025-03-07T05:14:53.716Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.674046193 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:17:24 time=2025-03-07T05:17:24.472Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.472Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.473Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.473Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.473Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.473Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.474Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.474Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.667Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="10.3 GiB" free_swap="3.0 GiB"
2025-03-06 23:17:24 time=2025-03-07T05:17:24.668Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.668Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:17:24 time=2025-03-07T05:17:24.668Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:17:24 time=2025-03-07T05:17:24.669Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 38339"
2025-03-06 23:17:24 time=2025-03-07T05:17:24.669Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:17:24 time=2025-03-07T05:17:24.669Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:17:24 time=2025-03-07T05:17:24.670Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:17:24 time=2025-03-07T05:17:24.689Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:17:24 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:17:24 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:17:24 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:17:24 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:17:24 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:17:24 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:17:24 time=2025-03-07T05:17:24.799Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:17:24 time=2025-03-07T05:17:24.815Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:38339"
2025-03-06 23:17:24 time=2025-03-07T05:17:24.921Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:17:25 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:17:25 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:17:25 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:17:25 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:17:25 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:17:25 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:17:25 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:17:25 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:17:25 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:17:25 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:17:25 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:17:25 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:17:25 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:17:25 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:17:25 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:17:25 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:17:25 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:17:25 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:17:25 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:17:25 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:17:25 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:17:25 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:17:25 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:17:25 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:17:25 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:17:25 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:17:25 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:17:25 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:17:25 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:17:25 print_info: file format = GGUF V3 (latest)
2025-03-06 23:17:25 print_info: file type = Q4_0
2025-03-06 23:17:25 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:17:25 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:17:25 load: special tokens cache size = 3
2025-03-06 23:17:25 load: token to piece cache size = 0.1684 MB
2025-03-06 23:17:25 print_info: arch = llama
2025-03-06 23:17:25 print_info: vocab_only = 0
2025-03-06 23:17:25 print_info: n_ctx_train = 4096
2025-03-06 23:17:25 print_info: n_embd = 4096
2025-03-06 23:17:25 print_info: n_layer = 32
2025-03-06 23:17:25 print_info: n_head = 32
2025-03-06 23:17:25 print_info: n_head_kv = 32
2025-03-06 23:17:25 print_info: n_rot = 128
2025-03-06 23:17:25 print_info: n_swa = 0
2025-03-06 23:17:25 print_info: n_embd_head_k = 128
2025-03-06 23:17:25 print_info: n_embd_head_v = 128
2025-03-06 23:17:25 print_info: n_gqa = 1
2025-03-06 23:17:25 print_info: n_embd_k_gqa = 4096
2025-03-06 23:17:25 print_info: n_embd_v_gqa = 4096
2025-03-06 23:17:25 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:17:25 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:17:25 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:17:25 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:17:25 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:17:25 print_info: n_ff = 11008
2025-03-06 23:17:25 print_info: n_expert = 0
2025-03-06 23:17:25 print_info: n_expert_used = 0
2025-03-06 23:17:25 print_info: causal attn = 1
2025-03-06 23:17:25 print_info: pooling type = 0
2025-03-06 23:17:25 print_info: rope type = 0
2025-03-06 23:17:25 print_info: rope scaling = linear
2025-03-06 23:17:25 print_info: freq_base_train = 10000.0
2025-03-06 23:17:25 print_info: freq_scale_train = 1
2025-03-06 23:17:25 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:17:25 print_info: rope_finetuned = unknown
2025-03-06 23:17:25 print_info: ssm_d_conv = 0
2025-03-06 23:17:25 print_info: ssm_d_inner = 0
2025-03-06 23:17:25 print_info: ssm_d_state = 0
2025-03-06 23:17:25 print_info: ssm_dt_rank = 0
2025-03-06 23:17:25 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:17:25 print_info: model type = 7B
2025-03-06 23:17:25 print_info: model params = 6.74 B
2025-03-06 23:17:25 print_info: general.name = LLaMA v2
2025-03-06 23:17:25 print_info: vocab type = SPM
2025-03-06 23:17:25 print_info: n_vocab = 32000
2025-03-06 23:17:25 print_info: n_merges = 0
2025-03-06 23:17:25 print_info: BOS token = 1 '<s>'
2025-03-06 23:17:25 print_info: EOS token = 2 '</s>'
2025-03-06 23:17:25 print_info: UNK token = 0 '<unk>'
2025-03-06 23:17:25 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:17:25 print_info: EOG token = 2 '</s>'
2025-03-06 23:17:25 print_info: max token length = 48
2025-03-06 23:17:25 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:17:25 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:17:25 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:17:25 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:17:25 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:17:25
2025-03-06 23:17:25 goroutine 25 [running]:
2025-03-06 23:17:25 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc00012dcb0, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000044030, 0x0}, ...)
2025-03-06 23:17:25 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:17:25 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:17:25 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:17:25 time=2025-03-07T05:17:25.674Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:17:25 time=2025-03-07T05:17:25.717Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:17:25 time=2025-03-07T05:17:25.925Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 23:17:31 time=2025-03-07T05:17:31.106Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.18037034 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:17:31 time=2025-03-07T05:17:31.357Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.431203368 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:17:31 time=2025-03-07T05:17:31.606Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.680775272 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:18:11 time=2025-03-07T05:18:11.961Z level=INFO source=images.go:432 msg="total blobs: 20"
2025-03-06 23:18:12 time=2025-03-07T05:18:12.547Z level=INFO source=images.go:439 msg="total unused blobs removed: 6"
2025-03-06 23:18:12 time=2025-03-07T05:18:12.547Z level=INFO source=server.go:154 msg=http status=200 method=DELETE path=/api/delete content-length=31 remote=127.0.0.1:38078 proto=HTTP/1.1 query=""
2025-03-06 23:19:33 2025/03/07 05:19:33 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-03-06 23:19:33 time=2025-03-07T05:19:33.972Z level=INFO source=images.go:432 msg="total blobs: 14"
2025-03-06 23:19:33 time=2025-03-07T05:19:33.973Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2025-03-06 23:19:33 time=2025-03-07T05:19:33.974Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
2025-03-06 23:19:33 time=2025-03-07T05:19:33.974Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-03-06 23:19:34 time=2025-03-07T05:19:34.620Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB"
2025-03-06 23:20:16 time=2025-03-07T05:20:16.945Z level=INFO source=download.go:176 msg="downloading 8934d96d3f08 in 16 239 MB part(s)"
2025-03-06 23:20:57 time=2025-03-07T05:20:57.328Z level=INFO source=download.go:294 msg="8934d96d3f08 part 5 attempt 0 failed: unexpected EOF, retrying in 1s"
2025-03-06 23:21:36 time=2025-03-07T05:21:36.844Z level=INFO source=download.go:294 msg="8934d96d3f08 part 6 attempt 0 failed: unexpected EOF, retrying in 1s"
2025-03-06 23:23:34 time=2025-03-07T05:23:34.793Z level=INFO source=download.go:294 msg="8934d96d3f08 part 2 attempt 0 failed: unexpected EOF, retrying in 1s"
2025-03-06 23:24:29 time=2025-03-07T05:24:29.706Z level=INFO source=download.go:294 msg="8934d96d3f08 part 12 attempt 0 failed: unexpected EOF, retrying in 1s"
2025-03-06 23:32:58 time=2025-03-07T05:32:58.561Z level=INFO source=download.go:176 msg="downloading 8c17c2ebb0ea in 1 7.0 KB part(s)"
2025-03-06 23:32:59 time=2025-03-07T05:32:59.841Z level=INFO source=download.go:176 msg="downloading 7c23fb36d801 in 1 4.8 KB part(s)"
2025-03-06 23:33:01 time=2025-03-07T05:33:01.165Z level=INFO source=download.go:176 msg="downloading 2e0493f67d0c in 1 59 B part(s)"
2025-03-06 23:33:02 time=2025-03-07T05:33:02.455Z level=INFO source=download.go:176 msg="downloading fa304d675061 in 1 91 B part(s)"
2025-03-06 23:33:03 time=2025-03-07T05:33:03.742Z level=INFO source=download.go:176 msg="downloading 42ba7f8a01dd in 1 557 B part(s)"
2025-03-06 23:33:18 time=2025-03-07T05:33:18.728Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.728Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.729Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.729Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.729Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.729Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.730Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.730Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.913Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB"
2025-03-06 23:33:18 time=2025-03-07T05:33:18.913Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.913Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:18 time=2025-03-07T05:33:18.913Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:33:18 time=2025-03-07T05:33:18.914Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 39089"
2025-03-06 23:33:18 time=2025-03-07T05:33:18.915Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:33:18 time=2025-03-07T05:33:18.915Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:33:18 time=2025-03-07T05:33:18.915Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:33:18 time=2025-03-07T05:33:18.941Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:33:19 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:33:19 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:33:19 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:33:19 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:33:19 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:33:19 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:33:19 time=2025-03-07T05:33:19.081Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:33:19 time=2025-03-07T05:33:19.107Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:39089"
2025-03-06 23:33:19 time=2025-03-07T05:33:19.166Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:33:19 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:33:19 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:33:19 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:33:19 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:33:19 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:33:19 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:33:19 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:33:19 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:33:19 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:33:19 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:33:19 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:33:19 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:33:19 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:33:19 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:33:19 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:33:19 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:33:19 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:33:19 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:33:19 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:33:19 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:33:19 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:33:19 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:33:19 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:33:19 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:33:19 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:33:19 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:33:19 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:33:19 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:33:19 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:33:19 print_info: file format = GGUF V3 (latest)
2025-03-06 23:33:19 print_info: file type = Q4_0
2025-03-06 23:33:19 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:33:19 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:33:19 load: special tokens cache size = 3
2025-03-06 23:33:19 load: token to piece cache size = 0.1684 MB
2025-03-06 23:33:19 print_info: arch = llama
2025-03-06 23:33:19 print_info: vocab_only = 0
2025-03-06 23:33:19 print_info: n_ctx_train = 4096
2025-03-06 23:33:19 print_info: n_embd = 4096
2025-03-06 23:33:19 print_info: n_layer = 32
2025-03-06 23:33:19 print_info: n_head = 32
2025-03-06 23:33:19 print_info: n_head_kv = 32
2025-03-06 23:33:19 print_info: n_rot = 128
2025-03-06 23:33:19 print_info: n_swa = 0
2025-03-06 23:33:19 print_info: n_embd_head_k = 128
2025-03-06 23:33:19 print_info: n_embd_head_v = 128
2025-03-06 23:33:19 print_info: n_gqa = 1
2025-03-06 23:33:19 print_info: n_embd_k_gqa = 4096
2025-03-06 23:33:19 print_info: n_embd_v_gqa = 4096
2025-03-06 23:33:19 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:33:19 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:33:19 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:33:19 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:33:19 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:33:19 print_info: n_ff = 11008
2025-03-06 23:33:19 print_info: n_expert = 0
2025-03-06 23:33:19 print_info: n_expert_used = 0
2025-03-06 23:33:19 print_info: causal attn = 1
2025-03-06 23:33:19 print_info: pooling type = 0
2025-03-06 23:33:19 print_info: rope type = 0
2025-03-06 23:33:19 print_info: rope scaling = linear
2025-03-06 23:33:19 print_info: freq_base_train = 10000.0
2025-03-06 23:33:19 print_info: freq_scale_train = 1
2025-03-06 23:33:19 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:33:19 print_info: rope_finetuned = unknown
2025-03-06 23:33:19 print_info: ssm_d_conv = 0
2025-03-06 23:33:19 print_info: ssm_d_inner = 0
2025-03-06 23:33:19 print_info: ssm_d_state = 0
2025-03-06 23:33:19 print_info: ssm_dt_rank = 0
2025-03-06 23:33:19 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:33:19 print_info: model type = 7B
2025-03-06 23:33:19 print_info: model params = 6.74 B
2025-03-06 23:33:19 print_info: general.name = LLaMA v2
2025-03-06 23:33:19 print_info: vocab type = SPM
2025-03-06 23:33:19 print_info: n_vocab = 32000
2025-03-06 23:33:19 print_info: n_merges = 0
2025-03-06 23:33:19 print_info: BOS token = 1 '<s>'
2025-03-06 23:33:19 print_info: EOS token = 2 '</s>'
2025-03-06 23:33:19 print_info: UNK token = 0 '<unk>'
2025-03-06 23:33:19 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:33:19 print_info: EOG token = 2 '</s>'
2025-03-06 23:33:19 print_info: max token length = 48
2025-03-06 23:33:19 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:33:19 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:33:20 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:33:20 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:33:20 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:20
2025-03-06 23:33:20 goroutine 50 [running]:
2025-03-06 23:33:20 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000502020, 0x0}, ...)
2025-03-06 23:33:20 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:33:20 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:33:20 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:33:20 time=2025-03-07T05:33:20.270Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:33:20 time=2025-03-07T05:33:20.421Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer"
2025-03-06 23:33:25 time=2025-03-07T05:33:25.597Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.175452535 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:25 time=2025-03-07T05:33:25.847Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.425169511 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:26 time=2025-03-07T05:33:26.096Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.674710506 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:28 time=2025-03-07T05:33:28.955Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:28 time=2025-03-07T05:33:28.955Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:28 time=2025-03-07T05:33:28.956Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:28 time=2025-03-07T05:33:28.956Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:28 time=2025-03-07T05:33:28.957Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:28 time=2025-03-07T05:33:28.957Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:28 time=2025-03-07T05:33:28.958Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:28 time=2025-03-07T05:33:28.958Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:29 time=2025-03-07T05:33:29.183Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB"
2025-03-06 23:33:29 time=2025-03-07T05:33:29.183Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:29 time=2025-03-07T05:33:29.183Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:29 time=2025-03-07T05:33:29.183Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:33:29 time=2025-03-07T05:33:29.184Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 43659"
2025-03-06 23:33:29 time=2025-03-07T05:33:29.184Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:33:29 time=2025-03-07T05:33:29.185Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:33:29 time=2025-03-07T05:33:29.185Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:33:29 time=2025-03-07T05:33:29.205Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:33:29 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:33:29 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:33:29 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:33:29 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:33:29 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:33:29 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:33:29 time=2025-03-07T05:33:29.319Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:33:30 [GIN] 2025/03/07 - 05:33:30 | 500 | 1.710346111s | 127.0.0.1 | POST "/api/generate"
2025-03-06 23:33:38 [GIN] 2025/03/07 - 05:33:38 | 200 | 28.254µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:33:38 [GIN] 2025/03/07 - 05:33:38 | 200 | 9.733913ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:33:40 [GIN] 2025/03/07 - 05:33:40 | 500 | 1.773050347s | 127.0.0.1 | POST "/api/generate"
2025-03-06 23:36:38 [GIN] 2025/03/07 - 05:36:38 | 200 | 30.909µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:36:38 [GIN] 2025/03/07 - 05:36:38 | 200 | 10.010402ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:36:40 [GIN] 2025/03/07 - 05:36:40 | 500 | 1.690181456s | 127.0.0.1 | POST "/api/generate"
2025-03-06 23:46:24 [GIN] 2025/03/07 - 05:46:24 | 200 | 7.118871ms | 172.18.0.1 | GET "/api/tags"
2025-03-06 23:46:24 [GIN] 2025/03/07 - 05:46:24 | 200 | 141.148µs | 172.18.0.1 | GET "/api/version"
2025-03-06 23:46:53 [GIN] 2025/03/07 - 05:46:53 | 200 | 31.6µs | 127.0.0.1 | HEAD "/"
2025-03-06 23:46:53 [GIN] 2025/03/07 - 05:46:53 | 200 | 15.795301ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:47:08 [GIN] 2025/03/07 - 05:47:08 | 500 | 15.024742148s | 127.0.0.1 | POST "/api/generate"
2025-03-07 00:24:26 [GIN] 2025/03/07 - 06:24:26 | 200 | 26.5µs | 127.0.0.1 | HEAD "/"
2025-03-07 00:24:26 [GIN] 2025/03/07 - 06:24:26 | 200 | 9.899899ms | 127.0.0.1 | POST "/api/show"
2025-03-06 23:33:29 time=2025-03-07T05:33:29.336Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:43659"
2025-03-06 23:33:29 time=2025-03-07T05:33:29.436Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:33:29 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:33:29 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:33:29 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:33:29 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:33:29 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:33:29 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:33:29 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:33:29 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:33:29 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:33:29 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:33:29 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:33:29 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:33:29 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:33:29 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:33:29 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:33:29 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:33:29 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:33:29 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:33:29 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:33:29 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:33:29 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:33:29 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:33:29 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:33:29 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:33:29 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:33:29 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:33:29 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:33:29 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:33:29 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:33:29 print_info: file format = GGUF V3 (latest)
2025-03-06 23:33:29 print_info: file type = Q4_0
2025-03-06 23:33:29 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:33:29 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:33:29 load: special tokens cache size = 3
2025-03-06 23:33:29 load: token to piece cache size = 0.1684 MB
2025-03-06 23:33:29 print_info: arch = llama
2025-03-06 23:33:29 print_info: vocab_only = 0
2025-03-06 23:33:29 print_info: n_ctx_train = 4096
2025-03-06 23:33:29 print_info: n_embd = 4096
2025-03-06 23:33:29 print_info: n_layer = 32
2025-03-06 23:33:29 print_info: n_head = 32
2025-03-06 23:33:29 print_info: n_head_kv = 32
2025-03-06 23:33:29 print_info: n_rot = 128
2025-03-06 23:33:29 print_info: n_swa = 0
2025-03-06 23:33:29 print_info: n_embd_head_k = 128
2025-03-06 23:33:29 print_info: n_embd_head_v = 128
2025-03-06 23:33:29 print_info: n_gqa = 1
2025-03-06 23:33:29 print_info: n_embd_k_gqa = 4096
2025-03-06 23:33:29 print_info: n_embd_v_gqa = 4096
2025-03-06 23:33:29 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:33:29 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:33:29 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:33:29 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:33:29 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:33:29 print_info: n_ff = 11008
2025-03-06 23:33:29 print_info: n_expert = 0
2025-03-06 23:33:29 print_info: n_expert_used = 0
2025-03-06 23:33:29 print_info: causal attn = 1
2025-03-06 23:33:29 print_info: pooling type = 0
2025-03-06 23:33:29 print_info: rope type = 0
2025-03-06 23:33:29 print_info: rope scaling = linear
2025-03-06 23:33:29 print_info: freq_base_train = 10000.0
2025-03-06 23:33:29 print_info: freq_scale_train = 1
2025-03-06 23:33:29 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:33:29 print_info: rope_finetuned = unknown
2025-03-06 23:33:29 print_info: ssm_d_conv = 0
2025-03-06 23:33:29 print_info: ssm_d_inner = 0
2025-03-06 23:33:29 print_info: ssm_d_state = 0
2025-03-06 23:33:29 print_info: ssm_dt_rank = 0
2025-03-06 23:33:29 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:33:29 print_info: model type = 7B
2025-03-06 23:33:29 print_info: model params = 6.74 B
2025-03-06 23:33:29 print_info: general.name = LLaMA v2
2025-03-06 23:33:29 print_info: vocab type = SPM
2025-03-06 23:33:29 print_info: n_vocab = 32000
2025-03-06 23:33:29 print_info: n_merges = 0
2025-03-06 23:33:29 print_info: BOS token = 1 '<s>'
2025-03-06 23:33:29 print_info: EOS token = 2 '</s>'
2025-03-06 23:33:29 print_info: UNK token = 0 '<unk>'
2025-03-06 23:33:29 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:33:29 print_info: EOG token = 2 '</s>'
2025-03-06 23:33:29 print_info: max token length = 48
2025-03-06 23:33:29 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:33:29 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:33:30 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:33:30 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:33:30 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:30
2025-03-06 23:33:30 goroutine 23 [running]:
2025-03-06 23:33:30 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000614030, 0x0}, ...)
2025-03-06 23:33:30 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:33:30 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:33:30 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:33:30 time=2025-03-07T05:33:30.189Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:33:30 time=2025-03-07T05:33:30.235Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:33:30 time=2025-03-07T05:33:30.440Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 23:33:35 time=2025-03-07T05:33:35.649Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.209258062 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:35 time=2025-03-07T05:33:35.899Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.459369719 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:36 time=2025-03-07T05:33:36.149Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.708838921 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:38 time=2025-03-07T05:33:38.832Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:38 time=2025-03-07T05:33:38.832Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:38 time=2025-03-07T05:33:38.833Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:38 time=2025-03-07T05:33:38.833Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:38 time=2025-03-07T05:33:38.833Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:38 time=2025-03-07T05:33:38.833Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:38 time=2025-03-07T05:33:38.834Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:38 time=2025-03-07T05:33:38.834Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:39 time=2025-03-07T05:33:39.068Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="8.9 GiB" free_swap="3.0 GiB"
2025-03-06 23:33:39 time=2025-03-07T05:33:39.068Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:33:39 time=2025-03-07T05:33:39.068Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:33:39 time=2025-03-07T05:33:39.068Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:33:39 time=2025-03-07T05:33:39.069Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 41229"
2025-03-06 23:33:39 time=2025-03-07T05:33:39.070Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:33:39 time=2025-03-07T05:33:39.070Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:33:39 time=2025-03-07T05:33:39.071Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:33:39 time=2025-03-07T05:33:39.096Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:33:39 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:33:39 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:33:39 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:33:39 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:33:39 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:33:39 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:33:39 time=2025-03-07T05:33:39.217Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:33:39 time=2025-03-07T05:33:39.234Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:41229"
2025-03-06 23:33:39 time=2025-03-07T05:33:39.323Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:33:39 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:33:39 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:33:39 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:33:39 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:33:39 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:33:39 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:33:39 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:33:39 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:33:39 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:33:39 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:33:39 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:33:39 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:33:39 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:33:39 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:33:39 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:33:39 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:33:39 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:33:39 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:33:39 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:33:39 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:33:39 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:33:39 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:33:39 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:33:39 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:33:39 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:33:39 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:33:39 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:33:39 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:33:39 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:33:39 print_info: file format = GGUF V3 (latest)
2025-03-06 23:33:39 print_info: file type = Q4_0
2025-03-06 23:33:39 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:33:39 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:33:39 load: special tokens cache size = 3
2025-03-06 23:33:39 load: token to piece cache size = 0.1684 MB
2025-03-06 23:33:39 print_info: arch = llama
2025-03-06 23:33:39 print_info: vocab_only = 0
2025-03-06 23:33:39 print_info: n_ctx_train = 4096
2025-03-06 23:33:39 print_info: n_embd = 4096
2025-03-06 23:33:39 print_info: n_layer = 32
2025-03-06 23:33:39 print_info: n_head = 32
2025-03-06 23:33:39 print_info: n_head_kv = 32
2025-03-06 23:33:39 print_info: n_rot = 128
2025-03-06 23:33:39 print_info: n_swa = 0
2025-03-06 23:33:39 print_info: n_embd_head_k = 128
2025-03-06 23:33:39 print_info: n_embd_head_v = 128
2025-03-06 23:33:39 print_info: n_gqa = 1
2025-03-06 23:33:39 print_info: n_embd_k_gqa = 4096
2025-03-06 23:33:39 print_info: n_embd_v_gqa = 4096
2025-03-06 23:33:39 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:33:39 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:33:39 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:33:39 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:33:39 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:33:39 print_info: n_ff = 11008
2025-03-06 23:33:39 print_info: n_expert = 0
2025-03-06 23:33:39 print_info: n_expert_used = 0
2025-03-06 23:33:39 print_info: causal attn = 1
2025-03-06 23:33:39 print_info: pooling type = 0
2025-03-06 23:33:39 print_info: rope type = 0
2025-03-06 23:33:39 print_info: rope scaling = linear
2025-03-06 23:33:39 print_info: freq_base_train = 10000.0
2025-03-06 23:33:39 print_info: freq_scale_train = 1
2025-03-06 23:33:39 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:33:39 print_info: rope_finetuned = unknown
2025-03-06 23:33:39 print_info: ssm_d_conv = 0
2025-03-06 23:33:39 print_info: ssm_d_inner = 0
2025-03-06 23:33:39 print_info: ssm_d_state = 0
2025-03-06 23:33:39 print_info: ssm_dt_rank = 0
2025-03-06 23:33:39 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:33:39 print_info: model type = 7B
2025-03-06 23:33:39 print_info: model params = 6.74 B
2025-03-06 23:33:39 print_info: general.name = LLaMA v2
2025-03-06 23:33:39 print_info: vocab type = SPM
2025-03-06 23:33:39 print_info: n_vocab = 32000
2025-03-06 23:33:39 print_info: n_merges = 0
2025-03-06 23:33:39 print_info: BOS token = 1 '<s>'
2025-03-06 23:33:39 print_info: EOS token = 2 '</s>'
2025-03-06 23:33:39 print_info: UNK token = 0 '<unk>'
2025-03-06 23:33:39 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:33:39 print_info: EOG token = 2 '</s>'
2025-03-06 23:33:39 print_info: max token length = 48
2025-03-06 23:33:39 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:33:39 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:33:40 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:33:40 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:33:40 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:40
2025-03-06 23:33:40 goroutine 39 [running]:
2025-03-06 23:33:40 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc00059c070, 0x0}, ...)
2025-03-06 23:33:40 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:33:40 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:33:40 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:33:40 time=2025-03-07T05:33:40.079Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:33:40 time=2025-03-07T05:33:40.127Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:33:40 time=2025-03-07T05:33:40.329Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 23:33:45 time=2025-03-07T05:33:45.507Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.177852813 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:45 time=2025-03-07T05:33:45.758Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.428446301 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:33:46 time=2025-03-07T05:33:46.007Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.677910875 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:36:38 time=2025-03-07T05:36:38.570Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.570Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.571Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.571Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.571Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.571Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.572Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.572Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.759Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="8.9 GiB" free_swap="3.0 GiB"
2025-03-06 23:36:38 time=2025-03-07T05:36:38.759Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.759Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:36:38 time=2025-03-07T05:36:38.759Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:36:38 time=2025-03-07T05:36:38.760Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 45219"
2025-03-06 23:36:38 time=2025-03-07T05:36:38.760Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:36:38 time=2025-03-07T05:36:38.760Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:36:38 time=2025-03-07T05:36:38.761Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:36:38 time=2025-03-07T05:36:38.780Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:36:38 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:36:38 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:36:38 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:36:38 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:36:38 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:36:38 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:36:38 time=2025-03-07T05:36:38.887Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:36:38 time=2025-03-07T05:36:38.903Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:45219"
2025-03-06 23:36:39 time=2025-03-07T05:36:39.013Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:36:39 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:36:39 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:36:39 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:36:39 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:36:39 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:36:39 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:36:39 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:36:39 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:36:39 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:36:39 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:36:39 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:36:39 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:36:39 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:36:39 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:36:39 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:36:39 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:36:39 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:36:39 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:36:39 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:36:39 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:36:39 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:36:39 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:36:39 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:36:39 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:36:39 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:36:39 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:36:39 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:36:39 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:36:39 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:36:39 print_info: file format = GGUF V3 (latest)
2025-03-06 23:36:39 print_info: file type = Q4_0
2025-03-06 23:36:39 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:36:39 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:36:39 load: special tokens cache size = 3
2025-03-06 23:36:39 load: token to piece cache size = 0.1684 MB
2025-03-06 23:36:39 print_info: arch = llama
2025-03-06 23:36:39 print_info: vocab_only = 0
2025-03-06 23:36:39 print_info: n_ctx_train = 4096
2025-03-06 23:36:39 print_info: n_embd = 4096
2025-03-06 23:36:39 print_info: n_layer = 32
2025-03-06 23:36:39 print_info: n_head = 32
2025-03-06 23:36:39 print_info: n_head_kv = 32
2025-03-06 23:36:39 print_info: n_rot = 128
2025-03-06 23:36:39 print_info: n_swa = 0
2025-03-06 23:36:39 print_info: n_embd_head_k = 128
2025-03-06 23:36:39 print_info: n_embd_head_v = 128
2025-03-06 23:36:39 print_info: n_gqa = 1
2025-03-06 23:36:39 print_info: n_embd_k_gqa = 4096
2025-03-06 23:36:39 print_info: n_embd_v_gqa = 4096
2025-03-06 23:36:39 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:36:39 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:36:39 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:36:39 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:36:39 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:36:39 print_info: n_ff = 11008
2025-03-06 23:36:39 print_info: n_expert = 0
2025-03-06 23:36:39 print_info: n_expert_used = 0
2025-03-06 23:36:39 print_info: causal attn = 1
2025-03-06 23:36:39 print_info: pooling type = 0
2025-03-06 23:36:39 print_info: rope type = 0
2025-03-06 23:36:39 print_info: rope scaling = linear
2025-03-06 23:36:39 print_info: freq_base_train = 10000.0
2025-03-06 23:36:39 print_info: freq_scale_train = 1
2025-03-06 23:36:39 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:36:39 print_info: rope_finetuned = unknown
2025-03-06 23:36:39 print_info: ssm_d_conv = 0
2025-03-06 23:36:39 print_info: ssm_d_inner = 0
2025-03-06 23:36:39 print_info: ssm_d_state = 0
2025-03-06 23:36:39 print_info: ssm_dt_rank = 0
2025-03-06 23:36:39 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:36:39 print_info: model type = 7B
2025-03-06 23:36:39 print_info: model params = 6.74 B
2025-03-06 23:36:39 print_info: general.name = LLaMA v2
2025-03-06 23:36:39 print_info: vocab type = SPM
2025-03-06 23:36:39 print_info: n_vocab = 32000
2025-03-06 23:36:39 print_info: n_merges = 0
2025-03-06 23:36:39 print_info: BOS token = 1 '<s>'
2025-03-06 23:36:39 print_info: EOS token = 2 '</s>'
2025-03-06 23:36:39 print_info: UNK token = 0 '<unk>'
2025-03-06 23:36:39 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:36:39 print_info: EOG token = 2 '</s>'
2025-03-06 23:36:39 print_info: max token length = 48
2025-03-06 23:36:39 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:36:39 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:36:39 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:36:39 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:36:39 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:36:39
2025-03-06 23:36:39 goroutine 24 [running]:
2025-03-06 23:36:39 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000464020, 0x0}, ...)
2025-03-06 23:36:39 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:36:39 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:36:39 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:36:39 time=2025-03-07T05:36:39.884Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:36:40 time=2025-03-07T05:36:40.016Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 23:36:45 time=2025-03-07T05:36:45.191Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.175978887 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:36:45 time=2025-03-07T05:36:45.442Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.426659985 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:36:45 time=2025-03-07T05:36:45.691Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.676152506 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="8.9 GiB" free_swap="3.0 GiB"
2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-06 23:46:54 time=2025-03-07T05:46:54.018Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 43487"
2025-03-06 23:46:54 time=2025-03-07T05:46:54.019Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-06 23:46:54 time=2025-03-07T05:46:54.019Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-06 23:46:54 time=2025-03-07T05:46:54.020Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:46:54 time=2025-03-07T05:46:54.042Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-06 23:46:54 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-06 23:46:54 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-06 23:46:54 ggml_cuda_init: found 1 CUDA devices:
2025-03-06 23:46:54 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-06 23:46:54 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-06 23:46:54 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-06 23:46:54 time=2025-03-07T05:46:54.727Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-06 23:46:54 time=2025-03-07T05:46:54.752Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:43487"
2025-03-06 23:46:54 time=2025-03-07T05:46:54.773Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-06 23:46:54 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-06 23:46:54 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-06 23:46:54 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-06 23:46:54 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-06 23:46:54 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-06 23:46:54 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-06 23:46:54 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-06 23:46:54 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-06 23:46:54 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-06 23:46:54 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-06 23:46:54 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-06 23:46:54 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-06 23:46:54 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-06 23:46:54 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-06 23:46:54 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-06 23:46:54 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-06 23:46:54 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-06 23:46:54 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-06 23:46:55 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-06 23:46:55 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-06 23:46:55 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-06 23:46:55 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-06 23:46:55 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-06 23:46:55 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-06 23:46:55 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-06 23:46:55 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-06 23:46:55 llama_model_loader: - type f32: 65 tensors
2025-03-06 23:46:55 llama_model_loader: - type q4_0: 225 tensors
2025-03-06 23:46:55 llama_model_loader: - type q6_K: 1 tensors
2025-03-06 23:46:55 print_info: file format = GGUF V3 (latest)
2025-03-06 23:46:55 print_info: file type = Q4_0
2025-03-06 23:46:55 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-06 23:46:55 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-06 23:46:55 load: special tokens cache size = 3
2025-03-06 23:46:55 load: token to piece cache size = 0.1684 MB
2025-03-06 23:46:55 print_info: arch = llama
2025-03-06 23:46:55 print_info: vocab_only = 0
2025-03-06 23:46:55 print_info: n_ctx_train = 4096
2025-03-06 23:46:55 print_info: n_embd = 4096
2025-03-06 23:46:55 print_info: n_layer = 32
2025-03-06 23:46:55 print_info: n_head = 32
2025-03-06 23:46:55 print_info: n_head_kv = 32
2025-03-06 23:46:55 print_info: n_rot = 128
2025-03-06 23:46:55 print_info: n_swa = 0
2025-03-06 23:46:55 print_info: n_embd_head_k = 128
2025-03-06 23:46:55 print_info: n_embd_head_v = 128
2025-03-06 23:46:55 print_info: n_gqa = 1
2025-03-06 23:46:55 print_info: n_embd_k_gqa = 4096
2025-03-06 23:46:55 print_info: n_embd_v_gqa = 4096
2025-03-06 23:46:55 print_info: f_norm_eps = 0.0e+00
2025-03-06 23:46:55 print_info: f_norm_rms_eps = 1.0e-05
2025-03-06 23:46:55 print_info: f_clamp_kqv = 0.0e+00
2025-03-06 23:46:55 print_info: f_max_alibi_bias = 0.0e+00
2025-03-06 23:46:55 print_info: f_logit_scale = 0.0e+00
2025-03-06 23:46:55 print_info: n_ff = 11008
2025-03-06 23:46:55 print_info: n_expert = 0
2025-03-06 23:46:55 print_info: n_expert_used = 0
2025-03-06 23:46:55 print_info: causal attn = 1
2025-03-06 23:46:55 print_info: pooling type = 0
2025-03-06 23:46:55 print_info: rope type = 0
2025-03-06 23:46:55 print_info: rope scaling = linear
2025-03-06 23:46:55 print_info: freq_base_train = 10000.0
2025-03-06 23:46:55 print_info: freq_scale_train = 1
2025-03-06 23:46:55 print_info: n_ctx_orig_yarn = 4096
2025-03-06 23:46:55 print_info: rope_finetuned = unknown
2025-03-06 23:46:55 print_info: ssm_d_conv = 0
2025-03-06 23:46:55 print_info: ssm_d_inner = 0
2025-03-06 23:46:55 print_info: ssm_d_state = 0
2025-03-06 23:46:55 print_info: ssm_dt_rank = 0
2025-03-06 23:46:55 print_info: ssm_dt_b_c_rms = 0
2025-03-06 23:46:55 print_info: model type = 7B
2025-03-06 23:46:55 print_info: model params = 6.74 B
2025-03-06 23:46:55 print_info: general.name = LLaMA v2
2025-03-06 23:46:55 print_info: vocab type = SPM
2025-03-06 23:46:55 print_info: n_vocab = 32000
2025-03-06 23:46:55 print_info: n_merges = 0
2025-03-06 23:46:55 print_info: BOS token = 1 '<s>'
2025-03-06 23:46:55 print_info: EOS token = 2 '</s>'
2025-03-06 23:46:55 print_info: UNK token = 0 '<unk>'
2025-03-06 23:46:55 print_info: LF token = 13 '<0x0A>'
2025-03-06 23:46:55 print_info: EOG token = 2 '</s>'
2025-03-06 23:46:55 print_info: max token length = 48
2025-03-06 23:46:55 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-06 23:47:07 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-06 23:47:08 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-06 23:47:08 llama_model_load_from_file_impl: failed to load model
2025-03-06 23:47:08 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:47:08
2025-03-06 23:47:08 goroutine 8 [running]:
2025-03-06 23:47:08 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0001a5cb0, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc0006121a0, 0x0}, ...)
2025-03-06 23:47:08 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-06 23:47:08 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-06 23:47:08 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-06 23:47:08 time=2025-03-07T05:47:08.336Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-06 23:47:08 time=2025-03-07T05:47:08.343Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-06 23:47:08 time=2025-03-07T05:47:08.587Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-06 23:47:13 time=2025-03-07T05:47:13.776Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.190728973 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:47:14 time=2025-03-07T05:47:14.027Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.441434208 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-06 23:47:14 time=2025-03-07T05:47:14.277Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.691328775 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-07 00:24:26 time=2025-03-07T06:24:26.337Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.337Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.339Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.339Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.527Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="8.9 GiB" free_swap="3.0 GiB"
2025-03-07 00:24:26 time=2025-03-07T06:24:26.528Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.528Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
2025-03-07 00:24:26 time=2025-03-07T06:24:26.528Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
2025-03-07 00:24:26 time=2025-03-07T06:24:26.529Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 38999"
2025-03-07 00:24:26 time=2025-03-07T06:24:26.530Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-07 00:24:26 time=2025-03-07T06:24:26.530Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-07 00:24:26 time=2025-03-07T06:24:26.530Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-07 00:24:26 time=2025-03-07T06:24:26.551Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-07 00:24:26 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-07 00:24:26 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-07 00:24:26 ggml_cuda_init: found 1 CUDA devices:
2025-03-07 00:24:26 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-07 00:24:26 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-07 00:24:26 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-07 00:24:26 time=2025-03-07T06:24:26.669Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-07 00:24:26 time=2025-03-07T06:24:26.685Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:38999"
2025-03-07 00:24:26 time=2025-03-07T06:24:26.782Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-07 00:24:26 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-07 00:24:26 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
2025-03-07 00:24:26 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-07 00:24:26 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-07 00:24:26 llama_model_loader: - kv 1: general.name str = LLaMA v2
2025-03-07 00:24:26 llama_model_loader: - kv 2: llama.context_length u32 = 4096
2025-03-07 00:24:26 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2025-03-07 00:24:26 llama_model_loader: - kv 4: llama.block_count u32 = 32
2025-03-07 00:24:26 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
2025-03-07 00:24:26 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2025-03-07 00:24:26 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2025-03-07 00:24:26 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
2025-03-07 00:24:26 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-07 00:24:26 llama_model_loader: - kv 10: general.file_type u32 = 2
2025-03-07 00:24:26 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
2025-03-07 00:24:26 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2025-03-07 00:24:26 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2025-03-07 00:24:26 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2025-03-07 00:24:26 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
2025-03-07 00:24:26 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
2025-03-07 00:24:26 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
2025-03-07 00:24:26 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
2025-03-07 00:24:26 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
2025-03-07 00:24:26 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
2025-03-07 00:24:26 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
2025-03-07 00:24:26 llama_model_loader: - kv 22: general.quantization_version u32 = 2
2025-03-07 00:24:26 llama_model_loader: - type f32: 65 tensors
2025-03-07 00:24:26 llama_model_loader: - type q4_0: 225 tensors
2025-03-07 00:24:26 llama_model_loader: - type q6_K: 1 tensors
2025-03-07 00:24:26 print_info: file format = GGUF V3 (latest)
2025-03-07 00:24:26 print_info: file type = Q4_0
2025-03-07 00:24:26 print_info: file size = 3.56 GiB (4.54 BPW)
2025-03-07 00:24:26 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-03-07 00:24:26 load: special tokens cache size = 3
2025-03-07 00:24:26 load: token to piece cache size = 0.1684 MB
2025-03-07 00:24:26 print_info: arch = llama
2025-03-07 00:24:26 print_info: vocab_only = 0
2025-03-07 00:24:26 print_info: n_ctx_train = 4096
2025-03-07 00:24:26 print_info: n_embd = 4096
2025-03-07 00:24:26 print_info: n_layer = 32
2025-03-07 00:24:26 print_info: n_head = 32
2025-03-07 00:24:26 print_info: n_head_kv = 32
2025-03-07 00:24:26 print_info: n_rot = 128
2025-03-07 00:24:26 print_info: n_swa = 0
2025-03-07 00:24:26 print_info: n_embd_head_k = 128
2025-03-07 00:24:26 print_info: n_embd_head_v = 128
2025-03-07 00:24:26 print_info: n_gqa = 1
2025-03-07 00:24:26 print_info: n_embd_k_gqa = 4096
2025-03-07 00:24:26 print_info: n_embd_v_gqa = 4096
2025-03-07 00:24:26 print_info: f_norm_eps = 0.0e+00
2025-03-07 00:24:26 print_info: f_norm_rms_eps = 1.0e-05
2025-03-07 00:24:26 print_info: f_clamp_kqv = 0.0e+00
2025-03-07 00:24:26 print_info: f_max_alibi_bias = 0.0e+00
2025-03-07 00:24:26 print_info: f_logit_scale = 0.0e+00
2025-03-07 00:24:26 print_info: n_ff = 11008
2025-03-07 00:24:26 print_info: n_expert = 0
2025-03-07 00:24:26 print_info: n_expert_used = 0
2025-03-07 00:24:26 print_info: causal attn = 1
2025-03-07 00:24:26 print_info: pooling type = 0
2025-03-07 00:24:26 print_info: rope type = 0
2025-03-07 00:24:26 print_info: rope scaling = linear
2025-03-07 00:24:26 print_info: freq_base_train = 10000.0
2025-03-07 00:24:26 print_info: freq_scale_train = 1
2025-03-07 00:24:26 print_info: n_ctx_orig_yarn = 4096
2025-03-07 00:24:26 print_info: rope_finetuned = unknown
2025-03-07 00:24:26 print_info: ssm_d_conv = 0
2025-03-07 00:24:26 print_info: ssm_d_inner = 0
2025-03-07 00:24:26 print_info: ssm_d_state = 0
2025-03-07 00:24:26 print_info: ssm_dt_rank = 0
2025-03-07 00:24:26 print_info: ssm_dt_b_c_rms = 0
2025-03-07 00:24:26 print_info: model type = 7B
2025-03-07 00:24:26 print_info: model params = 6.74 B
2025-03-07 00:24:26 print_info: general.name = LLaMA v2
2025-03-07 00:24:26 print_info: vocab type = SPM
2025-03-07 00:24:26 print_info: n_vocab = 32000
2025-03-07 00:24:26 print_info: n_merges = 0
2025-03-07 00:24:26 print_info: BOS token = 1 '<s>'
2025-03-07 00:24:26 print_info: EOS token = 2 '</s>'
2025-03-07 00:24:26 print_info: UNK token = 0 '<unk>'
2025-03-07 00:24:26 print_info: LF token = 13 '<0x0A>'
2025-03-07 00:24:26 print_info: EOG token = 2 '</s>'
2025-03-07 00:24:26 print_info: max token length = 48
2025-03-07 00:24:26 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-07 00:24:27 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
2025-03-07 00:24:27 llama_model_load: error loading model: unable to allocate CUDA0 buffer
2025-03-07 00:24:27 llama_model_load_from_file_impl: failed to load model
2025-03-07 00:24:27 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-07 00:24:27 [GIN] 2025/03/07 - 06:24:27 | 500 | 1.649155397s | 127.0.0.1 | POST "/api/generate"
2025-03-07 00:26:19 [GIN] 2025/03/07 - 06:26:19 | 200 | 25.789µs | 127.0.0.1 | HEAD "/"
2025-03-07 00:26:20 [GIN] 2025/03/07 - 06:26:20 | 404 | 6.938891ms | 127.0.0.1 | POST "/api/show"
2025-03-07 00:26:20 [GIN] 2025/03/07 - 06:26:20 | 200 | 800.989382ms | 127.0.0.1 | POST "/api/pull"
2025-03-07 00:26:35 [GIN] 2025/03/07 - 06:26:35 | 200 | 27.332µs | 127.0.0.1 | HEAD "/"
2025-03-07 00:26:35 [GIN] 2025/03/07 - 06:26:35 | 200 | 2.845839ms | 127.0.0.1 | GET "/api/tags"
2025-03-07 00:26:45 [GIN] 2025/03/07 - 06:26:45 | 200 | 27.121µs | 127.0.0.1 | HEAD "/"
2025-03-07 00:26:45 [GIN] 2025/03/07 - 06:26:45 | 200 | 705.719µs | 127.0.0.1 | GET "/api/tags"
2025-03-07 00:43:20 [GIN] 2025/03/07 - 06:43:20 | 200 | 89.941µs | 172.18.0.7 | HEAD "/"
2025-03-07 00:43:21 [GIN] 2025/03/07 - 06:43:21 | 200 | 655.722975ms | 172.18.0.7 | POST "/api/pull"
2025-03-07 00:43:21 [GIN] 2025/03/07 - 06:43:21 | 200 | 26.561µs | 172.18.0.7 | HEAD "/"
2025-03-07 00:43:21 [GIN] 2025/03/07 - 06:43:21 | 200 | 306.43805ms | 172.18.0.7 | POST "/api/pull"
2025-03-07 00:43:45 [GIN] 2025/03/07 - 06:43:45 | 200 | 2.626832ms | 172.18.0.1 | GET "/api/tags"
2025-03-07 00:43:45 [GIN] 2025/03/07 - 06:43:45 | 200 | 57.86µs | 172.18.0.1 | GET "/api/version"
2025-03-07 00:50:23 [GIN] 2025/03/07 - 06:50:23 | 200 | 6m15s | 172.18.0.1 | POST "/api/pull"
2025-03-07 00:50:24 [GIN] 2025/03/07 - 06:50:24 | 200 | 868.259µs | 172.18.0.1 | GET "/api/tags"
2025-03-07 00:51:57 [GIN] 2025/03/07 - 06:51:57 | 500 | 2.697339437s | 172.18.0.1 | POST "/api/chat"
2025-03-07 00:24:27
2025-03-07 00:24:27 goroutine 23 [running]:
2025-03-07 00:24:27 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000464020, 0x0}, ...)
2025-03-07 00:24:27 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
2025-03-07 00:24:27 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-07 00:24:27 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-07 00:24:27 time=2025-03-07T06:24:27.535Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-07 00:24:27 time=2025-03-07T06:24:27.568Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-07 00:24:27 time=2025-03-07T06:24:27.786Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
2025-03-07 00:24:32 time=2025-03-07T06:24:32.962Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.176344713 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-07 00:24:33 time=2025-03-07T06:24:33.212Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.425779616 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-07 00:24:33 time=2025-03-07T06:24:33.462Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.67636438 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
2025-03-07 00:43:17 2025/03/07 06:43:17 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-03-07 00:43:17 time=2025-03-07T06:43:17.466Z level=INFO source=images.go:432 msg="total blobs: 20"
2025-03-07 00:43:17 time=2025-03-07T06:43:17.466Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2025-03-07 00:43:17 time=2025-03-07T06:43:17.470Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
2025-03-07 00:43:17 time=2025-03-07T06:43:17.473Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-03-07 00:43:18 time=2025-03-07T06:43:18.020Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB"
2025-03-07 00:44:08 time=2025-03-07T06:44:08.459Z level=INFO source=download.go:176 msg="downloading dde5aa3fc5ff in 16 126 MB part(s)"
2025-03-07 00:50:16 time=2025-03-07T06:50:16.966Z level=INFO source=download.go:176 msg="downloading 966de95ca8a6 in 1 1.4 KB part(s)"
2025-03-07 00:50:18 time=2025-03-07T06:50:18.387Z level=INFO source=download.go:176 msg="downloading fcc5a6bec9da in 1 7.7 KB part(s)"
2025-03-07 00:50:19 time=2025-03-07T06:50:19.725Z level=INFO source=download.go:176 msg="downloading a70ff7e570d9 in 1 6.0 KB part(s)"
2025-03-07 00:50:21 time=2025-03-07T06:50:21.072Z level=INFO source=download.go:176 msg="downloading 34bb5ab01051 in 1 561 B part(s)"
2025-03-07 00:51:54 time=2025-03-07T06:51:54.677Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 parallel=4 available=5354029056 required="3.7 GiB"
2025-03-07 00:51:54 time=2025-03-07T06:51:54.855Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB"
2025-03-07 00:51:54 time=2025-03-07T06:51:54.855Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
2025-03-07 00:51:54 time=2025-03-07T06:51:54.856Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --parallel 4 --port 42573"
2025-03-07 00:51:54 time=2025-03-07T06:51:54.857Z level=INFO source=sched.go:450 msg="loaded runners" count=1
2025-03-07 00:51:54 time=2025-03-07T06:51:54.857Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
2025-03-07 00:51:54 time=2025-03-07T06:51:54.857Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
2025-03-07 00:51:54 time=2025-03-07T06:51:54.880Z level=INFO source=runner.go:931 msg="starting go runner"
2025-03-07 00:51:55 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2025-03-07 00:51:55 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-03-07 00:51:55 ggml_cuda_init: found 1 CUDA devices:
2025-03-07 00:51:55 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes
2025-03-07 00:51:55 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
2025-03-07 00:51:55 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
2025-03-07 00:51:55 time=2025-03-07T06:51:55.501Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8
2025-03-07 00:51:55 time=2025-03-07T06:51:55.518Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:42573"
2025-03-07 00:51:55 time=2025-03-07T06:51:55.611Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
2025-03-07 00:51:55 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
2025-03-07 00:51:55 llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
2025-03-07 00:51:55 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-03-07 00:51:55 llama_model_loader: - kv 0: general.architecture str = llama
2025-03-07 00:51:55 llama_model_loader: - kv 1: general.type str = model
2025-03-07 00:51:55 llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
2025-03-07 00:51:55 llama_model_loader: - kv 3: general.finetune str = Instruct
2025-03-07 00:51:55 llama_model_loader: - kv 4: general.basename str = Llama-3.2
2025-03-07 00:51:55 llama_model_loader: - kv 5: general.size_label str = 3B
2025-03-07 00:51:55 llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
2025-03-07 00:51:55 llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
2025-03-07 00:51:55 llama_model_loader: - kv 8: llama.block_count u32 = 28
2025-03-07 00:51:55 llama_model_loader: - kv 9: llama.context_length u32 = 131072
2025-03-07 00:51:55 llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
2025-03-07 00:51:55 llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
2025-03-07 00:51:55 llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
2025-03-07 00:51:55 llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
2025-03-07 00:51:55 llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
2025-03-07 00:51:55 llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2025-03-07 00:51:55 llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
2025-03-07 00:51:55 llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
2025-03-07 00:51:55 llama_model_loader: - kv 18: general.file_type u32 = 15
2025-03-07 00:51:55 llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
2025-03-07 00:51:55 llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
2025-03-07 00:51:55 llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
2025-03-07 00:51:55 llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
2025-03-07 00:51:55 llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
2025-03-07 00:51:55 llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2025-03-07 00:51:55 llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
2025-03-07 00:51:55 llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
2025-03-07 00:51:55 llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
2025-03-07 00:51:55 llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
2025-03-07 00:51:55 llama_model_loader: - kv 29: general.quantization_version u32 = 2
2025-03-07 00:51:55 llama_model_loader: - type f32: 58 tensors
2025-03-07 00:51:55 llama_model_loader: - type q4_K: 168 tensors
2025-03-07 00:51:55 llama_model_loader: - type q6_K: 29 tensors
2025-03-07 00:51:55 print_info: file format = GGUF V3 (latest)
2025-03-07 00:51:55 print_info: file type = Q4_K - Medium
2025-03-07 00:51:55 print_info: file size = 1.87 GiB (5.01 BPW)
2025-03-07 00:51:56 load: special tokens cache size = 256
2025-03-07 00:51:56 load: token to piece cache size = 0.7999 MB
2025-03-07 00:51:56 print_info: arch = llama
2025-03-07 00:51:56 print_info: vocab_only = 0
2025-03-07 00:51:56 print_info: n_ctx_train = 131072
2025-03-07 00:51:56 print_info: n_embd = 3072
2025-03-07 00:51:56 print_info: n_layer = 28
2025-03-07 00:51:56 print_info: n_head = 24
2025-03-07 00:51:56 print_info: n_head_kv = 8
2025-03-07 00:51:56 print_info: n_rot = 128
2025-03-07 00:51:56 print_info: n_swa = 0
2025-03-07 00:51:56 print_info: n_embd_head_k = 128
2025-03-07 00:51:56 print_info: n_embd_head_v = 128
2025-03-07 00:51:56 print_info: n_gqa = 3
2025-03-07 00:51:56 print_info: n_embd_k_gqa = 1024
2025-03-07 00:51:56 print_info: n_embd_v_gqa = 1024
2025-03-07 00:51:56 print_info: f_norm_eps = 0.0e+00
2025-03-07 00:51:56 print_info: f_norm_rms_eps = 1.0e-05
2025-03-07 00:51:56 print_info: f_clamp_kqv = 0.0e+00
2025-03-07 00:51:56 print_info: f_max_alibi_bias = 0.0e+00
2025-03-07 00:51:56 print_info: f_logit_scale = 0.0e+00
2025-03-07 00:51:56 print_info: n_ff = 8192
2025-03-07 00:51:56 print_info: n_expert = 0
2025-03-07 00:51:56 print_info: n_expert_used = 0
2025-03-07 00:51:56 print_info: causal attn = 1
2025-03-07 00:51:56 print_info: pooling type = 0
2025-03-07 00:51:56 print_info: rope type = 0
2025-03-07 00:51:56 print_info: rope scaling = linear
2025-03-07 00:51:56 print_info: freq_base_train = 500000.0
2025-03-07 00:51:56 print_info: freq_scale_train = 1
2025-03-07 00:51:56 print_info: n_ctx_orig_yarn = 131072
2025-03-07 00:51:56 print_info: rope_finetuned = unknown
2025-03-07 00:51:56 print_info: ssm_d_conv = 0
2025-03-07 00:51:56 print_info: ssm_d_inner = 0
2025-03-07 00:51:56 print_info: ssm_d_state = 0
2025-03-07 00:51:56 print_info: ssm_dt_rank = 0
2025-03-07 00:51:56 print_info: ssm_dt_b_c_rms = 0
2025-03-07 00:51:56 print_info: model type = 3B
2025-03-07 00:51:56 print_info: model params = 3.21 B
2025-03-07 00:51:56 print_info: general.name = Llama 3.2 3B Instruct
2025-03-07 00:51:56 print_info: vocab type = BPE
2025-03-07 00:51:56 print_info: n_vocab = 128256
2025-03-07 00:51:56 print_info: n_merges = 280147
2025-03-07 00:51:56 print_info: BOS token = 128000 '<|begin_of_text|>'
2025-03-07 00:51:56 print_info: EOS token = 128009 '<|eot_id|>'
2025-03-07 00:51:56 print_info: EOT token = 128009 '<|eot_id|>'
2025-03-07 00:51:56 print_info: EOM token = 128008 '<|eom_id|>'
2025-03-07 00:51:56 print_info: LF token = 198 'Ċ'
2025-03-07 00:51:56 print_info: EOG token = 128008 '<|eom_id|>'
2025-03-07 00:51:56 print_info: EOG token = 128009 '<|eot_id|>'
2025-03-07 00:51:56 print_info: max token length = 256
2025-03-07 00:51:56 load_tensors: loading model tensors, this can take a while... (mmap = true)
2025-03-07 00:51:56 load_tensors: offloading 28 repeating layers to GPU
2025-03-07 00:51:56 load_tensors: offloading output layer to GPU
2025-03-07 00:51:56 load_tensors: offloaded 29/29 layers to GPU
2025-03-07 00:51:56 load_tensors: CUDA0 model buffer size = 1918.35 MiB
2025-03-07 00:51:56 load_tensors: CPU_Mapped model buffer size = 308.23 MiB
2025-03-07 00:51:56 llama_init_from_model: n_seq_max = 4
2025-03-07 00:51:56 llama_init_from_model: n_ctx = 8192
2025-03-07 00:51:56 llama_init_from_model: n_ctx_per_seq = 2048
2025-03-07 00:51:56 llama_init_from_model: n_batch = 2048
2025-03-07 00:51:56 llama_init_from_model: n_ubatch = 512
2025-03-07 00:51:56 llama_init_from_model: flash_attn = 0
2025-03-07 00:51:56 llama_init_from_model: freq_base = 500000.0
2025-03-07 00:51:56 llama_init_from_model: freq_scale = 1
2025-03-07 00:51:56 llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
2025-03-07 00:51:56 llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
2025-03-07 00:51:56 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 896.00 MiB on device 0: cudaMalloc failed: out of memory
2025-03-07 00:51:56 llama_kv_cache_init: failed to allocate buffer for kv cache
2025-03-07 00:51:56 llama_init_from_model: llama_kv_cache_init() failed for self-attention cache
2025-03-07 00:51:56 panic: unable to create llama context
2025-03-07 00:51:56
2025-03-07 00:51:56 goroutine 25 [running]:
2025-03-07 00:51:56 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0001adcb0, {0x1d, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000696080, 0x0}, ...)
2025-03-07 00:51:56 github.com/ollama/ollama/runner/llamarunner/runner.go:857 +0x369
2025-03-07 00:51:56 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
2025-03-07 00:51:56 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
2025-03-07 00:51:57 time=2025-03-07T06:51:57.067Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
2025-03-07 00:51:57 time=2025-03-07T06:51:57.118Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory\nllama_kv_cache_init: failed to allocate buffer for kv cache"
2025-03-07 00:52:02 time=2025-03-07T06:52:02.287Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.169429429 model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
2025-03-07 00:52:02 time=2025-03-07T06:52:02.537Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.419052711 model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
2025-03-07 00:52:02 time=2025-03-07T06:52:02.787Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.668807124 model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
I have this problem too. My PC has an RTX 2060, a Ryzen 2700, and 32 GB of RAM.
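
For what it's worth, the log above shows how tight the margin is: at 00:51 the scheduler decided the model fit (required "3.7 GiB" against ~5.0 GiB reported free), yet the 896 MiB KV-cache allocation still failed, so something else is apparently grabbing VRAM between the estimate and the load. A minimal sketch of a possible workaround, assuming the "LLaMA v2" 7B Q4_0 blob above is pulled as the `llama2` tag, is to offload fewer layers via Ollama's standard `num_gpu` and `num_ctx` request options so the weights plus KV cache fit with headroom:

```python
# A sketch, not an official fix: ask Ollama to offload fewer layers so the
# weights plus KV cache fit in the RTX 2060's ~5 GiB of free VRAM.
# "llama2" is an assumed tag for the "LLaMA v2" 7B Q4_0 blob in the log above.
import json
import urllib.request

payload = {
    "model": "llama2",
    "prompt": "hello",
    "stream": False,
    "options": {
        "num_gpu": 24,    # offload 24 of the model's 33 layers instead of 30
        "num_ctx": 2048,  # keep the KV cache that failed to allocate small
    },
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

If that loads, raising `num_gpu` one step at a time finds the largest offload that still fits, and `nvidia-smi` shows what else is holding VRAM on the card.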

<!-- gh-comment-id:2707519849 --> @infinitymask8 commented on GitHub (Mar 7, 2025): ``2025-03-06 22:55:20 2025/03/07 04:55:20 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" 2025-03-06 22:55:20 time=2025-03-07T04:55:20.782Z level=INFO source=images.go:432 msg="total blobs: 20" 2025-03-06 22:55:20 time=2025-03-07T04:55:20.782Z level=INFO source=images.go:439 msg="total unused blobs removed: 0" 2025-03-06 22:55:20 time=2025-03-07T04:55:20.784Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)" 2025-03-06 22:55:20 time=2025-03-07T04:55:20.784Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" 2025-03-06 22:55:21 time=2025-03-07T04:55:21.301Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB" 2025-03-06 22:58:35 time=2025-03-07T04:58:35.077Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.078Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.079Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.079Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 22:55:24 [GIN] 2025/03/07 - 04:55:24 | 200 | 69.702µs | 172.18.0.7 | HEAD "/" 2025-03-06 22:55:25 [GIN] 2025/03/07 - 04:55:25 | 200 | 1.351738055s | 172.18.0.7 | POST "/api/pull" 2025-03-06 22:55:25 [GIN] 2025/03/07 - 04:55:25 | 200 | 24.446µs | 172.18.0.7 | HEAD "/" 2025-03-06 22:55:26 [GIN] 2025/03/07 - 04:55:26 | 200 | 642.743017ms | 172.18.0.7 | POST "/api/pull" 2025-03-06 22:58:24 [GIN] 2025/03/07 - 04:58:24 | 200 | 3.059855ms | 172.18.0.1 | GET "/api/tags" 2025-03-06 22:58:24 [GIN] 2025/03/07 - 04:58:24 | 200 | 126.731µs | 172.18.0.1 | GET "/api/version" 2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.081Z level=WARN source=ggml.go:136 msg="key not found" 
key=llama.attention.value_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.1 GiB" free_swap="3.0 GiB" 2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 22:58:35 time=2025-03-07T04:58:35.253Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB" 2025-03-06 22:58:35 time=2025-03-07T04:58:35.254Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 34845" 2025-03-06 22:58:35 time=2025-03-07T04:58:35.255Z level=INFO source=sched.go:450 msg="loaded runners" count=1 2025-03-06 22:58:35 time=2025-03-07T04:58:35.255Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding" 2025-03-06 22:58:35 time=2025-03-07T04:58:35.255Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-06 22:58:35 time=2025-03-07T04:58:35.276Z level=INFO source=runner.go:931 msg="starting go runner" 2025-03-06 22:58:35 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2025-03-06 22:58:35 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2025-03-06 22:58:35 ggml_cuda_init: found 1 CUDA devices: 2025-03-06 22:58:35 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes 2025-03-06 22:58:35 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so 2025-03-06 22:58:35 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so 2025-03-06 22:58:35 time=2025-03-07T04:58:35.889Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8 2025-03-06 22:58:35 time=2025-03-07T04:58:35.905Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:34845" 2025-03-06 22:58:36 time=2025-03-07T04:58:36.009Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" 2025-03-06 22:58:36 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free 2025-03-06 22:58:36 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest)) 2025-03-06 22:58:36 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
2025-03-06 22:58:36 llama_model_loader: - kv 0: general.architecture str = llama 2025-03-06 22:58:36 llama_model_loader: - kv 1: general.name str = LLaMA v2 2025-03-06 22:58:36 llama_model_loader: - kv 2: llama.context_length u32 = 4096 2025-03-06 22:58:36 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 2025-03-06 22:58:36 llama_model_loader: - kv 4: llama.block_count u32 = 32 2025-03-06 22:58:36 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 2025-03-06 22:58:36 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 2025-03-06 22:58:36 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 2025-03-06 22:58:36 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 2025-03-06 22:58:36 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2025-03-06 22:58:36 llama_model_loader: - kv 10: general.file_type u32 = 2 2025-03-06 22:58:36 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama 2025-03-06 22:58:36 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... 2025-03-06 22:58:36 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... 2025-03-06 22:58:36 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... 2025-03-06 22:58:36 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... 2025-03-06 22:58:36 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 2025-03-06 22:58:36 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 2025-03-06 22:58:36 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 2025-03-06 22:58:36 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true 2025-03-06 22:58:36 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false 2025-03-06 22:58:36 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'... 
2025-03-06 22:58:36 llama_model_loader: - kv 22: general.quantization_version u32 = 2 2025-03-06 22:58:36 llama_model_loader: - type f32: 65 tensors 2025-03-06 22:58:36 llama_model_loader: - type q4_0: 225 tensors 2025-03-06 22:58:36 llama_model_loader: - type q6_K: 1 tensors 2025-03-06 22:58:36 print_info: file format = GGUF V3 (latest) 2025-03-06 22:58:36 print_info: file type = Q4_0 2025-03-06 22:58:36 print_info: file size = 3.56 GiB (4.54 BPW) 2025-03-06 22:58:36 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2025-03-06 22:58:36 load: special tokens cache size = 3 2025-03-06 22:58:36 load: token to piece cache size = 0.1684 MB 2025-03-06 22:58:36 print_info: arch = llama 2025-03-06 22:58:36 print_info: vocab_only = 0 2025-03-06 22:58:36 print_info: n_ctx_train = 4096 2025-03-06 22:58:36 print_info: n_embd = 4096 2025-03-06 22:58:36 print_info: n_layer = 32 2025-03-06 22:58:36 print_info: n_head = 32 2025-03-06 22:58:36 print_info: n_head_kv = 32 2025-03-06 22:58:36 print_info: n_rot = 128 2025-03-06 22:58:36 print_info: n_swa = 0 2025-03-06 22:58:36 print_info: n_embd_head_k = 128 2025-03-06 22:58:36 print_info: n_embd_head_v = 128 2025-03-06 22:58:36 print_info: n_gqa = 1 2025-03-06 22:58:36 print_info: n_embd_k_gqa = 4096 2025-03-06 22:58:36 print_info: n_embd_v_gqa = 4096 2025-03-06 22:58:36 print_info: f_norm_eps = 0.0e+00 2025-03-06 22:58:36 print_info: f_norm_rms_eps = 1.0e-05 2025-03-06 22:58:36 print_info: f_clamp_kqv = 0.0e+00 2025-03-06 22:58:36 print_info: f_max_alibi_bias = 0.0e+00 2025-03-06 22:58:36 print_info: f_logit_scale = 0.0e+00 2025-03-06 22:58:36 print_info: n_ff = 11008 2025-03-06 22:58:36 print_info: n_expert = 0 2025-03-06 22:58:36 print_info: n_expert_used = 0 2025-03-06 22:58:36 print_info: causal attn = 1 2025-03-06 22:58:36 print_info: pooling type = 0 2025-03-06 22:58:36 print_info: rope type = 0 2025-03-06 22:58:36 print_info: rope scaling = linear 2025-03-06 22:58:36 print_info: freq_base_train = 10000.0 2025-03-06 22:58:36 print_info: freq_scale_train = 1 2025-03-06 22:58:36 print_info: n_ctx_orig_yarn = 4096 2025-03-06 22:58:36 print_info: rope_finetuned = unknown 2025-03-06 22:58:36 print_info: ssm_d_conv = 0 2025-03-06 22:58:36 print_info: ssm_d_inner = 0 2025-03-06 22:58:36 print_info: ssm_d_state = 0 2025-03-06 22:58:36 print_info: ssm_dt_rank = 0 2025-03-06 22:58:36 print_info: ssm_dt_b_c_rms = 0 2025-03-06 22:58:36 print_info: model type = 7B 2025-03-06 22:58:36 print_info: model params = 6.74 B 2025-03-06 22:58:36 print_info: general.name = LLaMA v2 2025-03-06 22:58:36 print_info: vocab type = SPM 2025-03-06 22:58:36 print_info: n_vocab = 32000 2025-03-06 22:58:36 print_info: n_merges = 0 2025-03-06 22:58:36 print_info: BOS token = 1 '<s>' 2025-03-06 22:58:36 print_info: EOS token = 2 '</s>' 2025-03-06 22:58:36 print_info: UNK token = 0 '<unk>' 2025-03-06 22:58:36 print_info: LF token = 13 '<0x0A>' 2025-03-06 22:58:36 print_info: EOG token = 2 '</s>' 2025-03-06 22:58:36 print_info: max token length = 48 2025-03-06 22:58:36 load_tensors: loading model tensors, this can take a while... 
(mmap = true) 2025-03-06 22:58:48 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory 2025-03-06 22:58:48 llama_model_load: error loading model: unable to allocate CUDA0 buffer 2025-03-06 22:58:48 llama_model_load_from_file_impl: failed to load model 2025-03-06 22:58:48 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 22:58:48 2025-03-06 22:58:48 goroutine 23 [running]: 2025-03-06 22:58:48 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc0005983a0, 0x0}, ...) 2025-03-06 22:58:48 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375 2025-03-06 22:58:48 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 2025-03-06 22:58:48 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7 2025-03-06 22:58:48 time=2025-03-07T04:58:48.719Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2" 2025-03-06 22:58:48 time=2025-03-07T04:58:48.816Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model" 2025-03-06 22:58:54 time=2025-03-07T04:58:54.014Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.19787795 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 22:58:54 time=2025-03-07T04:58:54.265Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.448804272 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 22:58:54 time=2025-03-07T04:58:54.515Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.698363967 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:08:55 2025/03/07 05:08:55 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" 2025-03-06 23:08:55 time=2025-03-07T05:08:55.042Z level=INFO source=images.go:432 msg="total blobs: 20" 2025-03-06 23:08:55 time=2025-03-07T05:08:55.043Z level=INFO source=images.go:439 msg="total unused blobs removed: 0" 2025-03-06 23:08:55 time=2025-03-07T05:08:55.045Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)" 2025-03-06 23:08:55 
time=2025-03-07T05:08:55.047Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" 2025-03-06 23:08:55 time=2025-03-07T05:08:55.615Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB" 2025-03-06 23:09:33 time=2025-03-07T05:09:33.611Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.613Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 22:58:48 [GIN] 2025/03/07 - 04:58:48 | 500 | 13.95591217s | 172.18.0.1 | POST "/api/chat" 2025-03-06 23:08:57 [GIN] 2025/03/07 - 05:08:57 | 200 | 4.297383ms | 172.18.0.1 | GET "/" 2025-03-06 23:08:58 [GIN] 2025/03/07 - 05:08:58 | 404 | 6.432µs | 172.18.0.1 | GET "/favicon.ico" 2025-03-06 23:08:58 [GIN] 2025/03/07 - 05:08:58 | 200 | 42.16µs | 172.18.0.7 | HEAD "/" 2025-03-06 23:09:00 [GIN] 2025/03/07 - 05:09:00 | 200 | 2.575771145s | 172.18.0.7 | POST "/api/pull" 2025-03-06 23:09:00 [GIN] 2025/03/07 - 05:09:00 | 200 | 25.017µs | 172.18.0.7 | HEAD "/" 2025-03-06 23:09:01 [GIN] 2025/03/07 - 05:09:01 | 200 | 434.123739ms | 172.18.0.7 | POST "/api/pull" 2025-03-06 23:09:33 [GIN] 2025/03/07 - 05:09:33 | 200 | 2.553231ms | 172.18.0.1 | GET "/api/tags" 2025-03-06 23:09:47 [GIN] 2025/03/07 - 05:09:47 | 500 | 14.257638542s | 172.18.0.1 | POST "/api/chat" 2025-03-06 23:12:35 [GIN] 2025/03/07 - 05:12:35 | 200 | 44.434µs | 127.0.0.1 | GET "/api/version" 2025-03-06 23:12:47 [GIN] 2025/03/07 - 05:12:47 | 200 | 22.713µs | 127.0.0.1 | HEAD "/" 2025-03-06 23:12:47 [GIN] 2025/03/07 - 05:12:47 | 200 | 820.322548ms | 127.0.0.1 | POST "/api/pull" 2025-03-06 23:14:46 [GIN] 2025/03/07 - 05:14:46 | 200 | 51.698µs | 127.0.0.1 | HEAD "/" 2025-03-06 23:14:46 [GIN] 2025/03/07 - 05:14:46 | 200 | 9.700237ms | 127.0.0.1 | POST "/api/show" 2025-03-06 23:14:48 [GIN] 2025/03/07 - 05:14:48 | 500 | 1.734769224s | 127.0.0.1 | POST "/api/generate" 2025-03-06 23:17:24 [GIN] 2025/03/07 - 05:17:24 | 200 | 33.013µs | 127.0.0.1 | HEAD "/" 2025-03-06 23:17:24 [GIN] 2025/03/07 - 05:17:24 | 200 | 9.672037ms | 127.0.0.1 | POST "/api/show" 2025-03-06 23:17:25 [GIN] 2025/03/07 - 05:17:25 | 500 | 1.68572623s | 127.0.0.1 | POST "/api/generate" 2025-03-06 23:18:11 [GIN] 2025/03/07 - 05:18:11 | 200 | 30.618µs | 127.0.0.1 | HEAD "/" 2025-03-06 23:18:11 [GIN] 2025/03/07 - 05:18:11 | 200 | 1.646124ms | 127.0.0.1 | POST "/api/generate" 2025-03-06 23:19:37 [GIN] 2025/03/07 - 05:19:37 | 200 | 79.491µs | 172.18.0.7 | HEAD "/" 2025-03-06 23:19:40 [GIN] 2025/03/07 - 05:19:40 | 200 | 3.448854831s | 172.18.0.7 | POST "/api/pull" 2025-03-06 23:19:40 [GIN] 2025/03/07 - 05:19:40 | 200 | 23.885µs | 172.18.0.7 | HEAD "/" 2025-03-06 23:19:41 [GIN] 2025/03/07 - 05:19:41 | 200 | 350.179263ms | 172.18.0.7 | POST "/api/pull" 2025-03-06 23:19:56 [GIN] 
2025/03/07 - 05:19:56 | 200 | 650.89µs | 172.18.0.1 | GET "/api/tags" 2025-03-06 23:19:56 [GIN] 2025/03/07 - 05:19:56 | 200 | 49.995µs | 172.18.0.1 | GET "/api/version" 2025-03-06 23:20:15 [GIN] 2025/03/07 - 05:20:15 | 200 | 29.506µs | 127.0.0.1 | HEAD "/" 2025-03-06 23:33:07 [GIN] 2025/03/07 - 05:33:07 | 200 | 12m51s | 127.0.0.1 | POST "/api/pull" 2025-03-06 23:33:18 [GIN] 2025/03/07 - 05:33:18 | 200 | 26.801µs | 127.0.0.1 | HEAD "/" 2025-03-06 23:33:18 [GIN] 2025/03/07 - 05:33:18 | 200 | 9.816206ms | 127.0.0.1 | POST "/api/show" 2025-03-06 23:33:20 [GIN] 2025/03/07 - 05:33:20 | 500 | 1.963073355s | 127.0.0.1 | POST "/api/generate" 2025-03-06 23:33:28 [GIN] 2025/03/07 - 05:33:28 | 200 | 24.096µs | 127.0.0.1 | HEAD "/" 2025-03-06 23:33:28 [GIN] 2025/03/07 - 05:33:28 | 200 | 11.885626ms | 127.0.0.1 | POST "/api/show" 2025-03-06 23:09:33 time=2025-03-07T05:09:33.614Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.614Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB" 2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:09:33 time=2025-03-07T05:09:33.802Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB" 2025-03-06 23:09:33 time=2025-03-07T05:09:33.803Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 34771" 2025-03-06 23:09:33 time=2025-03-07T05:09:33.804Z level=INFO source=sched.go:450 msg="loaded runners" count=1 2025-03-06 23:09:33 time=2025-03-07T05:09:33.804Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding" 2025-03-06 23:09:33 time=2025-03-07T05:09:33.805Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-06 23:09:33 time=2025-03-07T05:09:33.826Z level=INFO source=runner.go:931 msg="starting go runner" 2025-03-06 23:09:34 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2025-03-06 23:09:34 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2025-03-06 23:09:34 ggml_cuda_init: found 1 CUDA devices: 2025-03-06 23:09:34 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes 2025-03-06 23:09:34 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so 2025-03-06 23:09:34 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so 2025-03-06 23:09:34 time=2025-03-07T05:09:34.441Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 
500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8 2025-03-06 23:09:34 time=2025-03-07T05:09:34.459Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:34771" 2025-03-06 23:09:34 time=2025-03-07T05:09:34.559Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" 2025-03-06 23:09:34 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free 2025-03-06 23:09:34 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest)) 2025-03-06 23:09:34 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 2025-03-06 23:09:34 llama_model_loader: - kv 0: general.architecture str = llama 2025-03-06 23:09:34 llama_model_loader: - kv 1: general.name str = LLaMA v2 2025-03-06 23:09:34 llama_model_loader: - kv 2: llama.context_length u32 = 4096 2025-03-06 23:09:34 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 2025-03-06 23:09:34 llama_model_loader: - kv 4: llama.block_count u32 = 32 2025-03-06 23:09:34 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 2025-03-06 23:09:34 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 2025-03-06 23:09:34 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 2025-03-06 23:09:34 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 2025-03-06 23:09:34 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2025-03-06 23:09:34 llama_model_loader: - kv 10: general.file_type u32 = 2 2025-03-06 23:09:34 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama 2025-03-06 23:09:34 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... 2025-03-06 23:09:34 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... 2025-03-06 23:09:34 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... 2025-03-06 23:09:34 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... 2025-03-06 23:09:34 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 2025-03-06 23:09:34 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 2025-03-06 23:09:34 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 2025-03-06 23:09:34 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true 2025-03-06 23:09:34 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false 2025-03-06 23:09:34 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'... 
2025-03-06 23:09:34 llama_model_loader: - kv 22: general.quantization_version u32 = 2 2025-03-06 23:09:34 llama_model_loader: - type f32: 65 tensors 2025-03-06 23:09:34 llama_model_loader: - type q4_0: 225 tensors 2025-03-06 23:09:34 llama_model_loader: - type q6_K: 1 tensors 2025-03-06 23:09:34 print_info: file format = GGUF V3 (latest) 2025-03-06 23:09:34 print_info: file type = Q4_0 2025-03-06 23:09:34 print_info: file size = 3.56 GiB (4.54 BPW) 2025-03-06 23:09:34 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2025-03-06 23:09:34 load: special tokens cache size = 3 2025-03-06 23:09:34 load: token to piece cache size = 0.1684 MB 2025-03-06 23:09:34 print_info: arch = llama 2025-03-06 23:09:34 print_info: vocab_only = 0 2025-03-06 23:09:34 print_info: n_ctx_train = 4096 2025-03-06 23:09:34 print_info: n_embd = 4096 2025-03-06 23:09:34 print_info: n_layer = 32 2025-03-06 23:09:34 print_info: n_head = 32 2025-03-06 23:09:34 print_info: n_head_kv = 32 2025-03-06 23:09:34 print_info: n_rot = 128 2025-03-06 23:09:34 print_info: n_swa = 0 2025-03-06 23:09:34 print_info: n_embd_head_k = 128 2025-03-06 23:09:34 print_info: n_embd_head_v = 128 2025-03-06 23:09:34 print_info: n_gqa = 1 2025-03-06 23:09:34 print_info: n_embd_k_gqa = 4096 2025-03-06 23:09:34 print_info: n_embd_v_gqa = 4096 2025-03-06 23:09:34 print_info: f_norm_eps = 0.0e+00 2025-03-06 23:09:34 print_info: f_norm_rms_eps = 1.0e-05 2025-03-06 23:09:34 print_info: f_clamp_kqv = 0.0e+00 2025-03-06 23:09:34 print_info: f_max_alibi_bias = 0.0e+00 2025-03-06 23:09:34 print_info: f_logit_scale = 0.0e+00 2025-03-06 23:09:34 print_info: n_ff = 11008 2025-03-06 23:09:34 print_info: n_expert = 0 2025-03-06 23:09:34 print_info: n_expert_used = 0 2025-03-06 23:09:34 print_info: causal attn = 1 2025-03-06 23:09:34 print_info: pooling type = 0 2025-03-06 23:09:34 print_info: rope type = 0 2025-03-06 23:09:34 print_info: rope scaling = linear 2025-03-06 23:09:34 print_info: freq_base_train = 10000.0 2025-03-06 23:09:34 print_info: freq_scale_train = 1 2025-03-06 23:09:34 print_info: n_ctx_orig_yarn = 4096 2025-03-06 23:09:34 print_info: rope_finetuned = unknown 2025-03-06 23:09:34 print_info: ssm_d_conv = 0 2025-03-06 23:09:34 print_info: ssm_d_inner = 0 2025-03-06 23:09:34 print_info: ssm_d_state = 0 2025-03-06 23:09:34 print_info: ssm_dt_rank = 0 2025-03-06 23:09:34 print_info: ssm_dt_b_c_rms = 0 2025-03-06 23:09:34 print_info: model type = 7B 2025-03-06 23:09:34 print_info: model params = 6.74 B 2025-03-06 23:09:34 print_info: general.name = LLaMA v2 2025-03-06 23:09:34 print_info: vocab type = SPM 2025-03-06 23:09:34 print_info: n_vocab = 32000 2025-03-06 23:09:34 print_info: n_merges = 0 2025-03-06 23:09:34 print_info: BOS token = 1 '<s>' 2025-03-06 23:09:34 print_info: EOS token = 2 '</s>' 2025-03-06 23:09:34 print_info: UNK token = 0 '<unk>' 2025-03-06 23:09:34 print_info: LF token = 13 '<0x0A>' 2025-03-06 23:09:34 print_info: EOG token = 2 '</s>' 2025-03-06 23:09:34 print_info: max token length = 48 2025-03-06 23:09:34 load_tensors: loading model tensors, this can take a while... 
(mmap = true) 2025-03-06 23:09:46 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory 2025-03-06 23:09:47 llama_model_load: error loading model: unable to allocate CUDA0 buffer 2025-03-06 23:09:47 llama_model_load_from_file_impl: failed to load model 2025-03-06 23:09:47 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:09:47 2025-03-06 23:09:47 goroutine 50 [running]: 2025-03-06 23:09:47 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc00062e1f0, 0x0}, ...) 2025-03-06 23:09:47 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375 2025-03-06 23:09:47 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 2025-03-06 23:09:47 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7 2025-03-06 23:09:47 time=2025-03-07T05:09:47.467Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2" 2025-03-06 23:09:47 time=2025-03-07T05:09:47.620Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model" 2025-03-06 23:09:52 time=2025-03-07T05:09:52.826Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.206297755 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:09:53 time=2025-03-07T05:09:53.076Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.45587692 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:09:53 time=2025-03-07T05:09:53.326Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.706447279 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:14:26 2025/03/07 05:14:26 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" 2025-03-06 23:14:26 time=2025-03-07T05:14:26.659Z level=INFO source=images.go:432 msg="total blobs: 20" 2025-03-06 23:14:26 time=2025-03-07T05:14:26.659Z level=INFO source=images.go:439 msg="total unused blobs removed: 0" 2025-03-06 23:14:26 time=2025-03-07T05:14:26.659Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)" 2025-03-06 23:14:26 
time=2025-03-07T05:14:26.660Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" 2025-03-06 23:14:27 time=2025-03-07T05:14:27.077Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB" 2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.585Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.586Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.586Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.587Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.587Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.780Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="10.4 GiB" free_swap="3.0 GiB" 2025-03-06 23:14:46 time=2025-03-07T05:14:46.780Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.780Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:14:46 time=2025-03-07T05:14:46.781Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB" 2025-03-06 23:14:46 time=2025-03-07T05:14:46.781Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 44319" 2025-03-06 23:14:46 time=2025-03-07T05:14:46.782Z level=INFO source=sched.go:450 msg="loaded runners" count=1 2025-03-06 23:14:46 time=2025-03-07T05:14:46.783Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding" 2025-03-06 23:14:46 time=2025-03-07T05:14:46.785Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-06 23:14:46 time=2025-03-07T05:14:46.804Z level=INFO source=runner.go:931 msg="starting go runner" 2025-03-06 23:14:46 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2025-03-06 23:14:46 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2025-03-06 23:14:46 ggml_cuda_init: found 1 CUDA devices: 2025-03-06 23:14:46 Device 0: NVIDIA GeForce RTX 
2060, compute capability 7.5, VMM: yes 2025-03-06 23:14:46 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so 2025-03-06 23:14:46 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so 2025-03-06 23:14:46 time=2025-03-07T05:14:46.924Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8 2025-03-06 23:14:46 time=2025-03-07T05:14:46.942Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:44319" 2025-03-06 23:14:47 time=2025-03-07T05:14:47.036Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" 2025-03-06 23:14:47 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free 2025-03-06 23:14:47 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest)) 2025-03-06 23:14:47 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 2025-03-06 23:14:47 llama_model_loader: - kv 0: general.architecture str = llama 2025-03-06 23:14:47 llama_model_loader: - kv 1: general.name str = LLaMA v2 2025-03-06 23:14:47 llama_model_loader: - kv 2: llama.context_length u32 = 4096 2025-03-06 23:14:47 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 2025-03-06 23:14:47 llama_model_loader: - kv 4: llama.block_count u32 = 32 2025-03-06 23:14:47 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 2025-03-06 23:14:47 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 2025-03-06 23:14:47 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 2025-03-06 23:14:47 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 2025-03-06 23:14:47 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2025-03-06 23:14:47 llama_model_loader: - kv 10: general.file_type u32 = 2 2025-03-06 23:14:47 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama 2025-03-06 23:14:47 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... 2025-03-06 23:14:47 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... 2025-03-06 23:14:47 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... 2025-03-06 23:14:47 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... 2025-03-06 23:14:47 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 2025-03-06 23:14:47 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 2025-03-06 23:14:47 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 2025-03-06 23:14:47 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true 2025-03-06 23:14:47 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false 2025-03-06 23:14:47 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'... 
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 3.56 GiB (4.54 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 4096
print_info: n_embd = 4096
print_info: n_layer = 32
print_info: n_head = 32
print_info: n_head_kv = 32
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 1
print_info: n_embd_k_gqa = 4096
print_info: n_embd_v_gqa = 4096
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 11008
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 4096
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 7B
print_info: model params = 6.74 B
print_info: general.name = LLaMA v2
print_info: vocab type = SPM
print_info: n_vocab = 32000
print_info: n_merges = 0
print_info: BOS token = 1 '<s>'
print_info: EOS token = 2 '</s>'
print_info: UNK token = 0 '<unk>'
print_info: LF token = 13 '<0x0A>'
print_info: EOG token = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate CUDA0 buffer
llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246

goroutine 66 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000502020, 0x0}, ...)
	github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
	github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7
time=2025-03-07T05:14:47.791Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
time=2025-03-07T05:14:47.817Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-03-07T05:14:48.041Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
time=2025-03-07T05:14:53.216Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.174206187 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
time=2025-03-07T05:14:53.466Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.423891673 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
time=2025-03-07T05:14:53.716Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.674046193 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
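The failure pattern above is the telling part: the scheduler budgets 4.9 GiB of the 5.0 GiB it sees as available, the loader reports 5106 MiB free, and yet the very first weight buffer of 3257.82 MiB fails to allocate. That points at VRAM being claimed between the estimate and the cudaMalloc (another process, the desktop, or allocator fragmentation) rather than at the model being too big outright. One quick experiment is to force fewer offloaded layers than the scheduler's 30 via the standard `num_gpu` option; a minimal sketch against the local REST API, assuming the blob above is pulled under the tag `llama2` (the actual tag isn't shown in the log):

```python
import json
import urllib.request

# Ask Ollama to offload fewer layers than the scheduler chose (the log
# shows --n-gpu-layers 30); num_gpu is the option behind that flag.
payload = {
    "model": "llama2",           # assumption: the tag behind this 7B Q4_0 blob
    "prompt": "hello",
    "stream": False,
    "options": {"num_gpu": 24},  # try 24 of the 33 layers instead of 30
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```

If the load succeeds with fewer layers, the gap between the scheduler's estimate and what cudaMalloc can actually obtain is the culprit. The retries below repeat the same pattern almost verbatim.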
[the eight llama.attention.key_length/value_length warnings repeat before every load attempt and are elided from here on]
time=2025-03-07T05:17:24.667Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="10.3 GiB" free_swap="3.0 GiB"
[offload estimate identical to the first attempt: 30 of 33 layers, memory.required.allocations="[4.9 GiB]" against memory.available="[5.0 GiB]"]
time=2025-03-07T05:17:24.669Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 38339"
[runner startup, CUDA device detection, and backend loading as in the first attempt]
time=2025-03-07T05:17:24.815Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:38339"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
[llama_model_loader metadata dump, print_info output, and load_tensors line identical to the first attempt]
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate CUDA0 buffer
llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
[goroutine 25 stack trace and server-status messages as in the first failure]
time=2025-03-07T05:17:25.717Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-03-07T05:17:25.925Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
[three more "gpu VRAM usage didn't recover within timeout" warnings, ~5.2 to 5.7 s]
time=2025-03-07T05:18:11.961Z level=INFO source=images.go:432 msg="total blobs: 20"
time=2025-03-07T05:18:12.547Z level=INFO source=images.go:439 msg="total unused blobs removed: 6"
time=2025-03-07T05:18:12.547Z level=INFO source=server.go:154 msg=http status=200 method=DELETE path=/api/delete content-length=31 remote=127.0.0.1:38078 proto=HTTP/1.1 query=""
2025/03/07 05:19:33 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-07T05:19:33.972Z level=INFO source=images.go:432 msg="total blobs: 14"
time=2025-03-07T05:19:33.973Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-07T05:19:33.974Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
time=2025-03-07T05:19:33.974Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-07T05:19:34.620Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB"
time=2025-03-07T05:20:16.945Z level=INFO source=download.go:176 msg="downloading 8934d96d3f08 in 16 239 MB part(s)"
time=2025-03-07T05:20:57.328Z level=INFO source=download.go:294 msg="8934d96d3f08 part 5 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2025-03-07T05:21:36.844Z level=INFO source=download.go:294 msg="8934d96d3f08 part 6 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2025-03-07T05:23:34.793Z level=INFO source=download.go:294 msg="8934d96d3f08 part 2 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2025-03-07T05:24:29.706Z level=INFO source=download.go:294 msg="8934d96d3f08 part 12 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2025-03-07T05:32:58.561Z level=INFO source=download.go:176 msg="downloading 8c17c2ebb0ea in 1 7.0 KB part(s)"
time=2025-03-07T05:32:59.841Z level=INFO source=download.go:176 msg="downloading 7c23fb36d801 in 1 4.8 KB part(s)"
time=2025-03-07T05:33:01.165Z level=INFO source=download.go:176 msg="downloading 2e0493f67d0c in 1 59 B part(s)"
time=2025-03-07T05:33:02.455Z level=INFO source=download.go:176 msg="downloading fa304d675061 in 1 91 B part(s)"
time=2025-03-07T05:33:03.742Z level=INFO source=download.go:176 msg="downloading 42ba7f8a01dd in 1 557 B part(s)"
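Two details in this restart are worth pulling out: the config dump shows OLLAMA_GPU_OVERHEAD:0, so nothing is reserved on the card as a safety margin, and the offload line's own arithmetic leaves almost no slack. A rough check with the figures from the log (hedged, since the scheduler's exact accounting isn't shown here):

```python
# VRAM budget using the numbers from the offload and loader lines above;
# the scheduler's real formula may differ, this is just the raw arithmetic.
MIB_PER_GIB = 1024

available_mib = 5.0 * MIB_PER_GIB   # memory.available="[5.0 GiB]"
planned_mib = 4.9 * MIB_PER_GIB     # memory.required.allocations="[4.9 GiB]"
weights_buf_mib = 3257.82           # the cudaMalloc that keeps failing
loader_free_mib = 5106              # "5106 MiB free" reported by the loader

# Planned slack is only ~100 MiB, so any concurrent VRAM use breaks the plan.
print(f"planned headroom: {available_mib - planned_mib:.0f} MiB")

# The loader saw ample free VRAM for this single buffer, so the failure
# implies the free figure was stale or ignores fragmentation/driver reserves.
print(f"free minus weight buffer: {loader_free_mib - weights_buf_mib:.0f} MiB")
```

Setting OLLAMA_GPU_OVERHEAD to a few hundred MiB worth of bytes (e.g. 536870912 for 512 MiB) before starting the server would make the scheduler plan around that reserve, and is probably the quickest knob to try here.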
time=2025-03-07T05:33:18.913Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB"
[offload estimate unchanged: 30 of 33 layers, 4.9 GiB of 5.0 GiB]
time=2025-03-07T05:33:18.914Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 39089"
[runner startup, CUDA device detection, and backend loading as before]
time=2025-03-07T05:33:19.107Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:39089"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
[metadata dump, print_info output, and load_tensors line identical to the first attempt]
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate CUDA0 buffer
llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
[goroutine 50 stack trace as in the first failure]
time=2025-03-07T05:33:20.270Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-03-07T05:33:20.421Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer"
[three "gpu VRAM usage didn't recover within timeout" warnings]
time=2025-03-07T05:33:29.183Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB"
[offload estimate unchanged]
time=2025-03-07T05:33:29.184Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 43659"
[runner startup, CUDA device detection, and backend loading as before]
[the GIN request entries below arrived interleaved with the runner output in the original capture; their own timestamps are authoritative]
[GIN] 2025/03/07 - 05:33:30 | 500 | 1.710346111s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/03/07 - 05:33:38 | 200 | 28.254µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/07 - 05:33:38 | 200 | 9.733913ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/03/07 - 05:33:40 | 500 | 1.773050347s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/03/07 - 05:36:38 | 200 | 30.909µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/07 - 05:36:38 | 200 | 10.010402ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/03/07 - 05:36:40 | 500 | 1.690181456s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/03/07 - 05:46:24 | 200 | 7.118871ms | 172.18.0.1 | GET "/api/tags"
[GIN] 2025/03/07 - 05:46:24 | 200 | 141.148µs | 172.18.0.1 | GET "/api/version"
[GIN] 2025/03/07 - 05:46:53 | 200 | 31.6µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/07 - 05:46:53 | 200 | 15.795301ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/03/07 - 05:47:08 | 500 | 15.024742148s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/03/07 - 06:24:26 | 200 | 26.5µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/07 - 06:24:26 | 200 | 9.899899ms | 127.0.0.1 | POST "/api/show"
time=2025-03-07T05:33:29.336Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:43659"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
[metadata dump, print_info output, and load_tensors line identical to the first attempt]
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate CUDA0 buffer
llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
[goroutine 23 stack trace as in the first failure]
time=2025-03-07T05:33:30.235Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-03-07T05:33:30.440Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
[three "gpu VRAM usage didn't recover within timeout" warnings]
time=2025-03-07T05:33:39.068Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="8.9 GiB" free_swap="3.0 GiB"
[offload estimate unchanged]
time=2025-03-07T05:33:39.069Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 41229"
[runner startup, CUDA device detection, and backend loading as before]
time=2025-03-07T05:33:39.234Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:41229"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free
[metadata dump, print_info output, and load_tensors line identical to the first attempt]
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory
llama_model_load: error loading model: unable to allocate CUDA0 buffer
llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
[goroutine 39 stack trace as in the first failure]
time=2025-03-07T05:33:40.127Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-03-07T05:33:40.329Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model"
[three "gpu VRAM usage didn't recover within timeout" warnings]
time=2025-03-07T05:36:38.759Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="8.9 GiB" free_swap="3.0 GiB"
free="8.9 GiB" free_swap="3.0 GiB" 2025-03-06 23:36:38 time=2025-03-07T05:36:38.759Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:36:38 time=2025-03-07T05:36:38.759Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:36:38 time=2025-03-07T05:36:38.759Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB" 2025-03-06 23:36:38 time=2025-03-07T05:36:38.760Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 45219" 2025-03-06 23:36:38 time=2025-03-07T05:36:38.760Z level=INFO source=sched.go:450 msg="loaded runners" count=1 2025-03-06 23:36:38 time=2025-03-07T05:36:38.760Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding" 2025-03-06 23:36:38 time=2025-03-07T05:36:38.761Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-06 23:36:38 time=2025-03-07T05:36:38.780Z level=INFO source=runner.go:931 msg="starting go runner" 2025-03-06 23:36:38 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2025-03-06 23:36:38 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2025-03-06 23:36:38 ggml_cuda_init: found 1 CUDA devices: 2025-03-06 23:36:38 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes 2025-03-06 23:36:38 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so 2025-03-06 23:36:38 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so 2025-03-06 23:36:38 time=2025-03-07T05:36:38.887Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8 2025-03-06 23:36:38 time=2025-03-07T05:36:38.903Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:45219" 2025-03-06 23:36:39 time=2025-03-07T05:36:39.013Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" 2025-03-06 23:36:39 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free 2025-03-06 23:36:39 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest)) 2025-03-06 23:36:39 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
2025-03-06 23:36:39 llama_model_loader: - kv 0: general.architecture str = llama 2025-03-06 23:36:39 llama_model_loader: - kv 1: general.name str = LLaMA v2 2025-03-06 23:36:39 llama_model_loader: - kv 2: llama.context_length u32 = 4096 2025-03-06 23:36:39 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 2025-03-06 23:36:39 llama_model_loader: - kv 4: llama.block_count u32 = 32 2025-03-06 23:36:39 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 2025-03-06 23:36:39 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 2025-03-06 23:36:39 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 2025-03-06 23:36:39 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 2025-03-06 23:36:39 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2025-03-06 23:36:39 llama_model_loader: - kv 10: general.file_type u32 = 2 2025-03-06 23:36:39 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama 2025-03-06 23:36:39 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... 2025-03-06 23:36:39 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... 2025-03-06 23:36:39 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... 2025-03-06 23:36:39 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... 2025-03-06 23:36:39 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 2025-03-06 23:36:39 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 2025-03-06 23:36:39 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 2025-03-06 23:36:39 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true 2025-03-06 23:36:39 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false 2025-03-06 23:36:39 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'... 
2025-03-06 23:36:39 llama_model_loader: - kv 22: general.quantization_version u32 = 2 2025-03-06 23:36:39 llama_model_loader: - type f32: 65 tensors 2025-03-06 23:36:39 llama_model_loader: - type q4_0: 225 tensors 2025-03-06 23:36:39 llama_model_loader: - type q6_K: 1 tensors 2025-03-06 23:36:39 print_info: file format = GGUF V3 (latest) 2025-03-06 23:36:39 print_info: file type = Q4_0 2025-03-06 23:36:39 print_info: file size = 3.56 GiB (4.54 BPW) 2025-03-06 23:36:39 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2025-03-06 23:36:39 load: special tokens cache size = 3 2025-03-06 23:36:39 load: token to piece cache size = 0.1684 MB 2025-03-06 23:36:39 print_info: arch = llama 2025-03-06 23:36:39 print_info: vocab_only = 0 2025-03-06 23:36:39 print_info: n_ctx_train = 4096 2025-03-06 23:36:39 print_info: n_embd = 4096 2025-03-06 23:36:39 print_info: n_layer = 32 2025-03-06 23:36:39 print_info: n_head = 32 2025-03-06 23:36:39 print_info: n_head_kv = 32 2025-03-06 23:36:39 print_info: n_rot = 128 2025-03-06 23:36:39 print_info: n_swa = 0 2025-03-06 23:36:39 print_info: n_embd_head_k = 128 2025-03-06 23:36:39 print_info: n_embd_head_v = 128 2025-03-06 23:36:39 print_info: n_gqa = 1 2025-03-06 23:36:39 print_info: n_embd_k_gqa = 4096 2025-03-06 23:36:39 print_info: n_embd_v_gqa = 4096 2025-03-06 23:36:39 print_info: f_norm_eps = 0.0e+00 2025-03-06 23:36:39 print_info: f_norm_rms_eps = 1.0e-05 2025-03-06 23:36:39 print_info: f_clamp_kqv = 0.0e+00 2025-03-06 23:36:39 print_info: f_max_alibi_bias = 0.0e+00 2025-03-06 23:36:39 print_info: f_logit_scale = 0.0e+00 2025-03-06 23:36:39 print_info: n_ff = 11008 2025-03-06 23:36:39 print_info: n_expert = 0 2025-03-06 23:36:39 print_info: n_expert_used = 0 2025-03-06 23:36:39 print_info: causal attn = 1 2025-03-06 23:36:39 print_info: pooling type = 0 2025-03-06 23:36:39 print_info: rope type = 0 2025-03-06 23:36:39 print_info: rope scaling = linear 2025-03-06 23:36:39 print_info: freq_base_train = 10000.0 2025-03-06 23:36:39 print_info: freq_scale_train = 1 2025-03-06 23:36:39 print_info: n_ctx_orig_yarn = 4096 2025-03-06 23:36:39 print_info: rope_finetuned = unknown 2025-03-06 23:36:39 print_info: ssm_d_conv = 0 2025-03-06 23:36:39 print_info: ssm_d_inner = 0 2025-03-06 23:36:39 print_info: ssm_d_state = 0 2025-03-06 23:36:39 print_info: ssm_dt_rank = 0 2025-03-06 23:36:39 print_info: ssm_dt_b_c_rms = 0 2025-03-06 23:36:39 print_info: model type = 7B 2025-03-06 23:36:39 print_info: model params = 6.74 B 2025-03-06 23:36:39 print_info: general.name = LLaMA v2 2025-03-06 23:36:39 print_info: vocab type = SPM 2025-03-06 23:36:39 print_info: n_vocab = 32000 2025-03-06 23:36:39 print_info: n_merges = 0 2025-03-06 23:36:39 print_info: BOS token = 1 '<s>' 2025-03-06 23:36:39 print_info: EOS token = 2 '</s>' 2025-03-06 23:36:39 print_info: UNK token = 0 '<unk>' 2025-03-06 23:36:39 print_info: LF token = 13 '<0x0A>' 2025-03-06 23:36:39 print_info: EOG token = 2 '</s>' 2025-03-06 23:36:39 print_info: max token length = 48 2025-03-06 23:36:39 load_tensors: loading model tensors, this can take a while... 
(mmap = true) 2025-03-06 23:36:39 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory 2025-03-06 23:36:39 llama_model_load: error loading model: unable to allocate CUDA0 buffer 2025-03-06 23:36:39 llama_model_load_from_file_impl: failed to load model 2025-03-06 23:36:39 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:36:39 2025-03-06 23:36:39 goroutine 24 [running]: 2025-03-06 23:36:39 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000464020, 0x0}, ...) 2025-03-06 23:36:39 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375 2025-03-06 23:36:39 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 2025-03-06 23:36:39 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7 2025-03-06 23:36:39 time=2025-03-07T05:36:39.884Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2" 2025-03-06 23:36:40 time=2025-03-07T05:36:40.016Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model" 2025-03-06 23:36:45 time=2025-03-07T05:36:45.191Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.175978887 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:36:45 time=2025-03-07T05:36:45.442Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.426659985 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:36:45 time=2025-03-07T05:36:45.691Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.676152506 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:46:53 time=2025-03-07T05:46:53.806Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-06 23:46:53 time=2025-03-07T05:46:53.807Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="8.9 GiB" free_swap="3.0 GiB" 2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=WARN source=ggml.go:136 msg="key not found" 
key=llama.attention.key_length default=128 2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-06 23:46:54 time=2025-03-07T05:46:54.017Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB" 2025-03-06 23:46:54 time=2025-03-07T05:46:54.018Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 43487" 2025-03-06 23:46:54 time=2025-03-07T05:46:54.019Z level=INFO source=sched.go:450 msg="loaded runners" count=1 2025-03-06 23:46:54 time=2025-03-07T05:46:54.019Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding" 2025-03-06 23:46:54 time=2025-03-07T05:46:54.020Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-06 23:46:54 time=2025-03-07T05:46:54.042Z level=INFO source=runner.go:931 msg="starting go runner" 2025-03-06 23:46:54 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2025-03-06 23:46:54 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2025-03-06 23:46:54 ggml_cuda_init: found 1 CUDA devices: 2025-03-06 23:46:54 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes 2025-03-06 23:46:54 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so 2025-03-06 23:46:54 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so 2025-03-06 23:46:54 time=2025-03-07T05:46:54.727Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8 2025-03-06 23:46:54 time=2025-03-07T05:46:54.752Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:43487" 2025-03-06 23:46:54 time=2025-03-07T05:46:54.773Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" 2025-03-06 23:46:54 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free 2025-03-06 23:46:54 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest)) 2025-03-06 23:46:54 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
2025-03-06 23:46:54 llama_model_loader: - kv 0: general.architecture str = llama 2025-03-06 23:46:54 llama_model_loader: - kv 1: general.name str = LLaMA v2 2025-03-06 23:46:54 llama_model_loader: - kv 2: llama.context_length u32 = 4096 2025-03-06 23:46:54 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 2025-03-06 23:46:54 llama_model_loader: - kv 4: llama.block_count u32 = 32 2025-03-06 23:46:54 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 2025-03-06 23:46:54 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 2025-03-06 23:46:54 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 2025-03-06 23:46:54 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 2025-03-06 23:46:54 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2025-03-06 23:46:54 llama_model_loader: - kv 10: general.file_type u32 = 2 2025-03-06 23:46:54 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama 2025-03-06 23:46:54 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... 2025-03-06 23:46:54 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... 2025-03-06 23:46:54 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... 2025-03-06 23:46:55 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... 2025-03-06 23:46:55 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 2025-03-06 23:46:55 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 2025-03-06 23:46:55 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 2025-03-06 23:46:55 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true 2025-03-06 23:46:55 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false 2025-03-06 23:46:55 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'... 
2025-03-06 23:46:55 llama_model_loader: - kv 22: general.quantization_version u32 = 2 2025-03-06 23:46:55 llama_model_loader: - type f32: 65 tensors 2025-03-06 23:46:55 llama_model_loader: - type q4_0: 225 tensors 2025-03-06 23:46:55 llama_model_loader: - type q6_K: 1 tensors 2025-03-06 23:46:55 print_info: file format = GGUF V3 (latest) 2025-03-06 23:46:55 print_info: file type = Q4_0 2025-03-06 23:46:55 print_info: file size = 3.56 GiB (4.54 BPW) 2025-03-06 23:46:55 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2025-03-06 23:46:55 load: special tokens cache size = 3 2025-03-06 23:46:55 load: token to piece cache size = 0.1684 MB 2025-03-06 23:46:55 print_info: arch = llama 2025-03-06 23:46:55 print_info: vocab_only = 0 2025-03-06 23:46:55 print_info: n_ctx_train = 4096 2025-03-06 23:46:55 print_info: n_embd = 4096 2025-03-06 23:46:55 print_info: n_layer = 32 2025-03-06 23:46:55 print_info: n_head = 32 2025-03-06 23:46:55 print_info: n_head_kv = 32 2025-03-06 23:46:55 print_info: n_rot = 128 2025-03-06 23:46:55 print_info: n_swa = 0 2025-03-06 23:46:55 print_info: n_embd_head_k = 128 2025-03-06 23:46:55 print_info: n_embd_head_v = 128 2025-03-06 23:46:55 print_info: n_gqa = 1 2025-03-06 23:46:55 print_info: n_embd_k_gqa = 4096 2025-03-06 23:46:55 print_info: n_embd_v_gqa = 4096 2025-03-06 23:46:55 print_info: f_norm_eps = 0.0e+00 2025-03-06 23:46:55 print_info: f_norm_rms_eps = 1.0e-05 2025-03-06 23:46:55 print_info: f_clamp_kqv = 0.0e+00 2025-03-06 23:46:55 print_info: f_max_alibi_bias = 0.0e+00 2025-03-06 23:46:55 print_info: f_logit_scale = 0.0e+00 2025-03-06 23:46:55 print_info: n_ff = 11008 2025-03-06 23:46:55 print_info: n_expert = 0 2025-03-06 23:46:55 print_info: n_expert_used = 0 2025-03-06 23:46:55 print_info: causal attn = 1 2025-03-06 23:46:55 print_info: pooling type = 0 2025-03-06 23:46:55 print_info: rope type = 0 2025-03-06 23:46:55 print_info: rope scaling = linear 2025-03-06 23:46:55 print_info: freq_base_train = 10000.0 2025-03-06 23:46:55 print_info: freq_scale_train = 1 2025-03-06 23:46:55 print_info: n_ctx_orig_yarn = 4096 2025-03-06 23:46:55 print_info: rope_finetuned = unknown 2025-03-06 23:46:55 print_info: ssm_d_conv = 0 2025-03-06 23:46:55 print_info: ssm_d_inner = 0 2025-03-06 23:46:55 print_info: ssm_d_state = 0 2025-03-06 23:46:55 print_info: ssm_dt_rank = 0 2025-03-06 23:46:55 print_info: ssm_dt_b_c_rms = 0 2025-03-06 23:46:55 print_info: model type = 7B 2025-03-06 23:46:55 print_info: model params = 6.74 B 2025-03-06 23:46:55 print_info: general.name = LLaMA v2 2025-03-06 23:46:55 print_info: vocab type = SPM 2025-03-06 23:46:55 print_info: n_vocab = 32000 2025-03-06 23:46:55 print_info: n_merges = 0 2025-03-06 23:46:55 print_info: BOS token = 1 '<s>' 2025-03-06 23:46:55 print_info: EOS token = 2 '</s>' 2025-03-06 23:46:55 print_info: UNK token = 0 '<unk>' 2025-03-06 23:46:55 print_info: LF token = 13 '<0x0A>' 2025-03-06 23:46:55 print_info: EOG token = 2 '</s>' 2025-03-06 23:46:55 print_info: max token length = 48 2025-03-06 23:46:55 load_tensors: loading model tensors, this can take a while... 
(mmap = true) 2025-03-06 23:47:07 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory 2025-03-06 23:47:08 llama_model_load: error loading model: unable to allocate CUDA0 buffer 2025-03-06 23:47:08 llama_model_load_from_file_impl: failed to load model 2025-03-06 23:47:08 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:47:08 2025-03-06 23:47:08 goroutine 8 [running]: 2025-03-06 23:47:08 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0001a5cb0, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc0006121a0, 0x0}, ...) 2025-03-06 23:47:08 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375 2025-03-06 23:47:08 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 2025-03-06 23:47:08 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7 2025-03-06 23:47:08 time=2025-03-07T05:47:08.336Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-06 23:47:08 time=2025-03-07T05:47:08.343Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2" 2025-03-06 23:47:08 time=2025-03-07T05:47:08.587Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model" 2025-03-06 23:47:13 time=2025-03-07T05:47:13.776Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.190728973 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:47:14 time=2025-03-07T05:47:14.027Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.441434208 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-06 23:47:14 time=2025-03-07T05:47:14.277Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.691328775 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-07 00:24:26 time=2025-03-07T06:24:26.337Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.337Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.338Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.339Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.339Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.527Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" 
free="8.9 GiB" free_swap="3.0 GiB" 2025-03-07 00:24:26 time=2025-03-07T06:24:26.528Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.528Z level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128 2025-03-07 00:24:26 time=2025-03-07T06:24:26.528Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=30 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB" 2025-03-07 00:24:26 time=2025-03-07T06:24:26.529Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --n-gpu-layers 30 --threads 8 --parallel 1 --port 38999" 2025-03-07 00:24:26 time=2025-03-07T06:24:26.530Z level=INFO source=sched.go:450 msg="loaded runners" count=1 2025-03-07 00:24:26 time=2025-03-07T06:24:26.530Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding" 2025-03-07 00:24:26 time=2025-03-07T06:24:26.530Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-07 00:24:26 time=2025-03-07T06:24:26.551Z level=INFO source=runner.go:931 msg="starting go runner" 2025-03-07 00:24:26 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2025-03-07 00:24:26 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2025-03-07 00:24:26 ggml_cuda_init: found 1 CUDA devices: 2025-03-07 00:24:26 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes 2025-03-07 00:24:26 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so 2025-03-07 00:24:26 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so 2025-03-07 00:24:26 time=2025-03-07T06:24:26.669Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8 2025-03-07 00:24:26 time=2025-03-07T06:24:26.685Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:38999" 2025-03-07 00:24:26 time=2025-03-07T06:24:26.782Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" 2025-03-07 00:24:26 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free 2025-03-07 00:24:26 llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest)) 2025-03-07 00:24:26 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
2025-03-07 00:24:26 llama_model_loader: - kv 0: general.architecture str = llama 2025-03-07 00:24:26 llama_model_loader: - kv 1: general.name str = LLaMA v2 2025-03-07 00:24:26 llama_model_loader: - kv 2: llama.context_length u32 = 4096 2025-03-07 00:24:26 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 2025-03-07 00:24:26 llama_model_loader: - kv 4: llama.block_count u32 = 32 2025-03-07 00:24:26 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 2025-03-07 00:24:26 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 2025-03-07 00:24:26 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 2025-03-07 00:24:26 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 2025-03-07 00:24:26 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2025-03-07 00:24:26 llama_model_loader: - kv 10: general.file_type u32 = 2 2025-03-07 00:24:26 llama_model_loader: - kv 11: tokenizer.ggml.model str = llama 2025-03-07 00:24:26 llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... 2025-03-07 00:24:26 llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... 2025-03-07 00:24:26 llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... 2025-03-07 00:24:26 llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... 2025-03-07 00:24:26 llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 2025-03-07 00:24:26 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 2025-03-07 00:24:26 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 2025-03-07 00:24:26 llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true 2025-03-07 00:24:26 llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false 2025-03-07 00:24:26 llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'... 
2025-03-07 00:24:26 llama_model_loader: - kv 22: general.quantization_version u32 = 2 2025-03-07 00:24:26 llama_model_loader: - type f32: 65 tensors 2025-03-07 00:24:26 llama_model_loader: - type q4_0: 225 tensors 2025-03-07 00:24:26 llama_model_loader: - type q6_K: 1 tensors 2025-03-07 00:24:26 print_info: file format = GGUF V3 (latest) 2025-03-07 00:24:26 print_info: file type = Q4_0 2025-03-07 00:24:26 print_info: file size = 3.56 GiB (4.54 BPW) 2025-03-07 00:24:26 load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2025-03-07 00:24:26 load: special tokens cache size = 3 2025-03-07 00:24:26 load: token to piece cache size = 0.1684 MB 2025-03-07 00:24:26 print_info: arch = llama 2025-03-07 00:24:26 print_info: vocab_only = 0 2025-03-07 00:24:26 print_info: n_ctx_train = 4096 2025-03-07 00:24:26 print_info: n_embd = 4096 2025-03-07 00:24:26 print_info: n_layer = 32 2025-03-07 00:24:26 print_info: n_head = 32 2025-03-07 00:24:26 print_info: n_head_kv = 32 2025-03-07 00:24:26 print_info: n_rot = 128 2025-03-07 00:24:26 print_info: n_swa = 0 2025-03-07 00:24:26 print_info: n_embd_head_k = 128 2025-03-07 00:24:26 print_info: n_embd_head_v = 128 2025-03-07 00:24:26 print_info: n_gqa = 1 2025-03-07 00:24:26 print_info: n_embd_k_gqa = 4096 2025-03-07 00:24:26 print_info: n_embd_v_gqa = 4096 2025-03-07 00:24:26 print_info: f_norm_eps = 0.0e+00 2025-03-07 00:24:26 print_info: f_norm_rms_eps = 1.0e-05 2025-03-07 00:24:26 print_info: f_clamp_kqv = 0.0e+00 2025-03-07 00:24:26 print_info: f_max_alibi_bias = 0.0e+00 2025-03-07 00:24:26 print_info: f_logit_scale = 0.0e+00 2025-03-07 00:24:26 print_info: n_ff = 11008 2025-03-07 00:24:26 print_info: n_expert = 0 2025-03-07 00:24:26 print_info: n_expert_used = 0 2025-03-07 00:24:26 print_info: causal attn = 1 2025-03-07 00:24:26 print_info: pooling type = 0 2025-03-07 00:24:26 print_info: rope type = 0 2025-03-07 00:24:26 print_info: rope scaling = linear 2025-03-07 00:24:26 print_info: freq_base_train = 10000.0 2025-03-07 00:24:26 print_info: freq_scale_train = 1 2025-03-07 00:24:26 print_info: n_ctx_orig_yarn = 4096 2025-03-07 00:24:26 print_info: rope_finetuned = unknown 2025-03-07 00:24:26 print_info: ssm_d_conv = 0 2025-03-07 00:24:26 print_info: ssm_d_inner = 0 2025-03-07 00:24:26 print_info: ssm_d_state = 0 2025-03-07 00:24:26 print_info: ssm_dt_rank = 0 2025-03-07 00:24:26 print_info: ssm_dt_b_c_rms = 0 2025-03-07 00:24:26 print_info: model type = 7B 2025-03-07 00:24:26 print_info: model params = 6.74 B 2025-03-07 00:24:26 print_info: general.name = LLaMA v2 2025-03-07 00:24:26 print_info: vocab type = SPM 2025-03-07 00:24:26 print_info: n_vocab = 32000 2025-03-07 00:24:26 print_info: n_merges = 0 2025-03-07 00:24:26 print_info: BOS token = 1 '<s>' 2025-03-07 00:24:26 print_info: EOS token = 2 '</s>' 2025-03-07 00:24:26 print_info: UNK token = 0 '<unk>' 2025-03-07 00:24:26 print_info: LF token = 13 '<0x0A>' 2025-03-07 00:24:26 print_info: EOG token = 2 '</s>' 2025-03-07 00:24:26 print_info: max token length = 48 2025-03-07 00:24:26 load_tensors: loading model tensors, this can take a while... 
(mmap = true) 2025-03-07 00:24:27 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 3257.82 MiB on device 0: cudaMalloc failed: out of memory 2025-03-07 00:24:27 llama_model_load: error loading model: unable to allocate CUDA0 buffer 2025-03-07 00:24:27 llama_model_load_from_file_impl: failed to load model 2025-03-07 00:24:27 panic: unable to load model: /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-07 00:24:27 [GIN] 2025/03/07 - 06:24:27 | 500 | 1.649155397s | 127.0.0.1 | POST "/api/generate" 2025-03-07 00:26:19 [GIN] 2025/03/07 - 06:26:19 | 200 | 25.789µs | 127.0.0.1 | HEAD "/" 2025-03-07 00:26:20 [GIN] 2025/03/07 - 06:26:20 | 404 | 6.938891ms | 127.0.0.1 | POST "/api/show" 2025-03-07 00:26:20 [GIN] 2025/03/07 - 06:26:20 | 200 | 800.989382ms | 127.0.0.1 | POST "/api/pull" 2025-03-07 00:26:35 [GIN] 2025/03/07 - 06:26:35 | 200 | 27.332µs | 127.0.0.1 | HEAD "/" 2025-03-07 00:26:35 [GIN] 2025/03/07 - 06:26:35 | 200 | 2.845839ms | 127.0.0.1 | GET "/api/tags" 2025-03-07 00:26:45 [GIN] 2025/03/07 - 06:26:45 | 200 | 27.121µs | 127.0.0.1 | HEAD "/" 2025-03-07 00:26:45 [GIN] 2025/03/07 - 06:26:45 | 200 | 705.719µs | 127.0.0.1 | GET "/api/tags" 2025-03-07 00:43:20 [GIN] 2025/03/07 - 06:43:20 | 200 | 89.941µs | 172.18.0.7 | HEAD "/" 2025-03-07 00:43:21 [GIN] 2025/03/07 - 06:43:21 | 200 | 655.722975ms | 172.18.0.7 | POST "/api/pull" 2025-03-07 00:43:21 [GIN] 2025/03/07 - 06:43:21 | 200 | 26.561µs | 172.18.0.7 | HEAD "/" 2025-03-07 00:43:21 [GIN] 2025/03/07 - 06:43:21 | 200 | 306.43805ms | 172.18.0.7 | POST "/api/pull" 2025-03-07 00:43:45 [GIN] 2025/03/07 - 06:43:45 | 200 | 2.626832ms | 172.18.0.1 | GET "/api/tags" 2025-03-07 00:43:45 [GIN] 2025/03/07 - 06:43:45 | 200 | 57.86µs | 172.18.0.1 | GET "/api/version" 2025-03-07 00:50:23 [GIN] 2025/03/07 - 06:50:23 | 200 | 6m15s | 172.18.0.1 | POST "/api/pull" 2025-03-07 00:50:24 [GIN] 2025/03/07 - 06:50:24 | 200 | 868.259µs | 172.18.0.1 | GET "/api/tags" 2025-03-07 00:51:57 [GIN] 2025/03/07 - 06:51:57 | 500 | 2.697339437s | 172.18.0.1 | POST "/api/chat" 2025-03-07 00:24:27 2025-03-07 00:24:27 goroutine 23 [running]: 2025-03-07 00:24:27 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000037d40, {0x1e, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000464020, 0x0}, ...) 
2025-03-07 00:24:27 github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x375 2025-03-07 00:24:27 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 2025-03-07 00:24:27 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7 2025-03-07 00:24:27 time=2025-03-07T06:24:27.535Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-07 00:24:27 time=2025-03-07T06:24:27.568Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2" 2025-03-07 00:24:27 time=2025-03-07T06:24:27.786Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate CUDA0 buffer\nllama_model_load_from_file_impl: failed to load model" 2025-03-07 00:24:32 time=2025-03-07T06:24:32.962Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.176344713 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-07 00:24:33 time=2025-03-07T06:24:33.212Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.425779616 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-07 00:24:33 time=2025-03-07T06:24:33.462Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.67636438 model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 2025-03-07 00:43:17 2025/03/07 06:43:17 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" 2025-03-07 00:43:17 time=2025-03-07T06:43:17.466Z level=INFO source=images.go:432 msg="total blobs: 20" 2025-03-07 00:43:17 time=2025-03-07T06:43:17.466Z level=INFO source=images.go:439 msg="total unused blobs removed: 0" 2025-03-07 00:43:17 time=2025-03-07T06:43:17.470Z level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)" 2025-03-07 00:43:17 time=2025-03-07T06:43:17.473Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs" 2025-03-07 00:43:18 time=2025-03-07T06:43:18.020Z level=INFO source=types.go:130 msg="inference compute" id=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2060" total="6.0 GiB" available="5.0 GiB" 2025-03-07 00:44:08 time=2025-03-07T06:44:08.459Z level=INFO source=download.go:176 msg="downloading dde5aa3fc5ff in 16 126 MB part(s)" 2025-03-07 00:50:16 time=2025-03-07T06:50:16.966Z level=INFO 
source=download.go:176 msg="downloading 966de95ca8a6 in 1 1.4 KB part(s)" 2025-03-07 00:50:18 time=2025-03-07T06:50:18.387Z level=INFO source=download.go:176 msg="downloading fcc5a6bec9da in 1 7.7 KB part(s)" 2025-03-07 00:50:19 time=2025-03-07T06:50:19.725Z level=INFO source=download.go:176 msg="downloading a70ff7e570d9 in 1 6.0 KB part(s)" 2025-03-07 00:50:21 time=2025-03-07T06:50:21.072Z level=INFO source=download.go:176 msg="downloading 34bb5ab01051 in 1 561 B part(s)" 2025-03-07 00:51:54 time=2025-03-07T06:51:54.677Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-51e84bf9-91ed-160d-f8af-f145f1df6904 parallel=4 available=5354029056 required="3.7 GiB" 2025-03-07 00:51:54 time=2025-03-07T06:51:54.855Z level=INFO source=server.go:97 msg="system memory" total="11.7 GiB" free="9.0 GiB" free_swap="3.0 GiB" 2025-03-07 00:51:54 time=2025-03-07T06:51:54.855Z level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[5.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB" 2025-03-07 00:51:54 time=2025-03-07T06:51:54.856Z level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --parallel 4 --port 42573" 2025-03-07 00:51:54 time=2025-03-07T06:51:54.857Z level=INFO source=sched.go:450 msg="loaded runners" count=1 2025-03-07 00:51:54 time=2025-03-07T06:51:54.857Z level=INFO source=server.go:557 msg="waiting for llama runner to start responding" 2025-03-07 00:51:54 time=2025-03-07T06:51:54.857Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error" 2025-03-07 00:51:54 time=2025-03-07T06:51:54.880Z level=INFO source=runner.go:931 msg="starting go runner" 2025-03-07 00:51:55 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2025-03-07 00:51:55 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2025-03-07 00:51:55 ggml_cuda_init: found 1 CUDA devices: 2025-03-07 00:51:55 Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5, VMM: yes 2025-03-07 00:51:55 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so 2025-03-07 00:51:55 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so 2025-03-07 00:51:55 time=2025-03-07T06:51:55.501Z level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | CUDA : ARCHS = 500,600,610,700,750,800,860,870,890,900,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=8 2025-03-07 00:51:55 time=2025-03-07T06:51:55.518Z level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:42573" 2025-03-07 00:51:55 time=2025-03-07T06:51:55.611Z level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model" 2025-03-07 00:51:55 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 2060) - 5106 MiB free 
2025-03-07 00:51:55 llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest)) 2025-03-07 00:51:55 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 2025-03-07 00:51:55 llama_model_loader: - kv 0: general.architecture str = llama 2025-03-07 00:51:55 llama_model_loader: - kv 1: general.type str = model 2025-03-07 00:51:55 llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct 2025-03-07 00:51:55 llama_model_loader: - kv 3: general.finetune str = Instruct 2025-03-07 00:51:55 llama_model_loader: - kv 4: general.basename str = Llama-3.2 2025-03-07 00:51:55 llama_model_loader: - kv 5: general.size_label str = 3B 2025-03-07 00:51:55 llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... 2025-03-07 00:51:55 llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... 2025-03-07 00:51:55 llama_model_loader: - kv 8: llama.block_count u32 = 28 2025-03-07 00:51:55 llama_model_loader: - kv 9: llama.context_length u32 = 131072 2025-03-07 00:51:55 llama_model_loader: - kv 10: llama.embedding_length u32 = 3072 2025-03-07 00:51:55 llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192 2025-03-07 00:51:55 llama_model_loader: - kv 12: llama.attention.head_count u32 = 24 2025-03-07 00:51:55 llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8 2025-03-07 00:51:55 llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000 2025-03-07 00:51:55 llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 2025-03-07 00:51:55 llama_model_loader: - kv 16: llama.attention.key_length u32 = 128 2025-03-07 00:51:55 llama_model_loader: - kv 17: llama.attention.value_length u32 = 128 2025-03-07 00:51:55 llama_model_loader: - kv 18: general.file_type u32 = 15 2025-03-07 00:51:55 llama_model_loader: - kv 19: llama.vocab_size u32 = 128256 2025-03-07 00:51:55 llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128 2025-03-07 00:51:55 llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2 2025-03-07 00:51:55 llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe 2025-03-07 00:51:55 llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... 2025-03-07 00:51:55 llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 2025-03-07 00:51:55 llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... 2025-03-07 00:51:55 llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000 2025-03-07 00:51:55 llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009 2025-03-07 00:51:55 llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... 
2025-03-07 00:51:55 llama_model_loader: - kv 29: general.quantization_version u32 = 2 2025-03-07 00:51:55 llama_model_loader: - type f32: 58 tensors 2025-03-07 00:51:55 llama_model_loader: - type q4_K: 168 tensors 2025-03-07 00:51:55 llama_model_loader: - type q6_K: 29 tensors 2025-03-07 00:51:55 print_info: file format = GGUF V3 (latest) 2025-03-07 00:51:55 print_info: file type = Q4_K - Medium 2025-03-07 00:51:55 print_info: file size = 1.87 GiB (5.01 BPW) 2025-03-07 00:51:56 load: special tokens cache size = 256 2025-03-07 00:51:56 load: token to piece cache size = 0.7999 MB 2025-03-07 00:51:56 print_info: arch = llama 2025-03-07 00:51:56 print_info: vocab_only = 0 2025-03-07 00:51:56 print_info: n_ctx_train = 131072 2025-03-07 00:51:56 print_info: n_embd = 3072 2025-03-07 00:51:56 print_info: n_layer = 28 2025-03-07 00:51:56 print_info: n_head = 24 2025-03-07 00:51:56 print_info: n_head_kv = 8 2025-03-07 00:51:56 print_info: n_rot = 128 2025-03-07 00:51:56 print_info: n_swa = 0 2025-03-07 00:51:56 print_info: n_embd_head_k = 128 2025-03-07 00:51:56 print_info: n_embd_head_v = 128 2025-03-07 00:51:56 print_info: n_gqa = 3 2025-03-07 00:51:56 print_info: n_embd_k_gqa = 1024 2025-03-07 00:51:56 print_info: n_embd_v_gqa = 1024 2025-03-07 00:51:56 print_info: f_norm_eps = 0.0e+00 2025-03-07 00:51:56 print_info: f_norm_rms_eps = 1.0e-05 2025-03-07 00:51:56 print_info: f_clamp_kqv = 0.0e+00 2025-03-07 00:51:56 print_info: f_max_alibi_bias = 0.0e+00 2025-03-07 00:51:56 print_info: f_logit_scale = 0.0e+00 2025-03-07 00:51:56 print_info: n_ff = 8192 2025-03-07 00:51:56 print_info: n_expert = 0 2025-03-07 00:51:56 print_info: n_expert_used = 0 2025-03-07 00:51:56 print_info: causal attn = 1 2025-03-07 00:51:56 print_info: pooling type = 0 2025-03-07 00:51:56 print_info: rope type = 0 2025-03-07 00:51:56 print_info: rope scaling = linear 2025-03-07 00:51:56 print_info: freq_base_train = 500000.0 2025-03-07 00:51:56 print_info: freq_scale_train = 1 2025-03-07 00:51:56 print_info: n_ctx_orig_yarn = 131072 2025-03-07 00:51:56 print_info: rope_finetuned = unknown 2025-03-07 00:51:56 print_info: ssm_d_conv = 0 2025-03-07 00:51:56 print_info: ssm_d_inner = 0 2025-03-07 00:51:56 print_info: ssm_d_state = 0 2025-03-07 00:51:56 print_info: ssm_dt_rank = 0 2025-03-07 00:51:56 print_info: ssm_dt_b_c_rms = 0 2025-03-07 00:51:56 print_info: model type = 3B 2025-03-07 00:51:56 print_info: model params = 3.21 B 2025-03-07 00:51:56 print_info: general.name = Llama 3.2 3B Instruct 2025-03-07 00:51:56 print_info: vocab type = BPE 2025-03-07 00:51:56 print_info: n_vocab = 128256 2025-03-07 00:51:56 print_info: n_merges = 280147 2025-03-07 00:51:56 print_info: BOS token = 128000 '<|begin_of_text|>' 2025-03-07 00:51:56 print_info: EOS token = 128009 '<|eot_id|>' 2025-03-07 00:51:56 print_info: EOT token = 128009 '<|eot_id|>' 2025-03-07 00:51:56 print_info: EOM token = 128008 '<|eom_id|>' 2025-03-07 00:51:56 print_info: LF token = 198 'Ċ' 2025-03-07 00:51:56 print_info: EOG token = 128008 '<|eom_id|>' 2025-03-07 00:51:56 print_info: EOG token = 128009 '<|eot_id|>' 2025-03-07 00:51:56 print_info: max token length = 256 2025-03-07 00:51:56 load_tensors: loading model tensors, this can take a while... 
(mmap = true) 2025-03-07 00:51:56 load_tensors: offloading 28 repeating layers to GPU 2025-03-07 00:51:56 load_tensors: offloading output layer to GPU 2025-03-07 00:51:56 load_tensors: offloaded 29/29 layers to GPU 2025-03-07 00:51:56 load_tensors: CUDA0 model buffer size = 1918.35 MiB 2025-03-07 00:51:56 load_tensors: CPU_Mapped model buffer size = 308.23 MiB 2025-03-07 00:51:56 llama_init_from_model: n_seq_max = 4 2025-03-07 00:51:56 llama_init_from_model: n_ctx = 8192 2025-03-07 00:51:56 llama_init_from_model: n_ctx_per_seq = 2048 2025-03-07 00:51:56 llama_init_from_model: n_batch = 2048 2025-03-07 00:51:56 llama_init_from_model: n_ubatch = 512 2025-03-07 00:51:56 llama_init_from_model: flash_attn = 0 2025-03-07 00:51:56 llama_init_from_model: freq_base = 500000.0 2025-03-07 00:51:56 llama_init_from_model: freq_scale = 1 2025-03-07 00:51:56 llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized 2025-03-07 00:51:56 llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1 2025-03-07 00:51:56 ggml_backend_cuda_buffer_type_alloc_buffer: allocating 896.00 MiB on device 0: cudaMalloc failed: out of memory 2025-03-07 00:51:56 llama_kv_cache_init: failed to allocate buffer for kv cache 2025-03-07 00:51:56 llama_init_from_model: llama_kv_cache_init() failed for self-attention cache 2025-03-07 00:51:56 panic: unable to create llama context 2025-03-07 00:51:56 2025-03-07 00:51:56 goroutine 25 [running]: 2025-03-07 00:51:56 github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0001adcb0, {0x1d, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000696080, 0x0}, ...) 2025-03-07 00:51:56 github.com/ollama/ollama/runner/llamarunner/runner.go:857 +0x369 2025-03-07 00:51:56 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 2025-03-07 00:51:56 github.com/ollama/ollama/runner/llamarunner/runner.go:966 +0xcb7 2025-03-07 00:51:57 time=2025-03-07T06:51:57.067Z level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2" 2025-03-07 00:51:57 time=2025-03-07T06:51:57.118Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory\nllama_kv_cache_init: failed to allocate buffer for kv cache" 2025-03-07 00:52:02 time=2025-03-07T06:52:02.287Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.169429429 model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff 2025-03-07 00:52:02 time=2025-03-07T06:52:02.537Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.419052711 model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff 2025-03-07 00:52:02 time=2025-03-07T06:52:02.787Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.668807124 model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff i have this problem , my pc have 2060 rtx , ryzen 2700 and 32 ram
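
In the offload lines above, the estimate (memory.required.partial="4.9 GiB" against memory.available="[5.0 GiB]") leaves almost no headroom, so anything else using the GPU can tip the cudaMalloc into OOM. A minimal way to check what is actually free at load time, using standard nvidia-smi query flags (not taken from this issue, just a diagnostic sketch):

```
# Show total / used / free VRAM as reported by the driver
nvidia-smi --query-gpu=name,memory.total,memory.used,memory.free --format=csv

# Watch usage once per second while ollama loads the model
nvidia-smi --query-gpu=memory.used,memory.free --format=csv -l 1
```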

@rick-github commented on GitHub (Mar 7, 2025):

```
OLLAMA_FLASH_ATTENTION:false
```

Flash attention is not enabled.

```
OLLAMA_GPU_OVERHEAD:0
```

GPU overhead is not set.

```
OLLAMA_NUM_PARALLEL:0
```

Parallelism is not set.

```
layers.requested=-1 layers.model=29 layers.offload=29
```

`num_gpu` is not set.

Some general steps for dealing with OOMs [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288).
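
For reference, a minimal sketch of how these server variables could be set before restarting ollama. The exact mechanism (systemd unit, docker-compose environment block, Windows system variables) depends on the install, and the values here are illustrative assumptions, not settings from this issue:

```
# Illustrative values only; set these in the environment of the ollama server process.
export OLLAMA_FLASH_ATTENTION=1    # enable flash attention, reducing attention VRAM use
export OLLAMA_KV_CACHE_TYPE=q8_0   # optionally quantize the KV cache (requires flash attention)
export OLLAMA_NUM_PARALLEL=1       # a single parallel sequence keeps the KV cache small
ollama serve
```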


@infinitymask8 commented on GitHub (Mar 7, 2025):

https://github.com/coleam00/ai-agents-masterclass/tree/main/local-ai-packaged

Hi, sorry, I'm pretty new to this. I downloaded my ollama from the link above. If you can, please explain how to do it step by step in text. If possible I would appreciate it very much, since I have not found help in my language, but I can understand English.


@rick-github commented on GitHub (Mar 9, 2025):

[`OLLAMA_FLASH_ATTENTION`](https://github.com/ollama/ollama/blob/4614fafae0ee58af5b9d04ec4b8c2eb3846274da/envconfig/config.go#L242), [`OLLAMA_GPU_OVERHEAD`](https://github.com/ollama/ollama/blob/4614fafae0ee58af5b9d04ec4b8c2eb3846274da/envconfig/config.go#L244) and [`OLLAMA_NUM_PARALLEL`](https://github.com/ollama/ollama/blob/4614fafae0ee58af5b9d04ec4b8c2eb3846274da/envconfig/config.go#L254) are ollama configuration variables that are set in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server). For example, in the configuration file for ollama in local-ai-packaged, set `OLLAMA_FLASH_ATTENTION=1` and then restart the ollama server.

`num_gpu` is a model configuration variable that tells ollama how many layers of the model to load into the GPU. Sometimes ollama overestimates how many layers will fit, which causes OOM (out of memory) errors that crash the runner. If you look at the logs, you will see `load_tensors: offloaded xx/yy layers to GPU`, where `yy` is the total number of layers in the model and `xx` is the number of layers that ollama is loading into the GPU. To reduce OOMs, you can set a lower value for `xx` as described [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650).
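
As a concrete sketch (the model name is just an example), `num_gpu` can be pinned below ollama's estimate either interactively or per request through the API:

```
# Interactive: reload the model with fewer GPU layers than the estimated 29/29
ollama run llama3.2
>>> /set parameter num_gpu 24

# Per request via the REST API: options.num_gpu overrides the layer count
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "hello",
  "options": { "num_gpu": 24 }
}'
```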


Reference: github-starred/ollama#6236