[GH-ISSUE #6257] Wrong output with the new Llama3.1 and llama3-groq-tool-use pull #50428

Closed
opened 2026-04-28 15:48:24 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @Goekdeniz-Guelmez on GitHub (Aug 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6257

What is the issue?

So I just pulled the new Llama3.1 model and I get wrong outputs:

(base)  ~/Desktop/J.O.S.I.E.v3.5/ ollama run llama3.1
>>> hello
se// ofam oroomex b hades (--et mim n            {adctadic of		ad re            Snd// lentam in p re d {ur n
 ned
            =//im halouadsead and h {ing-- tour ( and ofan;
 f lion (ndil o in Sam            re);
 tois);
 read andam to to
 the mentad
ic);
 toexroidas toilim d { S in nomesaram w { b            =^C

>>>
Use Ctrl + d or /bye to exit.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.3.4

Update!

Repull the model: `ollama pull llama3.1`.

Basically, the way the tensors were encoded was causing memory to be over-allocated.
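For anyone hitting this, a minimal sketch of the repull and smoke test (assuming the default `llama3.1` tag; the one-shot prompt form of `ollama run` is standard CLI usage):

```sh
# Drop the cached blobs for the affected tag, then fetch the re-encoded ones:
ollama rm llama3.1
ollama pull llama3.1

# Smoke test; a coherent reply means the fixed weights are in place:
ollama run llama3.1 "Say hello in one short sentence."
```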

GiteaMirror added the bug label 2026-04-28 15:48:24 -05:00
Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

Removing and repulling didn't fix it either:

(base)  ~/desktop/ ollama rm llama3.1
deleted 'llama3.1'
(base)  ~/desktop/ ollama run llama3.1
pulling manifest
pulling 4f6dc812262a... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.7 GB
pulling 11ce4ee3e170... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.7 KB
pulling 0ba8f0e314b4... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  12 KB
pulling 56bb8bd477a5... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   96 B
pulling 7a92f2c5af7a... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> hello
-<1,#75**$&++*D6+!6H?C5:&->6>57F7;D'H/-<&>5?=71+D157,58<49.35F1$E-G:8/*;H%$14<,'&??'F356F16B+"!2,4GE$):%??81=$45^C

>>>
Use Ctrl + d or /bye to exit.
Author
Owner

@rick-github commented on GitHub (Aug 8, 2024):

There are other reports that llama3.1 on a Mac is broken (#6249, #6246). Have you tried a different quantization? Can you try an older version of ollama? Can you post server logs?
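A sketch of how to gather those on macOS (the `~/.ollama/logs/server.log` path is where the Mac app writes its server log; when running `ollama serve` by hand, the same output goes to stderr instead):

```sh
# Server log from the macOS app:
cat ~/.ollama/logs/server.log

# Optionally restart with verbose logging before reproducing the issue:
OLLAMA_DEBUG=1 ollama serve

# Try a different quantization of the same model:
ollama run llama3.1:8b-instruct-q4_K_M

# Note the installed version for the report:
ollama -v
```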

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

I tried `llama3-groq-tool-use` too and hit the same problem, but custom Llama 3.1 models like `neural-daredevil` or `llama3.1-abliterated` work as intended. `ollama run llama3.1:8b-instruct-q4_K_M` and the `q2` quantization also work as intended. This only started failing today, after pulling the latest llama shards.
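As a stopgap, the working tag can be pinned explicitly, and `ollama show --modelfile` makes it easy to compare which blob each tag resolves to (a sketch; the digest appears in the FROM line of the printed Modelfile):

```sh
# Pin the quantization that still works:
ollama run llama3.1:8b-instruct-q4_K_M

# Compare which blobs the default and explicit tags point at:
ollama show llama3.1 --modelfile | head -n 5
ollama show llama3.1:8b-instruct-q4_K_M --modelfile | head -n 5
```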

Author
Owner

@rick-github commented on GitHub (Aug 8, 2024):

`llama3-groq-tool-use` hasn't been updated for a couple of weeks, so if it was working earlier and not now, that would rule out a model change as the cause. Can you try with ollama v0.3.3 or earlier?
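If it helps, a sketch of downgrading on macOS (the `Ollama-darwin.zip` asset name is assumed from the usual GitHub release layout; check the release page if it differs):

```sh
# Download the v0.3.3 macOS app from GitHub releases:
curl -LO https://github.com/ollama/ollama/releases/download/v0.3.3/Ollama-darwin.zip
unzip Ollama-darwin.zip
# Quit the current Ollama app, move the unzipped Ollama.app into place,
# launch it, then confirm the version before retrying:
ollama -v
ollama run llama3.1
```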

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

Here are the logs, first with `ollama run llama3.1` and then with `ollama run llama3.1:8b-instruct-q4_K_M`:

(base)  ~/desktop/ ollama serve
2024/08/08 14:06:58 routes.go:1108: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/gokdenizgulmez/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR:]"
time=2024-08-08T14:06:58.193+02:00 level=INFO source=images.go:781 msg="total blobs: 143"
time=2024-08-08T14:06:58.218+02:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-08T14:06:58.221+02:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.3.4)"
time=2024-08-08T14:06:58.222+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/mx/t2rzvw2n38z843_jd10k4czr0000gn/T/ollama4020182643/runners
time=2024-08-08T14:06:58.252+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-08-08T14:06:58.334+02:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="5.3 GiB" available="5.3 GiB"
[GIN] 2024/08/08 - 14:07:05 | 200 |     762.875µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/08 - 14:07:05 | 200 |   23.656625ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-08T14:07:05.511+02:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/gokdenizgulmez/.ollama/models/blobs/sha256-4f6dc812262ac5e1a74791c2a86310ebba1aa1804fa3cd1c216f5547a620d2f2 gpu=0 parallel=1 available=5726633984 required="5.1 GiB"
time=2024-08-08T14:07:05.512+02:00 level=INFO source=memory.go:309 msg="offload to metal" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[5.3 GiB]" memory.required.full="5.1 GiB" memory.required.partial="5.1 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="164.0 MiB"
time=2024-08-08T14:07:05.513+02:00 level=INFO source=server.go:392 msg="starting llama server" cmd="/var/folders/mx/t2rzvw2n38z843_jd10k4czr0000gn/T/ollama4020182643/runners/metal/ollama_llama_server --model /Users/gokdenizgulmez/.ollama/models/blobs/sha256-4f6dc812262ac5e1a74791c2a86310ebba1aa1804fa3cd1c216f5547a620d2f2 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 1 --port 56544"
time=2024-08-08T14:07:05.555+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-08T14:07:05.555+02:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-08T14:07:05.555+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3535 commit="1e6f6554" tid="0x1f4264c00" timestamp=1723118825
INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x1f4264c00" timestamp=1723118825 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="56544" tid="0x1f4264c00" timestamp=1723118825
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /Users/gokdenizgulmez/.ollama/models/blobs/sha256-4f6dc812262ac5e1a74791c2a86310ebba1aa1804fa3cd1c216f5547a620d2f2 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-08-08T14:07:06.059+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.27 MiB
ggml_backend_metal_log_allocated_size: allocated buffer, size =  4096.00 MiB, ( 4096.06 /  5461.34)

ggml_backend_metal_log_allocated_size: allocated buffer, size =   752.83 MiB, ( 4848.89 /  5461.34)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =   281.81 MiB
llm_load_tensors:      Metal buffer size =  4437.81 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1
ggml_metal_init: picking default device: Apple M1
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  =  5726.63 MB
llama_kv_cache_init:      Metal KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
llama_new_context_with_model:      Metal compute buffer size =   258.50 MiB
llama_new_context_with_model:        CPU compute buffer size =    12.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
ggml_metal_graph_compute: command buffer 3 failed with status 5
error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
INFO [main] model loaded | tid="0x1f4264c00" timestamp=1723118830
time=2024-08-08T14:07:10.916+02:00 level=INFO source=server.go:631 msg="llama runner started in 5.36 seconds"
[GIN] 2024/08/08 - 14:07:10 | 200 |    5.4359065s |       127.0.0.1 | POST     "/api/chat"
ggml_metal_graph_compute: command buffer 3 failed with status 5
error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
[... the two lines above repeat many more times for the remainder of the response ...]
[GIN] 2024/08/08 - 14:07:33 | 200 |  11.93596925s |       127.0.0.1 | POST     "/api/chat"
ggml_metal_graph_compute: command buffer 3 failed with status 5
error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
[GIN] 2024/08/08 - 14:07:49 | 200 |    1.414041ms |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/08 - 14:07:49 | 200 |   32.690334ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/08/08 - 14:07:57 | 200 |      16.708µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/08 - 14:07:57 | 200 |   20.189666ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-08T14:07:58.030+02:00 level=INFO source=sched.go:503 msg="updated VRAM based on existing loaded models" gpu=0 library=metal total="5.3 GiB" available="248.3 MiB"
time=2024-08-08T14:07:58.050+02:00 level=INFO source=memory.go:309 msg="offload to metal" layers.requested=-1 layers.model=33 layers.offload=32 layers.split="" memory.available="[5.3 GiB]" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="164.0 MiB"
time=2024-08-08T14:07:58.051+02:00 level=INFO source=server.go:392 msg="starting llama server" cmd="/var/folders/mx/t2rzvw2n38z843_jd10k4czr0000gn/T/ollama4020182643/runners/metal/ollama_llama_server --model /Users/gokdenizgulmez/.ollama/models/blobs/sha256-7a9cc88184910ffdcf56a714aa642bbd87e607cd7f20ae9e080cd23dc5cd7031 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 32 --no-mmap --parallel 1 --port 56879"
time=2024-08-08T14:07:58.052+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-08T14:07:58.052+02:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-08T14:07:58.052+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3535 commit="1e6f6554" tid="0x1f4264c00" timestamp=1723118878
INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x1f4264c00" timestamp=1723118878 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="56879" tid="0x1f4264c00" timestamp=1723118878
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /Users/gokdenizgulmez/.ollama/models/blobs/sha256-7a9cc88184910ffdcf56a714aa642bbd87e607cd7f20ae9e080cd23dc5cd7031 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
time=2024-08-08T14:07:58.304+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloaded 32/33 layers to GPU
llm_load_tensors:        CPU buffer size =   692.80 MiB
llm_load_tensors:      Metal buffer size =  3992.51 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1
ggml_metal_init: picking default device: Apple M1
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  =  5726.63 MB
llama_kv_cache_init:      Metal KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
llama_new_context_with_model:      Metal compute buffer size =   164.00 MiB
llama_new_context_with_model:        CPU compute buffer size =   250.50 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 4
INFO [main] model loaded | tid="0x1f4264c00" timestamp=1723118882
time=2024-08-08T14:08:02.897+02:00 level=INFO source=server.go:631 msg="llama runner started in 4.84 seconds"
[GIN] 2024/08/08 - 14:08:02 | 200 |  4.899833583s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/08/08 - 14:08:11 | 200 |  4.150124625s |       127.0.0.1 | POST     "/api/chat"

It looks like the llama3.1 model (4.9 GB) isn't being loaded because there isn't enough RAM, but the other one (also 4.9 GB) loads into RAM fine.
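The figures in the log line up with that reading. A quick sanity check of the reported KV-cache size and the total Metal footprint, using the hyperparameters from the `llm_load_print_meta` dump above (a sketch in plain shell arithmetic, nothing ollama-specific):

```sh
# KV cache = 2 (K and V) * n_layer * n_ctx * n_embd_k_gqa * 2 bytes (f16):
echo "$(( 2 * 32 * 2048 * 1024 * 2 / 1024 / 1024 )) MiB"   # -> 256 MiB, as logged

# Buffers reported for the failing q4_0 load vs. the Metal working-set limit:
echo "4437.81 + 256.00 + 258.50" | bc                      # ~4952 MiB vs 5726.63 MiB
```

Nominally that sum fits under the limit, so the repeated OOMs suggest allocations beyond what the log reports, which matches the over-allocation described in the update above.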

ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed 
with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory 
(00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory 
(00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory 
(00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory 
(00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) [GIN] 2024/08/08 - 14:07:33 | 200 | 11.93596925s | 127.0.0.1 | POST "/api/chat" ggml_metal_graph_compute: command buffer 3 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) [GIN] 2024/08/08 - 14:07:49 | 200 | 1.414041ms | 127.0.0.1 | HEAD "/" [GIN] 2024/08/08 - 14:07:49 | 200 | 32.690334ms | 127.0.0.1 | GET "/api/tags" [GIN] 2024/08/08 - 14:07:57 | 200 | 16.708µs | 127.0.0.1 | HEAD "/" [GIN] 2024/08/08 - 14:07:57 | 200 | 20.189666ms | 127.0.0.1 | POST "/api/show" time=2024-08-08T14:07:58.030+02:00 level=INFO source=sched.go:503 msg="updated VRAM based on existing loaded models" gpu=0 library=metal total="5.3 GiB" available="248.3 MiB" time=2024-08-08T14:07:58.050+02:00 level=INFO source=memory.go:309 msg="offload to metal" layers.requested=-1 layers.model=33 layers.offload=32 layers.split="" memory.available="[5.3 GiB]" memory.required.full="5.3 GiB" memory.required.partial="4.9 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="164.0 MiB" time=2024-08-08T14:07:58.051+02:00 level=INFO source=server.go:392 msg="starting llama server" cmd="/var/folders/mx/t2rzvw2n38z843_jd10k4czr0000gn/T/ollama4020182643/runners/metal/ollama_llama_server --model /Users/gokdenizgulmez/.ollama/models/blobs/sha256-7a9cc88184910ffdcf56a714aa642bbd87e607cd7f20ae9e080cd23dc5cd7031 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 32 --no-mmap --parallel 1 --port 56879" time=2024-08-08T14:07:58.052+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1 time=2024-08-08T14:07:58.052+02:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding" time=2024-08-08T14:07:58.052+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error" INFO [main] build info | build=3535 commit="1e6f6554" tid="0x1f4264c00" timestamp=1723118878 INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x1f4264c00" timestamp=1723118878 total_threads=8 INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="56879" tid="0x1f4264c00" timestamp=1723118878 llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /Users/gokdenizgulmez/.ollama/models/blobs/sha256-7a9cc88184910ffdcf56a714aa642bbd87e607cd7f20ae9e080cd23dc5cd7031 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... 
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... llama_model_loader: - kv 9: llama.block_count u32 = 32 llama_model_loader: - kv 10: llama.context_length u32 = 131072 llama_model_loader: - kv 11: llama.embedding_length u32 = 4096 llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 13: llama.attention.head_count u32 = 32 llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 17: general.file_type u32 = 15 llama_model_loader: - kv 18: llama.vocab_size u32 = 128256 llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q4_K: 193 tensors llama_model_loader: - type q6_K: 33 tensors time=2024-08-08T14:07:58.304+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model" llm_load_vocab: special tokens cache size = 256 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 8B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 
8.03 B llm_load_print_meta: model size = 4.58 GiB (4.89 BPW) llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 llm_load_tensors: ggml ctx size = 0.27 MiB llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloaded 32/33 layers to GPU llm_load_tensors: CPU buffer size = 692.80 MiB llm_load_tensors: Metal buffer size = 3992.51 MiB llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M1 ggml_metal_init: picking default device: Apple M1 ggml_metal_init: using embedded metal library ggml_metal_init: GPU name: Apple M1 ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction support = true ggml_metal_init: simdgroup matrix mul. support = true ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 5726.63 MB llama_kv_cache_init: Metal KV buffer size = 256.00 MiB llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB llama_new_context_with_model: CPU output buffer size = 0.50 MiB llama_new_context_with_model: Metal compute buffer size = 164.00 MiB llama_new_context_with_model: CPU compute buffer size = 250.50 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 4 INFO [main] model loaded | tid="0x1f4264c00" timestamp=1723118882 time=2024-08-08T14:08:02.897+02:00 level=INFO source=server.go:631 msg="llama runner started in 4.84 seconds" [GIN] 2024/08/08 - 14:08:02 | 200 | 4.899833583s | 127.0.0.1 | POST "/api/chat" [GIN] 2024/08/08 - 14:08:11 | 200 | 4.150124625s | 127.0.0.1 | POST "/api/chat" ``` It looks like the llama3.1 model (4.9 GB) is not beeng loaded because of not enough RAM but when loading the other one (also 4.9 GB) it load it into the RAM.
Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

> llama3-groq-tool-use hasn't been updated for a couple of weeks, so if it was working earlier and not now, that would rule out a model change as the cause. Can you try with ollama v0.3.3 or earlier?

The same issue with v0.3.3.

Is it possible that some layers are not loaded?

Author
Owner

@rick-github commented on GitHub (Aug 8, 2024):

According to the logs, all layers are offloaded. I'm not sure how memory is shared between the system and the GPU on a Mac, but what happens when you limit the layer offloading:

```
ollama run llama3.1
/set parameter num_gpu 30
hello
```
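The same limit can also be applied per request through the REST API's `options` field (a minimal sketch against the default local server; the prompt is just an example):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "hello",
  "options": { "num_gpu": 30 }
}'
```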
Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

It worked at first, but after closing and reopening Ollama, running the same command no longer works.

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

Using the very same model via `ollama run llama3.1:8b-instruct-q4_K_M` works perfectly fine; it just doesn't work with `ollama run llama3.1`.
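One way to confirm the two tags resolve to different quantizations is to compare their Modelfiles; the `FROM` line shows the underlying blob, and the logs above already show the default `llama3.1` tag is Q4_0 (`general.file_type = 2`) while the explicit tag is Q4_K_M (`general.file_type = 15`):

```
ollama show llama3.1 --modelfile
ollama show llama3.1:8b-instruct-q4_K_M --modelfile
```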

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

I also tried inference with a GGUF model downloaded from Hugging Face; the template is just copied from yours, and this also works perfectly fine. This is the model I downloaded: `ggml-model-Q4_K_M.gguf` from https://huggingface.co/chatpdflocal/llama3.1-8b-gguf

```
FROM ./ggml-model-Q4_K_M.gguf

TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}<|eot_id|>
{{- end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}"""


PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
```
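For completeness, assuming the above is saved as `Modelfile` next to the downloaded GGUF, the model would be built and run with (`llama3.1-hf` is just a placeholder name):

```
ollama create llama3.1-hf -f Modelfile
ollama run llama3.1-hf
```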
Author
Owner

@rick-github commented on GitHub (Aug 8, 2024):

Yeah, it seems the Q4_0 quant doesn't work well on Macs. I have no problems with it on a Linux machine with an NVIDIA RTX 4070.

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

That's interesting. Should I open an issue on llama.cpp, since this is a quant problem?

Author
Owner

@MaxJa4 commented on GitHub (Aug 8, 2024):

Just tried it with `llama3.1` and `llama3.1:8b-instruct-q8_0` on my MacBook (M3 Pro); both worked fine. I also tried it with flash attention on and off, and both worked flawlessly.
Must be specific to M1 chips then, I guess?
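For anyone else comparing runs: flash attention is toggled via the server environment variable that also shows up in the logs above, e.g.:

```
OLLAMA_FLASH_ATTENTION=1 ollama serve   # flash attention on
OLLAMA_FLASH_ATTENTION=0 ollama serve   # off (the default here)
```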

Author
Owner

@rick-github commented on GitHub (Aug 8, 2024):

@MaxJa4 What version of ollama are you using, and how much (V)RAM do you have?

I don't think it's solely a quant problem, because then I would expect to see problems on other platforms as well. So far, the reports affect only a subset of Mac users. So it would be good if we could find out what the affected machines have in common and rule out causes based on success reports like MaxJa4's.
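If anyone else wants to chime in with a data point, something like this captures the relevant details on macOS (a rough sketch):

```
ollama -v                            # ollama version
sysctl hw.memsize                    # total unified memory in bytes
system_profiler SPDisplaysDataType   # chip / GPU details
ollama ps                            # loaded models and CPU/GPU split
```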

Author
Owner

@MaxJa4 commented on GitHub (Aug 8, 2024):

> @MaxJa4 What version of ollama are you using, and how much (V)RAM do you have?

Ollama v0.3.4; "VRAM" is reported by Ollama as 12.0 GiB (Metal).

Update: Added logs.

Debug Logs
```
2024/08/08 19:55:53 routes.go:1108: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/<user>/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR:]"
time=2024-08-08T19:55:53.502+02:00 level=INFO source=images.go:781 msg="total blobs: 31"
time=2024-08-08T19:55:53.503+02:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-08T19:55:53.504+02:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.3.4)"
time=2024-08-08T19:55:53.504+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/kj/44q0qyhs1djf_271qjh684g80000gn/T/ollama2825508986/runners
time=2024-08-08T19:55:53.529+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-08-08T19:55:53.548+02:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="12.0 GiB" available="12.0 GiB"
time=2024-08-08T19:56:01.224+02:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/<user>/.ollama/models/blobs/sha256-4f6dc812262ac5e1a74791c2a86310ebba1aa1804fa3cd1c216f5547a620d2f2 gpu=0 parallel=4 available=12884918272 required="6.3 GiB"
time=2024-08-08T19:56:01.224+02:00 level=INFO source=memory.go:309 msg="offload to metal" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[12.0 GiB]" memory.required.full="6.3 GiB" memory.required.partial="6.3 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.3 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="560.0 MiB"
time=2024-08-08T19:56:01.225+02:00 level=INFO source=server.go:392 msg="starting llama server" cmd="/var/folders/kj/44q0qyhs1djf_271qjh684g80000gn/T/ollama2825508986/runners/metal/ollama_llama_server --model /Users/<user>/.ollama/models/blobs/sha256-4f6dc812262ac5e1a74791c2a86310ebba1aa1804fa3cd1c216f5547a620d2f2 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 59201"
time=2024-08-08T19:56:01.229+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-08T19:56:01.229+02:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-08T19:56:01.229+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /Users/<user>/.ollama/models/blobs/sha256-4f6dc812262ac5e1a74791c2a86310ebba1aa1804fa3cd1c216f5547a620d2f2 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
time=2024-08-08T19:56:01.480+02:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.27 MiB
ggml_backend_metal_log_allocated_size: allocated buffer, size =  4437.83 MiB, ( 4437.91 / 12288.02)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =   281.81 MiB
llm_load_tensors:      Metal buffer size =  4437.81 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Pro
ggml_metal_init: picking default device: Apple M3 Pro
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 12884.92 MB
llama_kv_cache_init:      Metal KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
llama_new_context_with_model:      Metal compute buffer size =   560.00 MiB
llama_new_context_with_model:        CPU compute buffer size =    24.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
time=2024-08-08T19:56:01.983+02:00 level=INFO source=server.go:631 msg="llama runner started in 0.75 seconds"
```

PR which seems to aim to address your issue: #6260
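One more observation on the memory budgets in these logs: Metal's `recommendedMaxWorkingSetSize` appears to be almost exactly two thirds of unified memory in both cases — 2/3 × 8 GiB ≈ 5.3 GiB on what looks like an 8 GB M1, and 2/3 × 18 GiB = 12 GiB on this M3 Pro. That would explain the difference in headroom: the scheduler's 6.3 GiB requirement here fits easily within 12 GiB, while the 5.1 GiB required on the M1 sits right at the edge of its 5.3 GiB budget.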

support = true ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 12884.92 MB llama_kv_cache_init: Metal KV buffer size = 1024.00 MiB llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB llama_new_context_with_model: CPU output buffer size = 2.02 MiB llama_new_context_with_model: Metal compute buffer size = 560.00 MiB llama_new_context_with_model: CPU compute buffer size = 24.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 2 time=2024-08-08T19:56:01.983+02:00 level=INFO source=server.go:631 msg="llama runner started in 0.75 seconds" ``` </details> PR which seems to aim to address your issue: #6260
Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

Loading the same model from a GGUF file with `ollama create ...` also works fine, which is weird. I'll look into the PR.
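
For context, the `ollama create` workaround described here (loading the same weights from a local GGUF instead of the registry tag) looks roughly like this; the GGUF filename and the `llama3.1-local` name are placeholders, not anything from this thread:

```
# Modelfile contents (single line, pointing at a local GGUF):
#   FROM ./Meta-Llama-3.1-8B-Instruct-Q4_0.gguf
ollama create llama3.1-local -f Modelfile
ollama run llama3.1-local
```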

Edit:

The PR makes sense; I'll close this once the fix is confirmed working.

Thank you!!!!!

Author
Owner

@rick-github commented on GitHub (Aug 8, 2024):

The reporter of #6246 resolved their problem by moving to a machine with more RAM, so it's likely just a resource problem. https://github.com/ollama/ollama/pull/6260 should resolve it.

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

I thought that at first too, but I can run the very same model on my Mac when I load it from a GGUF file myself, just not under the `llama3.1` tag. Can that still be a memory issue?

Author
Owner

@rick-github commented on GitHub (Aug 8, 2024):

It could be. If the RAM required to load the model, the KV cache, and the compute graph is close to the limit of available RAM, then the slightly different calculations Ollama does for each path may make the difference between the model loading cleanly, loading but returning bad results, or failing to load at all.
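
To put rough numbers on that, here is a back-of-the-envelope headroom check using the figures from the debug logs posted above; it mirrors the idea, not Ollama's actual accounting in `memory.go`:

```python
# Rough fit check with the figures from the debug logs above.
# Illustrative only; Ollama's real bookkeeping lives in memory.go.
GIB = 1024 ** 3

available = 12.0 * GIB        # Metal recommendedMaxWorkingSetSize
weights   = 4.7 * GIB         # memory.weights.total
kv_cache  = 1.0 * GIB         # memory.required.kv (ctx 8192, f16 K/V)
graph     = 560 * 1024 ** 2   # memory.graph.full (560 MiB)

required = weights + kv_cache + graph
print(f"required ~ {required / GIB:.2f} GiB of {available / GIB:.1f} GiB")
# -> required ~ 6.25 GiB of 12.0 GiB (the log rounds this to 6.3 GiB with
#    extra overhead). When required creeps up toward available, small
#    differences in these estimates decide whether the load succeeds,
#    degrades, or fails outright.
```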

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 8, 2024):

Ah, OK, that's a valid point. So the RAM requirements for loading the model, the KV cache, and the compute graph are teetering on the edge of available memory. Could it be operations like `repeat_kv`? Reshaping and expanding tensors in the attention mechanism is probably quite memory-intensive.
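
For readers unfamiliar with `repeat_kv`: in grouped-query attention each KV head serves several query heads, and some implementations (e.g. the Hugging Face transformers Llama code) materialize the expansion. Here is a minimal NumPy sketch using this model's head counts; note that llama.cpp, which Ollama builds on, broadcasts across the group instead of copying, so the expanded tensor below is illustrative rather than what the Metal backend actually allocates:

```python
import numpy as np

# Head geometry from the logs: n_head=32, n_head_kv=8, n_embd_head=128
n_head, n_head_kv, head_dim, n_ctx = 32, 8, 128, 8192
n_rep = n_head // n_head_kv   # n_gqa = 4

# One layer's K cache as stored for grouped-query attention (f16)
k = np.zeros((n_head_kv, n_ctx, head_dim), dtype=np.float16)
print(k.nbytes / 2**20, "MiB")   # 16 MiB; x32 layers = the 512 MiB K cache in the log

# Naive repeat_kv: copy each KV head n_rep times so shapes match the query heads
k_expanded = np.repeat(k, n_rep, axis=0)   # shape (32, 8192, 128)
print(k_expanded.nbytes / 2**20, "MiB")    # 64 MiB per layer if materialized
```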

Author
Owner

@jmorganca commented on GitHub (Aug 8, 2024):

Hi folks, this should be fixed now. You'll need to re-pull `llama3.1` with `ollama pull llama3.1` (sorry!). Let me know if you're still seeing it afterwards. Basically, the way the tensors were encoded was causing memory to be over-allocated.
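
In terminal terms the recovery is just the following (the `ollama rm` is optional; `ollama pull` alone should fetch the re-encoded layers):

```
ollama rm llama3.1       # optional: drop the old blobs first
ollama pull llama3.1     # re-pull the fixed model
ollama run llama3.1
```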

Author
Owner

@Goekdeniz-Guelmez commented on GitHub (Aug 9, 2024):

It worked! Thanks! So, as I understand it, the problem was how the tensors were encoded in the `llama3.1` file itself, which caused the memory over-allocation, not the KV cache behavior discussed earlier. I appreciate the quick fix; it makes a lot of sense given how close the model was to hitting memory limits.
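
As a closing sanity check on the sizes involved (a back-of-the-envelope estimate, not the analysis behind the actual fix): ggml's Q4_0 format packs 32 weights into an 18-byte block (one f16 scale plus 32 four-bit quants), so the expected weight footprint of an 8B model is easy to approximate:

```python
# Sanity check on the model size reported in the logs (4.33 GiB, 4.64 BPW).
params = 8.03e9
q4_0_bytes_per_weight = 18 / 32               # 0.5625 B, i.e. 4.5 bits/weight

print(f"{params * q4_0_bytes_per_weight / 2**30:.2f} GiB")   # ~4.21 GiB
print(f"{4.64 * params / 8 / 2**30:.2f} GiB")                # ~4.34 GiB, matches

# The gap comes from tensors kept at higher precision (f32 norms, a Q6_K
# output tensor). If a file's tensor encoding declares sizes larger than
# needed, allocations grow with it -- the kind of over-allocation the fix removed.
```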

Reference: github-starred/ollama#50428