[GH-ISSUE #4151] High RAM usage causes yo-yoing memory pressure on Mac, slow inference #28338

Closed
opened 2026-04-22 06:26:42 -05:00 by GiteaMirror · 5 comments

Originally created by @joliss on GitHub (May 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4151

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

This is possibly related to the fix for #4028. I updated to the 0.1.33 release and pulled the latest `mixtral:8x22b-instruct-v0.1-q4_0` (`6a0910fa6dc1`), so I'm running an 80 GB model on a 96 GB RAM machine. When I run

`ollama run mixtral:8x22b-instruct-v0.1-q4_0 'Hi!'`

it's now causing intermittent system freezes, where the entire screen just doesn't update for a second. (Update: The freezes are less frequent on 0.1.37, but still happen.) The inference is working fine now quality-wise, just running extremely slowly (0.27 tokens/s). This is despite it being an 80 GB model that fits comfortably into the 96 GB RAM on my machine.

Update: This also reproduces with other models that use most of my RAM, such as `llama3:70b-instruct-q8_0` (74 GB), so it seems to be related to the high memory usage, rather than the specific model.

Curiously, I'm not sure what it's blocking/bottlenecking on during inference:

  • Ollama CPU usage is around 15% (of a single core).

  • Total GPU processor utilization is yo-yoing around 50% (like 30%-80%), unlike any other model that fits into memory where it will sit at 99%. "GPU memory" according to iStat Menus is similarly yo-yoing up and down. (I'm not sure what metric they're showing under "GPU memory" on a unified memory architecture.)

    <img width="277" alt="image" src="https://github.com/ollama/ollama/assets/524783/5310327a-5147-4035-9cc7-9212b4d08e60">

    Note that this section shows the GPU specifically, despite the title.

  • Disk access only happens before inference starts, while the model loads. So it's not like the models that don't fit into memory, which just stream ~1 GB/s from disk the whole time during inference.

  • System memory pressure is yo-yoing between 28% and 83% during inference.

    <img width="597" alt="image" src="https://github.com/ollama/ollama/assets/524783/bd3e0280-14c4-4a1a-89e6-61ed045fd7d9">

Let me know if there's anything I can run to help debug this!
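
For anyone trying to reproduce this, a few stock macOS tools can capture the yo-yoing from the terminal. A sketch (nothing Ollama-specific; `powermetrics` needs sudo):

```
# GPU utilization, sampled once per second while inference runs.
sudo powermetrics --samplers gpu_power -i 1000

# Page-ins/page-outs and compressor activity, printed every second.
vm_stat 1

# One-shot dump of the kernel's current memory-pressure statistics.
memory_pressure
```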

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.1.33

GiteaMirror added the memory, bug, macos labels 2026-04-22 06:26:43 -05:00

@igorschlum commented on GitHub (May 12, 2024):

Hi @joliss, I have a Mac Studio M1 with 192 GB of RAM and can launch the `ollama run mixtral:8x22b-instruct-v0.1-q4_0 'Hi!'` command without any trouble.
I'm using version 0.1.36 of Ollama. Can you test with this latest version?


@joliss commented on GitHub (May 15, 2024):

On 0.1.37, the intermittent system freezes seem to be much less frequent (though I can only test without an external display right now). However, the yo-yoing memory pressure is still happening:

<img width="587" alt="image" src="https://github.com/ollama/ollama/assets/524783/6fcc1c81-b65d-40e3-bdf7-e05c182e7ae0">

The above is for a single run of `ollama run mixtral:8x22b-instruct-v0.1-q4_0 'Hi'` -- it goes up and down during inference.

It's notable that this stops happening if I close a few Chrome tabs. It's very strange though -- I would expect the 80 GB weights + the K/V cache (I assume) for a small message to leave plenty of left-over RAM on a 96 GB machine. Here is my machine when it's idle:

<img width="589" alt="image" src="https://github.com/ollama/ollama/assets/524783/bf22bda0-7b7b-4436-8274-e8e5c51b6940">

I'd also expect background apps to get forced out to swap as the memory pressure rises, but what seems to be happening instead is that Ollama (or Metal) temporarily "gives up" on keeping the model in memory, causing the drops in memory pressure.

I know it seems like this might just be an issue of "well it's not enough RAM for this model", but the memory pressure behavior is strange enough that I think it warrants investigating.
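
One quick way to test that hypothesis would be to watch swap and the runner's resident set during a run. A sketch with stock tools (the process name `ollama_llama_server` is taken from the log below):

```
# Swap usage before, during, and after a run; if background apps were being
# forced out, this number should grow while memory pressure climbs.
sysctl vm.swapusage

# Resident set size of the runner process, to see whether it shrinks
# during the memory-pressure dips.
ps -o rss,command -p "$(pgrep -f ollama_llama_server)"
```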

Here is the debug log from running `ollama run mixtral:8x22b-instruct-v0.1-q4_0 'Hi'`. In particular, the following message might be relevant:

time=2024-05-15T23:43:17.444+01:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=52 memory.available="72.0 GiB" memory.required.full="76.4 GiB" memory.required.partial="71.1 GiB" memory.required.kv="448.0 MiB" memory.weights.total="75.3 GiB" memory.weights.repeating="75.1 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="244.0 MiB" memory.graph.partial="244.0 MiB"
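
For context, that estimate roughly checks out under the assumption of uniform layer sizes: 75.1 GiB of repeating weights across 56 layers is about 1.34 GiB per layer, so 52 offloaded layers come to roughly 69.7 GiB; adding the 448 MiB KV cache, 244 MiB graph buffer, and 157.5 MiB of non-repeating weights lands within about half a GiB of the reported memory.required.partial of 71.1 GiB, just under the 72.0 GiB available to Metal. The remaining repeating layers stay on the CPU, so every token crosses the GPU/CPU split.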

Full log:

~ $ OLLAMA_DEBUG="1" ollama serve
2024/05/15 23:43:09 routes.go:1006: INFO server config env="map[OLLAMA_DEBUG:true OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-15T23:43:09.032+01:00 level=INFO source=images.go:704 msg="total blobs: 138"
time=2024-05-15T23:43:09.040+01:00 level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-15T23:43:09.041+01:00 level=INFO source=routes.go:1052 msg="Listening on 127.0.0.1:11434 (version 0.1.37)"
time=2024-05-15T23:43:09.041+01:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/ps/gy31ccy900s7btcl82hfg_gc0000gn/T/ollama4019954029/runners
time=2024-05-15T23:43:09.042+01:00 level=DEBUG source=payload.go:180 msg=extracting variant=metal file=build/darwin/arm64/metal/bin/ggml-common.h.gz
time=2024-05-15T23:43:09.042+01:00 level=DEBUG source=payload.go:180 msg=extracting variant=metal file=build/darwin/arm64/metal/bin/ggml-metal.metal.gz
time=2024-05-15T23:43:09.042+01:00 level=DEBUG source=payload.go:180 msg=extracting variant=metal file=build/darwin/arm64/metal/bin/ollama_llama_server.gz
time=2024-05-15T23:43:09.063+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/var/folders/ps/gy31ccy900s7btcl82hfg_gc0000gn/T/ollama4019954029/runners/metal
time=2024-05-15T23:43:09.063+01:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-05-15T23:43:09.063+01:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-05-15T23:43:09.063+01:00 level=DEBUG source=sched.go:89 msg="starting llm scheduler"
time=2024-05-15T23:43:09.101+01:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="72.0 GiB" available="72.0 GiB"
[GIN] 2024/05/15 - 23:43:17 | 200 |      53.125µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/05/15 - 23:43:17 | 200 |    3.003916ms |       127.0.0.1 | POST     "/api/show"
time=2024-05-15T23:43:17.346+01:00 level=DEBUG source=gguf.go:57 msg="model = &llm.gguf{containerGGUF:(*llm.containerGGUF)(0x140005a9040), kv:llm.KV{}, tensors:[]*llm.Tensor(nil), parameters:0x0}"
time=2024-05-15T23:43:17.442+01:00 level=DEBUG source=sched.go:152 msg="loading first model" model=/Users/primary/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9
time=2024-05-15T23:43:17.442+01:00 level=DEBUG source=memory.go:44 msg=evaluating library=metal gpu_count=1 available="72.0 GiB"
time=2024-05-15T23:43:17.442+01:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=52 memory.available="72.0 GiB" memory.required.full="76.4 GiB" memory.required.partial="71.1 GiB" memory.required.kv="448.0 MiB" memory.weights.total="75.3 GiB" memory.weights.repeating="75.1 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="244.0 MiB" memory.graph.partial="244.0 MiB"
time=2024-05-15T23:43:17.443+01:00 level=DEBUG source=memory.go:44 msg=evaluating library=metal gpu_count=1 available="72.0 GiB"
time=2024-05-15T23:43:17.443+01:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=52 memory.available="72.0 GiB" memory.required.full="76.4 GiB" memory.required.partial="71.1 GiB" memory.required.kv="448.0 MiB" memory.weights.total="75.3 GiB" memory.weights.repeating="75.1 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="244.0 MiB" memory.graph.partial="244.0 MiB"
time=2024-05-15T23:43:17.443+01:00 level=DEBUG source=server.go:98 msg="system memory" total="96.0 GiB"
time=2024-05-15T23:43:17.443+01:00 level=DEBUG source=memory.go:44 msg=evaluating library=metal gpu_count=1 available="72.0 GiB"
time=2024-05-15T23:43:17.444+01:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=52 memory.available="72.0 GiB" memory.required.full="76.4 GiB" memory.required.partial="71.1 GiB" memory.required.kv="448.0 MiB" memory.weights.total="75.3 GiB" memory.weights.repeating="75.1 GiB" memory.weights.nonrepeating="157.5 MiB" memory.graph.full="244.0 MiB" memory.graph.partial="244.0 MiB"
time=2024-05-15T23:43:17.444+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/var/folders/ps/gy31ccy900s7btcl82hfg_gc0000gn/T/ollama4019954029/runners/metal
time=2024-05-15T23:43:17.444+01:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/var/folders/ps/gy31ccy900s7btcl82hfg_gc0000gn/T/ollama4019954029/runners/metal
time=2024-05-15T23:43:17.445+01:00 level=INFO source=server.go:318 msg="starting llama server" cmd="/var/folders/ps/gy31ccy900s7btcl82hfg_gc0000gn/T/ollama4019954029/runners/metal/ollama_llama_server --model /Users/primary/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 52 --verbose --parallel 1 --port 61226"
time=2024-05-15T23:43:17.445+01:00 level=DEBUG source=server.go:320 msg=subprocess environment="[OLLAMA_DEBUG=1 MANPATH=/opt/homebrew/share/man: TERM_PROGRAM=iTerm.app ... _=/usr/local/bin/ollama LD_LIBRARY_PATH=/var/folders/ps/gy31ccy900s7btcl82hfg_gc0000gn/T/ollama4019954029/runners/metal]"
time=2024-05-15T23:43:17.450+01:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-15T23:43:17.450+01:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-15T23:43:17.450+01:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=2770 commit="952d03d" tid="0x202e37ac0" timestamp=1715812997
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x202e37ac0" timestamp=1715812997 total_threads=12
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="61226" tid="0x202e37ac0" timestamp=1715812997
llama_model_loader: loaded meta data with 28 key-value pairs and 563 tensors from /Users/primary/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Mixtral-8x22B-Instruct-v0.1
llama_model_loader: - kv   2:                          llama.block_count u32              = 56
llama_model_loader: - kv   3:                       llama.context_length u32              = 65536
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 6144
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 16384
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 48
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                         llama.expert_count u32              = 8
llama_model_loader: - kv  11:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  12:                          general.file_type u32              = 2
llama_model_loader: - kv  13:                           llama.vocab_size u32              = 32768
llama_model_loader: - kv  14:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,32768]   = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,32768]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,32768]   = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:           tokenizer.chat_template.tool_use str              = {{bos_token}}{% set user_messages = m...
llama_model_loader: - kv  25:                   tokenizer.chat_templates arr[str,1]       = ["tool_use"]
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{bos_token}}{% for message in messag...
llama_model_loader: - kv  27:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   56 tensors
llama_model_loader: - type q4_0:  281 tensors
llama_model_loader: - type q8_0:  112 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: mismatch in special tokens definition ( 1027/32768 vs 259/32768 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32768
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 65536
llm_load_print_meta: n_embd           = 6144
llm_load_print_meta: n_head           = 48
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 56
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 6
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 16384
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 65536
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8x22B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 140.63 B
llm_load_print_meta: model size       = 74.05 GiB (4.52 BPW)
llm_load_print_meta: general.name     = Mixtral-8x22B-Instruct-v0.1
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 781 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.56 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 55296.00 MiB, offs =            0
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 15461.84 MiB, offs =  57529057280, (70757.91 / 73728.00)
llm_load_tensors: offloading 52 repeating layers to GPU
llm_load_tensors: offloaded 52/57 layers to GPU
llm_load_tensors:        CPU buffer size = 75831.40 MiB
llm_load_tensors:      Metal buffer size = 70325.82 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 77309.41 MB
llama_kv_cache_init:        CPU KV buffer size =    32.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   416.00 MiB, (71175.72 / 73728.00)
llama_kv_cache_init:      Metal KV buffer size =   416.00 MiB
llama_new_context_with_model: KV self size  =  448.00 MiB, K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.15 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   244.05 MiB, (71419.77 / 73728.00)
llama_new_context_with_model:      Metal compute buffer size =   244.04 MiB
llama_new_context_with_model:        CPU compute buffer size =   244.01 MiB
llama_new_context_with_model: graph nodes  = 2638
llama_new_context_with_model: graph splits = 3
time=2024-05-15T23:43:17.702+01:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
DEBUG [initialize] initializing slots | n_slots=1 tid="0x202e37ac0" timestamp=1715813018
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="0x202e37ac0" timestamp=1715813018
INFO [main] model loaded | tid="0x202e37ac0" timestamp=1715813018
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="0x202e37ac0" timestamp=1715813018
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=0 tid="0x202e37ac0" timestamp=1715813018
time=2024-05-15T23:43:38.574+01:00 level=INFO source=server.go:529 msg="llama runner started in 21.12 seconds"
time=2024-05-15T23:43:38.574+01:00 level=DEBUG source=sched.go:346 msg="finished setting up runner" model=/Users/primary/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9
time=2024-05-15T23:43:38.574+01:00 level=DEBUG source=routes.go:178 msg="generate handler" prompt=Hi
time=2024-05-15T23:43:38.574+01:00 level=DEBUG source=routes.go:179 msg="generate handler" template="[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"
time=2024-05-15T23:43:38.574+01:00 level=DEBUG source=routes.go:180 msg="generate handler" system=""
time=2024-05-15T23:43:38.574+01:00 level=DEBUG source=routes.go:211 msg="generate handler" prompt="[INST] Hi [/INST]"
time=2024-05-15T23:43:38.574+01:00 level=DEBUG source=server.go:616 msg="setting token limit to 10x num_ctx" num_ctx=2048 num_predict=20480
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=1 tid="0x202e37ac0" timestamp=1715813018
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=2 tid="0x202e37ac0" timestamp=1715813018
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=5 slot_id=0 task_id=2 tid="0x202e37ac0" timestamp=1715813018
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=2 tid="0x202e37ac0" timestamp=1715813018
DEBUG [print_timings] prompt eval time     =    5884.05 ms /     5 tokens ( 1176.81 ms per token,     0.85 tokens per second) | n_prompt_tokens_processed=5 n_tokens_second=0.8497547013103726 slot_id=0 t_prompt_processing=5884.051 t_token=1176.8102000000001 task_id=2 tid="0x202e37ac0" timestamp=1715813077
DEBUG [print_timings] generation eval time =   53382.95 ms /    43 runs   ( 1241.46 ms per token,     0.81 tokens per second) | n_decoded=43 n_tokens_second=0.805500662870848 slot_id=0 t_token=1241.463906976744 t_token_generation=53382.948 task_id=2 tid="0x202e37ac0" timestamp=1715813077
DEBUG [print_timings]           total time =   59267.00 ms | slot_id=0 t_prompt_processing=5884.051 t_token_generation=53382.948 t_total=59266.998999999996 task_id=2 tid="0x202e37ac0" timestamp=1715813077
DEBUG [update_slots] slot released | n_cache_tokens=48 n_ctx=2048 n_past=47 n_system_tokens=0 slot_id=0 task_id=2 tid="0x202e37ac0" timestamp=1715813077 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=61287 status=200 tid="0x16fbef000" timestamp=1715813077
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=48 tid="0x202e37ac0" timestamp=1715813077
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=61408 status=200 tid="0x16fc7b000" timestamp=1715813077
[GIN] 2024/05/15 - 23:44:37 | 200 |         1m20s |       127.0.0.1 | POST     "/api/generate"
time=2024-05-15T23:44:37.855+01:00 level=DEBUG source=sched.go:350 msg="context for request finished"
time=2024-05-15T23:44:37.855+01:00 level=DEBUG source=sched.go:236 msg="runner with non-zero duration has gone idle, adding timer" model=/Users/primary/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9 duration=5m0s
time=2024-05-15T23:44:37.855+01:00 level=DEBUG source=sched.go:252 msg="after processing request finished event" model=/Users/primary/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9 refCount=0
^Ctime=2024-05-15T23:44:57.962+01:00 level=DEBUG source=sched.go:215 msg="shutting down scheduler completed loop"
time=2024-05-15T23:44:57.963+01:00 level=DEBUG source=sched.go:103 msg="shutting down scheduler pending loop"
time=2024-05-15T23:44:57.964+01:00 level=DEBUG source=sched.go:611 msg="shutting down runner" model=/Users/primary/.ollama/models/blobs/sha256-d0eeef8264ce10a7e578789ee69986c66425639e72c9855e36a0345c230918c9
time=2024-05-15T23:44:57.965+01:00 level=DEBUG source=server.go:938 msg="stopping llama server"
time=2024-05-15T23:44:57.965+01:00 level=DEBUG source=server.go:944 msg="waiting for llama server to exit"
time=2024-05-15T23:44:57.986+01:00 level=DEBUG source=server.go:948 msg="llama server stopped"
time=2024-05-15T23:44:57.986+01:00 level=DEBUG source=assets.go:105 msg="cleaning up" dir=/var/folders/ps/gy31ccy900s7btcl82hfg_gc0000gn/T/ollama4019954029

@dhiltgen commented on GitHub (Jul 25, 2024):

Are you still seeing this on the latest releases? I'm curious whether forcing a smaller number of layers with `num_gpu` results in better behavior when splitting between GPU and CPU near the limit of system memory on macOS.
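
For reference, one way to try this is via the generate API's options field; a sketch (the value 48 is an arbitrary starting point, not a recommendation):

```
curl http://localhost:11434/api/generate -d '{
  "model": "mixtral:8x22b-instruct-v0.1-q4_0",
  "prompt": "Hi!",
  "options": { "num_gpu": 48 }
}'
```

The same option can be set interactively with `/set parameter num_gpu 48` inside `ollama run`.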


@igorschlum commented on GitHub (Jul 28, 2024):

macOS allows the GPU to use at most about 2/3 of system memory, so an 80 GB LLM cannot be fully loaded into GPU memory on a 96 GB Mac unless you change the setting, as explained here:

https://techobsessed.net/2023/12/increasing-ram-available-to-gpu-on-apple-silicon-macs-for-running-large-language-models/
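
The article's approach boils down to raising the GPU wired-memory cap via sysctl. A sketch, assuming macOS 14 or later (earlier releases used the `debug.iogpu.wired_limit` key instead); the value resets to the default on reboot:

```
# Allow the GPU to wire up to 88 GiB (90112 MB) of the 96 GB of unified memory.
sudo sysctl iogpu.wired_limit_mb=90112
```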


@joliss commented on GitHub (Aug 20, 2024):

> Are you still seeing this on the latest releases?

It does seem to be mostly resolved in Ollama 0.3.6. I'll go ahead and close this.

@igorschlum Thanks for the tip, increasing the wired limit makes it run much smoother!

Reference: github-starred/ollama#28338