[GH-ISSUE #1534] macOS M2 32 GB -- processing failed #836

Closed
opened 2026-04-12 10:30:08 -05:00 by GiteaMirror · 5 comments

Originally created by @enzyme69 on GitHub (Dec 15, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1534

I get error message:
"Error: llama runner process has terminated"

Does that mean it ran out of memory?

Is it possible to make it smaller?


@jmorganca commented on GitHub (Dec 15, 2023):

@enzyme69 which model are you running? Sorry to hear you encountered an error.


@jeffssss commented on GitHub (Dec 16, 2023):

Same issue with an M1 Pro 32 GB.
In my case I'm using the model dolphin-2.5-mixtral-8x7b.
Related logs:

```
[GIN] 2023/12/16 - 22:24:41 | 200 |      63.875µs |       127.0.0.1 | HEAD     "/"
[GIN] 2023/12/16 - 22:24:41 | 200 |    4.554292ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2023/12/16 - 22:24:42 | 200 |     862.208µs |       127.0.0.1 | POST     "/api/show"
2023/12/16 22:24:42 llama.go:436: starting llama runner
2023/12/16 22:24:42 llama.go:494: waiting for llama runner to start responding
{"timestamp":1702736682,"level":"INFO","function":"main","line":2653,"message":"build info","build":441,"commit":"948ff13"}
{"timestamp":1702736682,"level":"INFO","function":"main","line":2660,"message":"system info","n_threads":8,"n_threads_batch":-1,"total_threads":10,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | "}
llama_model_loader: loaded meta data with 23 key-value pairs and 995 tensors from /Users/fengji/.ollama/models/blobs/sha256:bdb11b0699e03d791f0accd97279989d810d79615c6cf5ac21fb68e8f33e8ca3 (version GGUF V3 (latest))
llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  4096, 32002,     1,     1 ]
llama_model_loader: - tensor    1:              blk.0.attn_q.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    2:              blk.0.attn_k.weight q8_0     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor    3:              blk.0.attn_v.weight q8_0     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor    4:         blk.0.attn_output.weight q4_0     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    5:        blk.0.ffn_gate_inp.weight f16      [  4096,     8,     1,     1 ]
llama_model_loader: - tensor    6:          blk.0.ffn_gate.0.weight q4_0     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor    7:          blk.0.ffn_down.0.weight q4_0     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor    8:            blk.0.ffn_up.0.weight q4_0     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor    9:          blk.0.ffn_gate.1.weight q4_0     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   10:          blk.0.ffn_down.1.weight q4_0     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   11:            blk.0.ffn_up.1.weight q4_0     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   12:          blk.0.ffn_gate.2.weight q4_0     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   13:          blk.0.ffn_down.2.weight q4_0     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   14:            blk.0.ffn_up.2.weight q4_0     [  4096, 14336,     1,     1 ]
...
...
llama_model_loader: - tensor  993:               output_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  994:                    output.weight q6_K     [  4096, 32002,     1,     1 ]
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = ehartford
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name     = ehartford
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.39 MiB
llm_load_tensors: mem required  = 25216.27 MiB
...
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size  =  512.00 MiB, K (f16):  256.00 MiB, V (f16):  256.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/var/folders/jd/_9xbz5493rn_m477kjkvp5_m0000gn/T/ollama2957943658/llama.cpp/gguf/build/metal/bin/ggml-metal.metal'
ggml_metal_init: GPU name:   Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 22906.50 MB
ggml_metal_init: maxTransferRate               = built-in GPU
llama_new_context_with_model: compute buffer total size = 319.35 MiB
llama_new_context_with_model: max tensor size =   102.55 MiB
ggml_metal_add_buffer: allocated 'data            ' buffer, size = 16384.00 MiB, offs =            0
ggml_metal_add_buffer: allocated 'data            ' buffer, size =  8935.19 MiB, offs =  17072324608, (25319.88 / 21845.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_add_buffer: allocated 'kv              ' buffer, size =   512.03 MiB, (25831.91 / 21845.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_add_buffer: allocated 'alloc           ' buffer, size =   316.05 MiB, (26147.95 / 21845.34)ggml_metal_add_buffer: warning: current allocated size is greater than the recommended max working set size
ggml_metal_graph_compute: command buffer 4 failed with status 5
GGML_ASSERT: /Users/jmorgan/workspace/ollama/llm/llama.cpp/gguf/ggml-metal.m:2353: false
2023/12/16 22:24:54 llama.go:451: signal: abort trap
2023/12/16 22:24:54 llama.go:459: error starting llama runner: llama runner process has terminated
2023/12/16 22:24:54 llama.go:525: llama runner stopped successfully
[GIN] 2023/12/16 - 22:24:54 | 500 | 12.347520041s |       127.0.0.1 | POST     "/api/generate"
```
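
Summing the Metal buffers in this log shows why the assert fires. A back-of-the-envelope tally (a sketch using only the numbers printed above, rounded to whole MiB):

```
# Back-of-the-envelope, using the ggml_metal_add_buffer sizes printed above:
#   'data'  buffers: 16384.00 + 8935.19 MiB  (model weights, ~24.6 GiB)
#   'kv'    buffer :   512.03 MiB            (KV cache at n_ctx = 4096)
#   'alloc' buffer :   316.05 MiB            (compute buffer)
echo $((16384 + 8935 + 512 + 316))  # => 26147 MiB requested on the GPU
# recommendedMaxWorkingSetSize is 21845.34 MiB (about 2/3 of 32 GB unified
# memory), so the allocation overshoots by ~4.3 GiB and the Metal command
# buffer fails (status 5), which aborts the runner.
```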

@easp commented on GitHub (Dec 17, 2023):

@jeffssss You need to either choose a tag for a smaller quantization or [tell macOS to give the GPU access to more of your RAM](https://techobsessed.net/2023/12/increasing-ram-available-to-gpu-on-apple-silicon-macs-for-running-large-language-models/). Try giving it 26,624 MB.
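
For reference, the linked article does this with a `sysctl`. A minimal sketch, assuming the key names it describes (they changed between macOS versions, so verify which one your system exposes; the setting also does not survive a reboot):

```
# Raise the GPU wired-memory limit to 26,624 MB (assumption: key name per
# the linked article; macOS 14 "Sonoma" and later)
sudo sysctl iogpu.wired_limit_mb=26624
# Older macOS releases reportedly used a debug-prefixed key instead:
# sudo sysctl debug.iogpu.wired_limit=26624
```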


@Alino commented on GitHub (Dec 19, 2023):

I have a 32 GB M1 Pro, and when I tried to give 26 GB to the GPU the machine died :D, and with 20 GB it won't start the model.
UPDATE: after a reboot it kind of works, but it slows the system down a lot.


@technovangelist commented on GitHub (Dec 19, 2023):

Yes. These models require a lot of memory. If it's working after a reboot, it sounds like you had some other apps running that were taking up memory and now they are gone. But you are still using up nearly everything. If you want to use this one, your best bet is to go down to a 2-bit quantization, which is still amazing and might work, or consider an online cloud provider.
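
Dropping to a smaller quantization is just a matter of pulling a different tag. A sketch, assuming a q2_K tag exists for this model (the exact tag name below is hypothetical; check the tags listed on the model's registry page):

```
# Hypothetical tag -- verify the exact name on the model's tags page
ollama pull dolphin-mixtral:8x7b-v2.5-q2_K
ollama run dolphin-mixtral:8x7b-v2.5-q2_K
```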

Thanks so much for being part of this community.
