[GH-ISSUE #4985] CUDA error: out of memory - Phi-3 Mini 128k prompted with 20k+ tokens on 4GB GPU #3152

Open
opened 2026-04-12 13:38:11 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @kozuch on GitHub (Jun 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4985

What is the issue?

I get a CUDA out of memory error when sending a large prompt (about 20k+ tokens) to the Phi-3 Mini 128k model on a laptop with an Nvidia RTX A2000 4GB GPU. At first Ollama uses about 3.3 GB of GPU RAM and 8 GB of system RAM, then GPU RAM usage slowly rises (3.4 GB, 3.5 GB, etc.) and after about a minute the error is thrown, presumably when GPU RAM is exhausted (3.9 GB is the last value shown in Task Manager). Inference does not return any tokens before crashing. Server log attached. Environment: Win11 + Ollama 0.1.42 + VS Code 1.90.0 + Continue plugin v0.8.40.
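
To capture the gradual climb more precisely than Task Manager, GPU memory can be polled while the prompt is processed; a minimal sketch using `nvidia-smi` (assuming it is on PATH):

```python
# Minimal sketch: poll nvidia-smi once per second and log used GPU memory,
# to record the gradual climb before the out-of-memory crash.
# Assumes nvidia-smi (shipped with the NVIDIA driver) is on PATH.
import subprocess
import time

while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    ).strip()
    used, total = out.split(", ")
    print(f"{time.strftime('%H:%M:%S')}  {used} MiB / {total} MiB used")
    time.sleep(1)
```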

The expected behavior would be not to crash, perhaps by reallocating memory so that GPU memory does not get exhausted. I would also like to disable GPU usage in Ollama to test CPU-only inference (I have 64 GB of system RAM), but I cannot find how to turn the GPU off (I saw a command for it recently but cannot find it again).
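
For reference, one way to force CPU-only inference, assuming the `num_gpu` option (the number of layers to offload) is honored per-request as in the Modelfile docs, is to set it to 0 in the API options. A minimal sketch:

```python
# Minimal sketch: ask Ollama for zero offloaded layers so inference runs on the CPU.
# Assumes a local Ollama server on the default port; model name taken from the
# Continue settings below.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "phi3:3.8-mini-128k-instruct-q4_0",
        "messages": [{"role": "user", "content": "Hello"}],
        "options": {"num_gpu": 0, "num_ctx": 24000},  # 0 GPU layers -> CPU-only
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```

The same parameter should also work as `PARAMETER num_gpu 0` in a Modelfile, and hiding the GPU from Ollama entirely (e.g. setting `CUDA_VISIBLE_DEVICES` to an invalid value before starting the server) is another option described in the GPU troubleshooting docs, if I recall correctly.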

Continue settings log:

Settings:
contextLength: 24000
maxTokens: 4000
model: phi3:3.8-mini-128k-instruct-q4_0
stop: <|end|>,<|user|>,<|assistant|>
log: undefined

The memory error:

CUDA error: out of memory
  current device: 0, in function alloc at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:375
  cuMemSetAccess(pool_addr + pool_size, reserve_size, &access, 1)
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:100: !"CUDA error"

Full Ollama server log:

time=2024-06-11T20:39:29.457+02:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=3 memory.available="3.2 GiB" memory.required.full="12.7 GiB" memory.required.partial="3.0 GiB" memory.required.kv="8.8 GiB" memory.weights.total="2.0 GiB" memory.weights.repeating="1.9 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="1.5 GiB" memory.graph.partial="1.5 GiB"
time=2024-06-11T20:39:29.470+02:00 level=INFO source=server.go:341 msg="starting llama server" cmd="C:\\Users\\username\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\username\\.ollama\\models\\blobs\\sha256-90184928e9771e8b73392b3f18e605ad19be5a115a9b5763decd491e2058b889 --ctx-size 24000 --batch-size 512 --embedding --log-disable --n-gpu-layers 3 --parallel 1 --port 58154"
time=2024-06-11T20:39:29.683+02:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
time=2024-06-11T20:39:29.683+02:00 level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
time=2024-06-11T20:39:29.683+02:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3051 commit="5921b8f0" tid="18220" timestamp=1718131169
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="18220" timestamp=1718131169 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="58154" tid="18220" timestamp=1718131169
llama_model_loader: loaded meta data with 27 key-value pairs and 197 tensors from C:\Users\username\.ollama\models\blobs\sha256-90184928e9771e8b73392b3f18e605ad19be5a115a9b5763decd491e2058b889 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = phi3
llama_model_loader: - kv   1:                               general.name str              = Phi3
llama_model_loader: - kv   2:                        phi3.context_length u32              = 131072
llama_model_loader: - kv   3:  phi3.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv   4:                      phi3.embedding_length u32              = 3072
llama_model_loader: - kv   5:                   phi3.feed_forward_length u32              = 8192
llama_model_loader: - kv   6:                           phi3.block_count u32              = 32
llama_model_loader: - kv   7:                  phi3.attention.head_count u32              = 32
llama_model_loader: - kv   8:               phi3.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:      phi3.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                  phi3.rope.dimension_count u32              = 96
llama_model_loader: - kv  11:                        phi3.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 2
llama_model_loader: - kv  13:              phi3.rope.scaling.attn_factor f32              = 1.190238
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,32064]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,32064]   = [3, 3, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 32000
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  24:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   67 tensors
llama_model_loader: - type q4_0:  129 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 323
llm_load_vocab: token to piece cache size = 0.3372 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = phi3
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32064
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 96
llm_load_print_meta: n_embd_head_k    = 96
llm_load_print_meta: n_embd_head_v    = 96
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 3072
llm_load_print_meta: n_embd_v_gqa     = 3072
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 3.82 B
llm_load_print_meta: model size       = 2.03 GiB (4.55 BPW)
llm_load_print_meta: general.name     = Phi3
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 32000 '<|endoftext|>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOT token        = 32007 '<|end|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX A2000 Laptop GPU, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.22 MiB
time=2024-06-11T20:39:29.944+02:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 3 repeating layers to GPU
llm_load_tensors: offloaded 3/33 layers to GPU
llm_load_tensors:        CPU buffer size =  2074.66 MiB
llm_load_tensors:      CUDA0 buffer size =   182.32 MiB
llama_new_context_with_model: n_ctx      = 24000
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =  8156.25 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   843.75 MiB
llama_new_context_with_model: KV self size  = 9000.00 MiB, K (f16): 4500.00 MiB, V (f16): 4500.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.13 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1986.75 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    58.88 MiB
llama_new_context_with_model: graph nodes  = 1286
llama_new_context_with_model: graph splits = 294
INFO [wmain] model loaded | tid="18220" timestamp=1718131173
time=2024-06-11T20:39:33.371+02:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server not responding"
time=2024-06-11T20:39:33.635+02:00 level=INFO source=server.go:572 msg="llama runner started in 3.95 seconds"
[GIN] 2024/06/11 - 20:39:37 | 200 |    8.2721184s |       127.0.0.1 | POST     "/api/chat"
CUDA error: out of memory
  current device: 0, in function alloc at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:375
  cuMemSetAccess(pool_addr + pool_size, reserve_size, &access, 1)
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:100: !"CUDA error"
[GIN] 2024/06/11 - 20:41:34 | 200 |          1m6s |       127.0.0.1 | POST     "/api/chat"
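
For context, the KV-cache numbers in the log follow directly from the context size and model shape printed above (n_ctx = 24000, n_layer = 32, n_embd_k_gqa = n_embd_v_gqa = 3072, f16 cache). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the KV-cache sizes reported in the log above.
# All values are read from llm_load_print_meta / llama_new_context_with_model.
n_ctx     = 24000   # --ctx-size
n_layer   = 32
n_embd_kv = 3072    # n_embd_k_gqa == n_embd_v_gqa (n_gqa = 1)
bytes_f16 = 2

per_layer = n_ctx * n_embd_kv * bytes_f16 * 2        # K + V for one layer
total     = per_layer * n_layer

mib = lambda b: b / 1024**2
print(f"KV per layer : {mib(per_layer):8.2f} MiB")      #  281.25 MiB
print(f"KV total     : {mib(total):8.2f} MiB")          # 9000.00 MiB (KV self size)
print(f"3 GPU layers : {mib(per_layer * 3):8.2f} MiB")  #  843.75 MiB (CUDA0 KV buffer)
```

Even with only 3/33 layers offloaded, the 843.75 MiB CUDA0 KV buffer plus the 1986.75 MiB CUDA0 compute buffer and ~182 MiB of weights already put roughly 3 GiB on the 4 GB card, which seems consistent with the slow climb to ~3.9 GB before `cuMemSetAccess` fails.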

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.1.42

GiteaMirror added the memory and bug labels 2026-04-12 13:38:11 -05:00
Author
Owner

@kozuch commented on GitHub (Jun 11, 2024):

Looks like the problem comes from the [llama.cpp project](https://github.com/ggerganov/llama.cpp). I am not sure which version of llama.cpp my Ollama uses. I don't have the resources right now to look through the llama.cpp project's issues.
