[GH-ISSUE #1674] Out of memory at v0.1.17, revert to v0.1.14 OK. Related to "context" size #26704

Closed
opened 2026-04-22 03:09:05 -05:00 by GiteaMirror · 4 comments

Originally created by @coffeecodechem on GitHub (Dec 22, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1674

![image](https://github.com/jmorganca/ollama/assets/80149823/3c4ebad8-6209-4599-a361-8f7906d2553d)

Context:

- Windows Docker Desktop
- WSL2
- NVIDIA RTX 3060
- Running the ollama Docker image with GPU enabled

I think the key difference in the logs is that ollama v0.1.17 reserves 1116.00 MiB of VRAM for the context:

`2023-12-22 21:45:16 llama_new_context_with_model: total VRAM used: 5207.27 MiB (model: 4091.27 MiB, context: 1116.00 MiB)`

while v0.1.14 reserves only 156.00 MiB:

`2023-12-22 21:37:37 llama_new_context_with_model: total VRAM used: 4247.27 MiB (model: 4091.27 MiB, context: 156.00 MiB)`

Also, v0.1.17 offloads `30/33 layers to GPU` while v0.1.14 offloads `30/35 layers to GPU` for the same model. A rough accounting of where the extra VRAM goes is sketched below.
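For what it's worth, the numbers line up if the KV cache moved from system RAM into VRAM between the two versions. A back-of-the-envelope check in Python, using only the hyperparameters printed by `llm_load_print_meta` in the logs below (this is my reading of the logs, not a confirmed explanation of the change):

```python
# Back-of-the-envelope check of the "context" VRAM figures in the two logs.
# All hyperparameters come from the llm_load_print_meta output below.

n_layer = 32        # n_layer
n_embd = 4096       # n_embd (n_head == n_head_kv, so no GQA shrink)
n_ctx = 2048        # n_ctx at runtime
bytes_f16 = 2       # K and V are stored as f16

# Full KV cache: one K slab and one V slab per layer.
kv_total = 2 * n_layer * n_ctx * n_embd * bytes_f16
print(kv_total / 2**20)        # 1024.0 MiB -- matches "KV self size = 1024.00 MiB"

# v0.1.17 reports "VRAM kv self = 960.00 MB": the KV slabs for the
# 30 offloaded layers live on the GPU.
kv_gpu = kv_total * 30 / n_layer
print(kv_gpu / 2**20)          # 960.0 MiB

# Add the 156 MiB scratch buffer and you get the v0.1.17 "context" figure.
print(kv_gpu / 2**20 + 156)    # 1116.0 MiB -- matches the v0.1.17 log
```

The v0.1.14 log has no `VRAM kv self` line and counts only the 156 MiB scratch buffer as context, which suggests the extra ~960 MiB in v0.1.17 is the KV cache now living in VRAM instead of system RAM.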

Here are the full logs.

### v0.1.17

```
2023-12-22 21:45:07 llama_model_loader: - type  f32:   65 tensors
2023-12-22 21:45:07 llama_model_loader: - type q5_K:  193 tensors
2023-12-22 21:45:07 llama_model_loader: - type q6_K:   33 tensors
2023-12-22 21:45:07 llm_load_vocab: mismatch in special tokens definition ( 243/32256 vs 237/32256 ).
2023-12-22 21:45:07 llm_load_print_meta: format           = GGUF V3 (latest)
2023-12-22 21:45:07 llm_load_print_meta: arch             = llama
2023-12-22 21:45:07 llm_load_print_meta: vocab type       = BPE
2023-12-22 21:45:07 llm_load_print_meta: n_vocab          = 32256
2023-12-22 21:45:07 llm_load_print_meta: n_merges         = 31757
2023-12-22 21:45:07 llm_load_print_meta: n_ctx_train      = 16384
2023-12-22 21:45:07 llm_load_print_meta: n_embd           = 4096
2023-12-22 21:45:07 llm_load_print_meta: n_head           = 32
2023-12-22 21:45:07 llm_load_print_meta: n_head_kv        = 32
2023-12-22 21:45:07 llm_load_print_meta: n_layer          = 32
2023-12-22 21:45:07 llm_load_print_meta: n_rot            = 128
2023-12-22 21:45:07 llm_load_print_meta: n_gqa            = 1
2023-12-22 21:45:07 llm_load_print_meta: f_norm_eps       = 0.0e+00
2023-12-22 21:45:07 llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
2023-12-22 21:45:07 llm_load_print_meta: f_clamp_kqv      = 0.0e+00
2023-12-22 21:45:07 llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2023-12-22 21:45:07 llm_load_print_meta: n_ff             = 11008
2023-12-22 21:45:07 llm_load_print_meta: n_expert         = 0
2023-12-22 21:45:07 llm_load_print_meta: n_expert_used    = 0
2023-12-22 21:45:07 llm_load_print_meta: rope scaling     = linear
2023-12-22 21:45:07 llm_load_print_meta: freq_base_train  = 100000.0
2023-12-22 21:45:07 llm_load_print_meta: freq_scale_train = 0.25
2023-12-22 21:45:07 llm_load_print_meta: n_yarn_orig_ctx  = 16384
2023-12-22 21:45:07 llm_load_print_meta: rope_finetuned   = unknown
2023-12-22 21:45:07 llm_load_print_meta: model type       = 7B
2023-12-22 21:45:07 llm_load_print_meta: model ftype      = Q5_K - Medium
2023-12-22 21:45:07 llm_load_print_meta: model params     = 6.74 B
2023-12-22 21:45:07 llm_load_print_meta: model size       = 4.46 GiB (5.68 BPW) 
2023-12-22 21:45:07 llm_load_print_meta: general.name     = deepseek-ai_deepseek-coder-6.7b-instruct
2023-12-22 21:45:07 llm_load_print_meta: BOS token        = 32013 '<|begin▁of▁sentence|>'
2023-12-22 21:45:07 llm_load_print_meta: EOS token        = 32021 '<|EOT|>'
2023-12-22 21:45:07 llm_load_print_meta: PAD token        = 32014 '<|end▁of▁sentence|>'
2023-12-22 21:45:07 llm_load_print_meta: LF token         = 126 'Ä'
2023-12-22 21:45:07 llm_load_tensors: ggml ctx size =    0.11 MiB
2023-12-22 21:45:07 llm_load_tensors: using CUDA for GPU acceleration
2023-12-22 21:45:07 llm_load_tensors: mem required  =  471.22 MiB
2023-12-22 21:45:07 llm_load_tensors: offloading 30 repeating layers to GPU
2023-12-22 21:45:07 llm_load_tensors: offloaded 30/33 layers to GPU
2023-12-22 21:45:07 llm_load_tensors: VRAM used: 4091.27 MiB
2023-12-22 21:45:15 ..................................................................................................
2023-12-22 21:45:15 llama_new_context_with_model: n_ctx      = 2048
2023-12-22 21:45:15 llama_new_context_with_model: freq_base  = 100000.0
2023-12-22 21:45:15 llama_new_context_with_model: freq_scale = 0.25
2023-12-22 21:45:16 llama_kv_cache_init: VRAM kv self = 960.00 MB
2023-12-22 21:45:16 llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
2023-12-22 21:45:16 llama_build_graph: non-view tensors processed: 676/676
2023-12-22 21:45:16 llama_new_context_with_model: compute buffer total size = 159.19 MiB
2023-12-22 21:45:16 llama_new_context_with_model: VRAM scratch buffer: 156.00 MiB
2023-12-22 21:45:16 llama_new_context_with_model: total VRAM used: 5207.27 MiB (model: 4091.27 MiB, context: 1116.00 MiB)
2023-12-22 21:45:17 {"timestamp":1703256317,"level":"INFO","function":"main","line":3097,"message":"HTTP server listening","port":"55075","hostname":"127.0.0.1"}
2023-12-22 21:45:17 {"timestamp":1703256317,"level":"INFO","function":"log_server_request","line":2608,"message":"request","remote_addr":"127.0.0.1","remote_port":48006,"status":200,"method":"HEAD","path":"/","params":{}}
2023-12-22 21:45:17 2023/12/22 14:45:17 llama.go:512: llama runner started in 11.612463 seconds
2023-12-22 21:45:17 2023/12/22 14:45:17 llama.go:581: loaded 0 images
2023-12-22 21:45:17 map[frequency_penalty:0 image_data:[] main_gpu:0 mirostat:0 mirostat_eta:0.1 mirostat_tau:5 n_keep:0 n_predict:-1 penalize_nl:true presence_penalty:0 prompt:You are a sophisticated, accurate, and modern AI programming assistant Stop your response with ###END###
2023-12-22 21:45:17 ### Request:
2023-12-22 21:45:17 Python code to get current time and put it as a string with format something like "yyyy-mm-dd hh:mm"
2023-12-22 21:45:17 ### Code: **[Stream Issue] ollama: llama runner exited, you may not have enough available memory to run this model** Stop your response with ###END###
2023-12-22 21:45:17 ### Request:
2023-12-22 21:45:17 Python code to get current time and put it as a string with format something like "yyyy-mm-dd hh:mm"
2023-12-22 21:45:17 ### Code: repeat_last_n:64 repeat_penalty:1.1 seed:-1 stop:[###END###] stream:true temperature:0.5 tfs_z:1 top_k:40 top_p:0.9 typical_p:1]
2023-12-22 21:45:17 
2023-12-22 21:45:17 CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:6600: out of memory
2023-12-22 21:45:17 current device: 0
2023-12-22 21:45:17 GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:6600: !"CUDA error"
2023-12-22 21:45:18 2023/12/22 14:45:18 llama.go:455: 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:6600: out of memory
2023-12-22 21:45:18 current device: 0
2023-12-22 21:45:18 GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:6600: !"CUDA error"
2023-12-22 21:45:18 2023/12/22 14:45:18 llama.go:529: llama runner stopped successfully
2023-12-22 21:45:18 [GIN] 2023/12/22 - 14:45:18 | 200 | 14.020072602s |      172.17.0.1 | POST     "/api/chat"
```
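Since the failure is a raw CUDA out-of-memory at request time, it may help to watch free VRAM while the model loads and the first request arrives. A minimal monitoring sketch using the NVML Python bindings (`pip install nvidia-ml-py`; under Docker Desktop/WSL2 it likely needs to run inside the container or a WSL shell where the NVIDIA runtime is visible):

```python
import time
import pynvml  # NVML bindings: pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # "current device: 0" in the log

try:
    # Print one line per second; run it while ollama loads the model and
    # serves the first request to see how much headroom remains at the
    # moment of the failing allocation.
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used {mem.used / 2**20:8.1f} MiB / "
              f"free {mem.free / 2**20:8.1f} MiB / "
              f"total {mem.total / 2**20:8.1f} MiB")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```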

### v0.1.14

```
2023-12-22 21:37:31 llama_model_loader: - type  f32:   65 tensors
2023-12-22 21:37:31 llama_model_loader: - type q5_K:  193 tensors
2023-12-22 21:37:31 llama_model_loader: - type q6_K:   33 tensors
2023-12-22 21:37:31 llm_load_vocab: mismatch in special tokens definition ( 243/32256 vs 237/32256 ).
2023-12-22 21:37:31 llm_load_print_meta: format           = GGUF V3 (latest)
2023-12-22 21:37:31 llm_load_print_meta: arch             = llama
2023-12-22 21:37:31 llm_load_print_meta: vocab type       = BPE
2023-12-22 21:37:31 llm_load_print_meta: n_vocab          = 32256
2023-12-22 21:37:31 llm_load_print_meta: n_merges         = 31757
2023-12-22 21:37:31 llm_load_print_meta: n_ctx_train      = 16384
2023-12-22 21:37:31 llm_load_print_meta: n_embd           = 4096
2023-12-22 21:37:31 llm_load_print_meta: n_head           = 32
2023-12-22 21:37:31 llm_load_print_meta: n_head_kv        = 32
2023-12-22 21:37:31 llm_load_print_meta: n_layer          = 32
2023-12-22 21:37:31 llm_load_print_meta: n_rot            = 128
2023-12-22 21:37:31 llm_load_print_meta: n_gqa            = 1
2023-12-22 21:37:31 llm_load_print_meta: f_norm_eps       = 0.0e+00
2023-12-22 21:37:31 llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
2023-12-22 21:37:31 llm_load_print_meta: f_clamp_kqv      = 0.0e+00
2023-12-22 21:37:31 llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2023-12-22 21:37:31 llm_load_print_meta: n_ff             = 11008
2023-12-22 21:37:31 llm_load_print_meta: rope scaling     = linear
2023-12-22 21:37:31 llm_load_print_meta: freq_base_train  = 100000.0
2023-12-22 21:37:31 llm_load_print_meta: freq_scale_train = 0.25
2023-12-22 21:37:31 llm_load_print_meta: n_yarn_orig_ctx  = 16384
2023-12-22 21:37:31 llm_load_print_meta: rope_finetuned   = unknown
2023-12-22 21:37:31 llm_load_print_meta: model type       = 7B
2023-12-22 21:37:31 llm_load_print_meta: model ftype      = mostly Q5_K - Medium
2023-12-22 21:37:31 llm_load_print_meta: model params     = 6.74 B
2023-12-22 21:37:31 llm_load_print_meta: model size       = 4.46 GiB (5.68 BPW) 
2023-12-22 21:37:31 llm_load_print_meta: general.name     = deepseek-ai_deepseek-coder-6.7b-instruct
2023-12-22 21:37:31 llm_load_print_meta: BOS token        = 32013 '<|begin▁of▁sentence|>'
2023-12-22 21:37:31 llm_load_print_meta: EOS token        = 32021 '<|EOT|>'
2023-12-22 21:37:31 llm_load_print_meta: PAD token        = 32014 '<|end▁of▁sentence|>'
2023-12-22 21:37:31 llm_load_print_meta: LF token         = 126 'Ä'
2023-12-22 21:37:31 llm_load_tensors: ggml ctx size =    0.11 MiB
2023-12-22 21:37:31 llm_load_tensors: using CUDA for GPU acceleration
2023-12-22 21:37:31 llm_load_tensors: mem required  =  471.22 MiB
2023-12-22 21:37:31 llm_load_tensors: offloading 30 repeating layers to GPU
2023-12-22 21:37:31 llm_load_tensors: offloaded 30/35 layers to GPU
2023-12-22 21:37:31 llm_load_tensors: VRAM used: 4091.27 MiB
2023-12-22 21:37:37 ..................................................................................................
2023-12-22 21:37:37 llama_new_context_with_model: n_ctx      = 2048
2023-12-22 21:37:37 llama_new_context_with_model: freq_base  = 100000.0
2023-12-22 21:37:37 llama_new_context_with_model: freq_scale = 0.25
2023-12-22 21:37:37 llama_new_context_with_model: kv self size  = 1024.00 MiB
2023-12-22 21:37:37 llama_build_graph: non-view tensors processed: 676/676
2023-12-22 21:37:37 llama_new_context_with_model: compute buffer total size = 159.07 MiB
2023-12-22 21:37:37 llama_new_context_with_model: VRAM scratch buffer: 156.00 MiB
2023-12-22 21:37:37 llama_new_context_with_model: total VRAM used: 4247.27 MiB (model: 4091.27 MiB, context: 156.00 MiB)
2023-12-22 21:37:37 {"timestamp":1703255857,"level":"INFO","function":"main","line":3038,"message":"HTTP server listening","hostname":"127.0.0.1","port":62142}
2023-12-22 21:37:37 {"timestamp":1703255857,"level":"INFO","function":"log_server_request","line":2599,"message":"request","remote_addr":"127.0.0.1","remote_port":49528,"status":200,"method":"HEAD","path":"/","params":{}}
2023-12-22 21:37:38 2023/12/22 14:37:37 llama.go:506: llama runner started in 8.410558 seconds
2023-12-22 21:37:42 {"timestamp":1703255862,"level":"INFO","function":"log_server_request","line":2599,"message":"request","remote_addr":"127.0.0.1","remote_port":49528,"status":200,"method":"POST","path":"/completion","params":{}}
2023-12-22 21:42:42 2023/12/22 14:42:42 llama.go:449: signal: killed
2023-12-22 21:42:42 2023/12/22 14:42:42 llama.go:523: llama runner stopped successfully
```

@yashchittora commented on GitHub (Dec 22, 2023):

The same is happening for me. The latest versions tend to use more memory than the previous ones.


@trustraptor commented on GitHub (Dec 23, 2023):

Is there a way to release the memory manually when using Docker? It's supposed to be released after 5 minutes, right?


@TheOneValen commented on GitHub (Dec 28, 2023):

> The same is happening for me. The latest versions tend to use more memory than the previous ones.

I run a very old 4 GB card (GTX 1050 Ti) and was very happy that ollama still worked with it. But that is only true up to 0.1.14; I get out-of-memory errors when upgrading.


@songlim327 commented on GitHub (Jan 5, 2024):

I have an **NVIDIA GeForce GTX 1650** GPU; running the ollama Docker container at version 0.1.17 gives the same out-of-memory error. It falls back to the CPU after the failure. Below are the logs captured from the Docker container:

```
CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9132: out of memory
current device: 0
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9132: !"CUDA error"
2024/01/05 10:38:07 llama.go:455: 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9132: out of memory
current device: 0
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:9132: !"CUDA error"
2024/01/05 10:38:07 llama.go:463: error starting llama runner: llama runner process has terminated
2024/01/05 10:38:07 llama.go:529: llama runner stopped successfully
```
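One possible workaround while this is investigated: since the larger context reservation seems to be what pushes these cards over the edge, asking ollama to offload fewer layers via the `num_gpu` option trades speed for VRAM headroom. A minimal sketch against the local ollama HTTP API (the model tag and the layer count of 20 are illustrative placeholders, not recommendations):

```python
import json
import urllib.request

# Ask ollama to offload fewer layers to the GPU so the KV cache and
# scratch buffers still fit in VRAM. Model tag and num_gpu value are
# placeholders; adjust for your setup.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "deepseek-coder:6.7b-instruct-q5_K_M",
        "prompt": "Python code to print the current time as yyyy-mm-dd hh:mm",
        "options": {"num_gpu": 20},  # fewer offloaded layers than the default
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

The same setting can be pinned in a Modelfile with `PARAMETER num_gpu 20`.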
Reference: github-starred/ollama#26704