[GH-ISSUE #790] "out of memory" when using CUDA #46888

Closed
opened 2026-04-28 01:40:29 -05:00 by GiteaMirror · 11 comments

Originally created by @konstantin1722 on GitHub (Oct 14, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/790

Originally assigned to: @BruceMacD on GitHub.

I reinstalled Ollama after #724 was merged, and the error on startup is now gone. At startup it automatically calculates the number of layers to load into VRAM, but it does so incorrectly, which ultimately results in VRAM not being used at all.

I run the model `nous-hermes:13b-llama2`, and after that I get the following in the service log:

```
oct 11 15:40:01 desktop-pc systemd[1]: Started Ollama Service.
oct 11 15:40:01 desktop-pc ollama[32302]: 2023/10/11 15:40:01 images.go:996: total blobs: 17
oct 11 15:40:01 desktop-pc ollama[32302]: 2023/10/11 15:40:01 images.go:1003: total unused blobs removed: 0
oct 11 15:40:01 desktop-pc ollama[32302]: 2023/10/11 15:40:01 routes.go:572: Listening on 127.0.0.1:11434
oct 11 15:40:01 desktop-pc ollama[32302]: [GIN] 2023/10/11 - 15:40:01 | 200 |      33.576µs |       127.0.0.1 | HEAD     "/"
oct 11 15:40:01 desktop-pc ollama[32302]: [GIN] 2023/10/11 - 15:40:01 | 200 |     468.103µs |       127.0.0.1 | GET      "/api/tags"
oct 11 15:40:08 desktop-pc ollama[32302]: [GIN] 2023/10/11 - 15:40:08 | 200 |      13.184µs |       127.0.0.1 | HEAD     "/"
oct 11 15:40:08 desktop-pc ollama[32302]: [GIN] 2023/10/11 - 15:40:08 | 200 |     370.716µs |       127.0.0.1 | GET      "/api/tags"
oct 11 15:40:08 desktop-pc ollama[32302]: 2023/10/11 15:40:08 llama.go:239: 6144 MiB VRAM available, loading up to 35 GPU layers
oct 11 15:40:08 desktop-pc ollama[32302]: 2023/10/11 15:40:08 llama.go:313: starting llama runner
oct 11 15:40:08 desktop-pc ollama[32302]: 2023/10/11 15:40:08 llama.go:349: waiting for llama runner to start responding
oct 11 15:40:09 desktop-pc ollama[32352]: ggml_init_cublas: found 1 CUDA devices:
oct 11 15:40:09 desktop-pc ollama[32352]:   Device 0: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1
oct 11 15:40:09 desktop-pc ollama[32352]: {"timestamp":1697028009,"level":"INFO","function":"main","line":1190,"message":"build info","build":1009,"commit":"9e232f0"}
oct 11 15:40:09 desktop-pc ollama[32352]: {"timestamp":1697028009,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":3,"total_threads":6,"system_info":"AVX = 1| AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | "}
oct 11 15:40:09 desktop-pc ollama[32352]: llama.cpp: loading model from /usr/share/ollama/.ollama/models/blobs/sha256:f77c91fd65dd06ba92a6517fa5ab5bed86533b4171f0de63c0ab4883ac1ef826
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: format     = ggjt v3 (latest)
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_vocab    = 32032
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_ctx      = 2048
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_embd     = 5120
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_mult     = 256
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_head     = 40
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_head_kv  = 40
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_layer    = 40
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_rot      = 128
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_gqa      = 1
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: rnorm_eps  = 5.0e-06
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: n_ff       = 13824
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: freq_base  = 10000.0
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: freq_scale = 1
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: ftype      = 2 (mostly Q4_0)
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: model size = 13B
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: ggml ctx size =    0.11 MB
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: using CUDA for GPU acceleration
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: mem required  = 1521.06 MB (+ 1600.00 MB per state)
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: allocating batch_size x (640 kB + n_ctx x 160 B) = 480 MB VRAM for the scratch buffer
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: offloading 35 repeating layers to GPU
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: offloaded 35/43 layers to GPU
oct 11 15:40:09 desktop-pc ollama[32352]: llama_model_load_internal: total VRAM used: 6437 MB
oct 11 15:40:09 desktop-pc ollama[32352]: CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:6184: out of memory
```

From the log I can see that 35 layers were selected, which resulted in `out of memory`.
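(As a sanity check, the 480 MB scratch figure in that log is consistent with a default batch size of 512, which is an assumption on my part since the batch size isn't printed: 512 × (640 kB + 2048 × 160 B) = 512 × (640 kB + 320 kB) = 512 × 960 kB = 480 MB.)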

Then I tried manually setting the number of layers to 22:

```
oct 11 15:42:01 desktop-pc systemd[1]: Started Ollama Service.
oct 11 15:42:01 desktop-pc ollama[32411]: 2023/10/11 15:42:01 images.go:996: total blobs: 17
oct 11 15:42:01 desktop-pc ollama[32411]: 2023/10/11 15:42:01 images.go:1003: total unused blobs removed: 0
oct 11 15:42:01 desktop-pc ollama[32411]: 2023/10/11 15:42:01 routes.go:572: Listening on 127.0.0.1:11434
oct 11 15:42:01 desktop-pc ollama[32411]: [GIN] 2023/10/11 - 15:42:01 | 200 |       39.45µs |       127.0.0.1 | HEAD     "/"
oct 11 15:42:01 desktop-pc ollama[32411]: [GIN] 2023/10/11 - 15:42:01 | 200 |     641.805µs |       127.0.0.1 | GET      "/api/tags"
oct 11 15:42:08 desktop-pc ollama[32411]: [GIN] 2023/10/11 - 15:42:08 | 200 |      23.622µs |       127.0.0.1 | HEAD     "/"
oct 11 15:42:08 desktop-pc ollama[32411]: [GIN] 2023/10/11 - 15:42:08 | 200 |     696.378µs |       127.0.0.1 | GET      "/api/tags"
oct 11 15:42:08 desktop-pc ollama[32411]: 2023/10/11 15:42:08 llama.go:313: starting llama runner
oct 11 15:42:08 desktop-pc ollama[32411]: 2023/10/11 15:42:08 llama.go:349: waiting for llama runner to start responding
oct 11 15:42:08 desktop-pc ollama[32462]: ggml_init_cublas: found 1 CUDA devices:
oct 11 15:42:08 desktop-pc ollama[32462]:   Device 0: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1
oct 11 15:42:08 desktop-pc ollama[32462]: {"timestamp":1697028128,"level":"INFO","function":"main","line":1190,"message":"build info","build":1009,"commit":"9e232f0"}
oct 11 15:42:08 desktop-pc ollama[32462]: {"timestamp":1697028128,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":6,"total_threads":6,"system_info":"AVX = 1| AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | "}
oct 11 15:42:08 desktop-pc ollama[32462]: llama.cpp: loading model from /usr/share/ollama/.ollama/models/blobs/sha256:f77c91fd65dd06ba92a6517fa5ab5bed86533b4171f0de63c0ab4883ac1ef826
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: format     = ggjt v3 (latest)
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_vocab    = 32032
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_ctx      = 2048
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_embd     = 5120
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_mult     = 256
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_head     = 40
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_head_kv  = 40
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_layer    = 40
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_rot      = 128
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_gqa      = 1
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: rnorm_eps  = 5.0e-06
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: n_ff       = 13824
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: freq_base  = 10000.0
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: freq_scale = 1
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: ftype      = 2 (mostly Q4_0)
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: model size = 13B
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: ggml ctx size =    0.11 MB
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: using CUDA for GPU acceleration
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: mem required  = 3733.60 MB (+ 1600.00 MB per state)
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: allocating batch_size x (640 kB + n_ctx x 160 B) = 480 MB VRAM for the scratch buffer
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: offloading 22 repeating layers to GPU
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: offloaded 22/43 layers to GPU
oct 11 15:42:08 desktop-pc ollama[32462]: llama_model_load_internal: total VRAM used: 4225 MB
oct 11 15:42:09 desktop-pc ollama[32462]: llama_new_context_with_model: kv self size  = 1600.00 MB
oct 11 15:42:09 desktop-pc ollama[32462]: llama server listening at http://127.0.0.1:62934
```

In this case startup succeeds, but memory still runs out during generation, and I get:

```
oct 11 15:44:22 desktop-pc ollama[32462]: CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:4856: out of memory
oct 11 15:44:23 desktop-pc ollama[32411]: [GIN] 2023/10/11 - 15:44:23 | 200 |   2.83776208s |       127.0.0.1 | POST     "/api/generate"
oct 11 15:44:23 desktop-pc ollama[32411]: 2023/10/11 15:44:23 llama.go:323: llama runner exited with error: exit status 1
```

I'm assuming this behaviour is not the norm.

Generation with 18 layers works successfully for the 13B model.
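A back-of-the-envelope check on why: comparing the two runs above gives (6437 - 4225) MB over (35 - 22) layers, or roughly 170 MB per repeating layer. With the 480 MB scratch buffer and the 1600 MB KV cache also competing for the same 6144 MiB, around 20 layers is the ceiling even before the desktop's own VRAM use is counted, which fits 18 working while 22 fails once generation starts.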

Also, I noticed that for the `llama2-uncensored:7b-chat-q8_0` model no attempt is made to load layers into VRAM at all, even when `num_gpu` is specified explicitly via the Modelfile. Is this normal behaviour?

Can you also answer a couple of additional questions on the topic?

  1. I noticed that the load on the GPU is uneven: sometimes it is 0%, sometimes it spikes to 100%. Is this related to the number of loaded layers, or is it random?
  2. I still see heavy disk activity and plenty of free RAM while generating text. Is that expected? I thought the whole model file would be loaded into RAM (for a 13B model that would take about 7.3 GB), or am I wrong?

I would be very grateful if you could clarify these two points. But of course the underlying problem is the most important one within the scope of this post.

GiteaMirror added the linux, bug labels 2026-04-28 01:40:32 -05:00

@lrvl commented on GitHub (Oct 14, 2023):

Ollama version: 0.1.3
GPU: NVIDIA GeForce GTX 1660 SUPER, 6144 MiB total, 5654 MiB free

Issue as shown in the service log:

```
CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:4856: out of memory
```

Issue reproduction:

Many 7B models of 3.8 GB and 4.1 GB on disk do not load; runs fail with `⠹ Error: error reading llm response: unexpected EOF`.

Note that the `mistral-openorca` model, at 4.1 GB, does load and work fine.

Using the CPU as a workaround does work; example `Modelfile`:

```
# FROM llama2-uncensored:7b-chat

FROM llama2-uncensored:7b-chat
PARAMETER num_gpu 0
TEMPLATE """### HUMAN:
{{ .Prompt }}

### RESPONSE:
"""
SYSTEM """"""
```
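To apply that workaround, the usual build-and-run steps would look something like this (the `llama2-uncensored-cpu` name is just an example):

```
ollama create llama2-uncensored-cpu -f ./Modelfile
ollama run llama2-uncensored-cpu
```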

Thank you!


@vxld100 commented on GitHub (Oct 14, 2023):

I have the same issue with an NVIDIA GeForce GTX 1650 Mobile and 16 GB of RAM.

I too get the error `CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:5487: out of memory`.

However, `mistral-openorca:latest` doesn't work for me either.


@jmorganca commented on GitHub (Oct 14, 2023):

Hi all, thanks for opening this and shedding some light on the bug. Looking into this – Ollama should definitely not be crashing – and will get it fixed as we improve how Ollama allocates VRAM.


@jerzydziewierz commented on GitHub (Oct 15, 2023):

@konstantin1722 How does one manually specify how many layers to load into VRAM? I can't find that option anywhere.

If it's explained here, I promise to add it to the README in a very nice, readable fashion.

If it helps, here's my experience:

  • using `𝄞 ollama run nous-hermes:13b`
  • using an RTX 2060 with 12 GB of memory (12288 MiB)

Loading the model succeeds or fails depending on what other applications are running. I check `nvidia-smi` for processes. Under normal computer usage, Firefox, PyCharm, and some smaller tools are using 3.1 GB of GPU memory, and loading fails altogether. If I manually exit these apps, that frees up memory so only 1.32 GB is used; then the loading succeeds, uses 10438 MiB of GPU memory, and I get ~31 tokens/sec.

In other words, if there is even slightly too little memory, partial loading does not succeed.

When loading `𝄞 ollama run wizard-vicuna-uncensored:30b` I get:

```
Oct 15 11:00:27 ub20phy ollama[297224]:   Device 0: NVIDIA GeForce RTX 2060, compute capability 7.5
(...)
Oct 15 11:00:27 ub20phy ollama[297224]: llama_model_load_internal: offloading 42 repeating layers to GPU
Oct 15 11:00:27 ub20phy ollama[297224]: llama_model_load_internal: offloaded 42/63 layers to GPU
Oct 15 11:00:27 ub20phy ollama[297224]: llama_model_load_internal: total VRAM used: 12649 MB
Oct 15 11:00:27 ub20phy ollama[297224]: CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:6184: out of memory
Oct 15 11:00:27 ub20phy ollama[3540]: 2023/10/15 11:00:27 llama.go:323: llama runner exited with error: exit status 1
```

The 12649 MB requested is, of course, too much: it's actually 361 MB more than my GPU has in the first place, even if there were nothing else on it.

So maybe there is a lost negation sign somewhere.

It then reloads without the GPU and works correctly.

I hope this helps...
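For illustration only, here is the kind of conservative back-of-the-envelope estimate that would avoid over-allocation; every number below is an assumption pulled from the logs in this thread, and none of this is Ollama's actual code:

```
free_mib=5654      # free VRAM as reported by nvidia-smi (example value)
scratch_mib=480    # scratch buffer size from the llama.cpp log above
kv_mib=1600        # "per state" KV cache size from the llama.cpp log above
per_layer_mib=170  # rough per-layer cost for a 13B Q4_0 model
echo $(( (free_mib - scratch_mib - kv_mib) / per_layer_mib ))  # prints 21
```

Even that comes out a little high compared with the ~18 layers that actually work for the 13B case above, so a further safety margin would still be needed.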


@konstantin1722 commented on GitHub (Oct 15, 2023):

> how does one manually specify how many layers to load into VRAM?

@jerzydziewierz I edited the Modelfile. For example, run `ollama show --modelfile nous-hermes:13b-llama2`, then take that output and add:

```
PARAMETER num_gpu 18 (The number of layers was chosen by eye.)
PARAMETER num_thread 6 (I have six physical cores.)
```

Next, I create my preset with `ollama create 13b-GPU-18-CPU-6 -f /storage/ollama-data/Modelfile` and run it with `ollama run 13b-GPU-18-CPU-6:latest`.

As far as I know, you can't currently set the number of layers via command-line arguments, and the same goes for other parameters; pull requests for this have already been proposed.
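That said, if I read the API docs correctly, the HTTP API accepts the same parameters per request via the `options` field, so something like this should also work without building a new model (the values here are just examples):

```
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "nous-hermes:13b-llama2",
  "prompt": "Hello",
  "options": { "num_gpu": 18, "num_thread": 6 }
}'
```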

@jerzydziewierz I'd like to take this opportunity to ask you a question. How fast does the 70B model run for you? How did you measure the tokens/s? Is there a way to run some kind of benchmark, or is it an approximate count?


@ENDER71 commented on GitHub (Oct 15, 2023):

**Edit: it worked for a while... now I get the same error even with `PARAMETER num_gpu 0`.**

This worked for me:

Read the Modelfile of the model you want to use, for example CodeUP: `ollama show --modelfile codeup:latest`

I got something like:

```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM codeup:latest

FROM /usr/share/ollama/.ollama/models/blobs/sha256:a7356fa9c03a3a23a7757c79beb726eb95fd4d300b69c195624018bc1cb5a070
TEMPLATE """{{- if .First }}{{ .System }}{{- end }}

### Instruction:
{{ .Prompt }}

### Response:"""
SYSTEM """Below is an instruction that describes a task. Write a response that appropriately completes the request."""
```

Create a new Modelfile and copy the original into it: `pico ModelCodeUP-NO-GPU`

I modified the Modelfile as follows:

```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM codeup:latest

FROM codeup:latest
TEMPLATE """{{- if .First }}{{ .System }}{{- end }}

### Added Instructions
PARAMETER num_gpu 0

### Instruction:
{{ .Prompt }}

### Response:"""
SYSTEM """Below is an instruction that describes a task. Write a response that appropriately completes the request."""
```

Create a new model using the new Modelfile: `ollama create codeupNOGPU -f ./ModelCodeUP-NO-GPU`

Run the new model: `ollama run codeupNOGPU`

Now it works (with no GPU). Hope it helps.


@missandi commented on GitHub (Oct 16, 2023):

> **Edit: it worked for a while... now I get the same error even with `PARAMETER num_gpu 0`.**
>
> This worked for me:
>
> Read the Modelfile of the model you want to use, for example CodeUP: `ollama show --modelfile codeup:latest`
>
> I got something like:
>
> ```
> # Modelfile generated by "ollama show"
> # To build a new Modelfile based on this one, replace the FROM line with:
> # FROM codeup:latest
>
> FROM /usr/share/ollama/.ollama/models/blobs/sha256:a7356fa9c03a3a23a7757c79beb726eb95fd4d300b69c195624018bc1cb5a070
> TEMPLATE """{{- if .First }}{{ .System }}{{- end }}
>
> ### Instruction:
> {{ .Prompt }}
>
> ### Response:"""
> SYSTEM """Below is an instruction that describes a task. Write a response that appropriately completes the request."""
> ```
>
> Create a new Modelfile and copy the original into it: `pico ModelCodeUP-NO-GPU`
>
> I modified the Modelfile as follows:
>
> ```
> # Modelfile generated by "ollama show"
> # To build a new Modelfile based on this one, replace the FROM line with:
> # FROM codeup:latest
>
> FROM codeup:latest
> TEMPLATE """{{- if .First }}{{ .System }}{{- end }}
>
> ### Added Instructions
> PARAMETER num_gpu 0
>
> ### Instruction:
> {{ .Prompt }}
>
> ### Response:"""
> SYSTEM """Below is an instruction that describes a task. Write a response that appropriately completes the request."""
> ```
>
> Create a new model using the new Modelfile: `ollama create codeupNOGPU -f ./ModelCodeUP-NO-GPU`
>
> Run the new model: `ollama run codeupNOGPU`
>
> Now it works (with no GPU). Hope it helps.

I'm using the same approach, having added:

```
### Added Instructions
PARAMETER num_gpu 0
```

But it's also not working.


@jerzydziewierz commented on GitHub (Oct 17, 2023):

> @jerzydziewierz I'd like to take this opportunity to ask you a question. How fast does the 70B model run for you? How did you measure the tokens/s? Is there a way to run some kind of benchmark, or is it an approximate count?

@konstantin1722:

I have not tested a 70B model on my system yet, so I cannot say.

As for the test/benchmark, there are two ways:

  1. in interactive mode, enter `/set verbose`
  2. from the CLI, run `𝄞 ollama run ${modelname}:latest --verbose "please tell me a story"`

@jerzydziewierz commented on GitHub (Oct 25, 2023):

I think the latest release might have broken something, as this used to work just fine and now it doesn't:

```
𝄞 sudo ollama create l7nogpu -f ./model7
⠋ couldn't open modelfile '/home/user/model7'  Error: failed to open file: open /home/user/model7: permission denied
```

Of course, `/home/user/model7` definitely does exist and has correct permissions.


Update: this issue is tracked here: https://github.com/jmorganca/ollama/issues/892

and a PR is here: https://github.com/jmorganca/ollama/pull/898


@OlivierMary commented on GitHub (Jan 9, 2024):

In case it helps: `PARAMETER` goes before `TEMPLATE`:

```
FROM xxxxx
PARAMETER num_gpu 0
TEMPLATE xxxx
....
```

Example for the current llama2:

```
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama2:latest

FROM /usr/share/ollama/.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
PARAMETER num_gpu 0
TEMPLATE """[INST] <<SYS>>{{ .System }}<</SYS>>
{{ .Prompt }} [/INST]
"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
PARAMETER stop "<<SYS>>"
PARAMETER stop "<</SYS>>"
```

@chuklee commented on GitHub (Mar 25, 2024):

Hello,
I have the same problem using `gemma:7b`:

```
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =    96.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   800.00 MiB
llama_new_context_with_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =    11.02 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   112.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =   518.00 MiB
llama_new_context_with_model: graph splits (measure): 3
{"function":"initialize","level":"INFO","line":440,"msg":"initializing slots","n_slots":1,"tid":"29264","timestamp":1711360140}
{"function":"initialize","level":"INFO","line":452,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"29264","timestamp":1711360140}
time=2024-03-25T10:49:00.526+01:00 level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1590,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"18740","timestamp":1711360140}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"18740","timestamp":1711360140}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":1655,"slot_id":0,"task_id":0,"tid":"18740","timestamp":1711360140}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"18740","timestamp":1711360140}
CUDA error: out of memory
  current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:8658
  cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:256: !"CUDA error"
```

I have 32 GB of RAM and an NVIDIA 3060. Any hints on how to solve this?
