[GH-ISSUE #1952] CUDA out of memory when using long prompts and context sizes #63164

Closed
opened 2026-05-03 12:20:21 -05:00 by GiteaMirror · 19 comments

Originally created by @jmorganca on GitHub (Jan 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1952

Originally assigned to: @mxyng on GitHub.

When using a large context window (via num_ctx) and providing a large prompt, Ollama may run out of memory.
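For reference, `num_ctx` can be set per request through the API options (or with `PARAMETER num_ctx` in a Modelfile). Below is a minimal, illustrative sketch of the kind of request this report is about; the model name, prompt, and context value are placeholders, not taken from the issue.

```python
import requests  # assumes the requests package is installed

resp = requests.post(
    "http://localhost:11434/api/generate",   # default local Ollama endpoint
    json={
        "model": "mistral",                   # placeholder model tag
        "prompt": "...",                      # imagine a prompt tens of thousands of tokens long
        "stream": False,
        "options": {"num_ctx": 32768},        # the large context window this issue is about
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["response"])
```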

GiteaMirror added the bug, nvidia labels 2026-05-03 12:20:40 -05:00

@peperunas commented on GitHub (Jan 14, 2024):

To add to this: based on my observation, it looks like Ollama calculates how many layers to offload to the GPU from the model size alone, ignoring the overhead induced by the custom context size defined in the Modelfile.

In my experience, I can run Mistral with all layers offloaded to the GPU; specifying a bigger context size leads to CUDA running out of memory.
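A rough back-of-the-envelope estimate of the overhead described here: the KV cache grows linearly with `num_ctx`, so a context increase that looks free at load time can add gigabytes of VRAM. The numbers below assume commonly cited Mistral-7B shape parameters (32 layers, 8 KV heads, head dim 128, fp16 cache); this is an approximation, not Ollama's actual accounting.

```python
# Rough KV-cache size estimate (an approximation, not Ollama's internal accounting).
# Per token, each layer stores a key and a value vector for every KV head.
def kv_cache_bytes(num_ctx, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    per_token_per_layer = 2 * n_kv_heads * head_dim * bytes_per_elem  # K and V, fp16
    return num_ctx * n_layers * per_token_per_layer

for ctx in (2048, 8192, 32768):
    print(f"num_ctx={ctx:6d} -> ~{kv_cache_bytes(ctx) / 2**30:.2f} GiB of KV cache")
# num_ctx=  2048 -> ~0.25 GiB
# num_ctx=  8192 -> ~1.00 GiB
# num_ctx= 32768 -> ~4.00 GiB
```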


@basillicus commented on GitHub (Jan 19, 2024):

I have a similar problem. When running Mistral, it offloads 13/33 layers to the GPU, but it only works if the prompt is really small; otherwise it gives out of memory. The parameter n_ctx = 2048. It seems it is not taking into account the maximum amount of context that may be loaded into memory?

Working prompt: 1. What is the capital city of New Zealand? 2. Who painted the Mona Lisa?
Out-of-memory prompt: 1. What is the capital city of New Zealand? 2. Who painted the Mona Lisa? 3. In what year did the Roman Empire fall?

I attach the log file from running the model, first with the working prompt (at 14:26) and then with the out-of-memory prompt (at 14:27).
log-out_of_memory.txt: https://github.com/jmorganca/ollama/files/13989627/log-out_of_memory.txt

Edit: I forgot to mention that Ollama once loaded the same model offloading 8/33 layers to the GPU, and the model worked with a bigger prompt. However, I do not know why Ollama offloaded 8 instead of 13 layers, and I cannot recreate that offloading.


@coder543 commented on GitHub (Jan 25, 2024):

@jmorganca I tested the latest pre-release of 0.1.21 using one of my test cases that could consistently cause an OOM, and it seems like this issue is fixed for me. The q3_K_S model still offloads all 33 layers with a 2048 context, so that's great too. (although the q3_K_M only offloads 32 layers, even though they're virtually the same size? I guess the very slight difference is the tipping point.)

I haven't been pushing Mixtral with large contexts as much for the past week or so, but I also haven't seen any OOMs with the latest pre-release. So, I'm optimistic that this issue is fixed.


@basillicus commented on GitHub (Jan 28, 2024):

I've tried the new 0.1.22 version and it seems that in my case the OOM is also fixed; it offloads fewer layers to the GPU. However, I tried (out of curiosity) yarn-mistral:7b-128k, and, maybe because the context window is so large, it does not offload any layers to the GPU, even when I provide exactly the same prompt.

As a reference, I have a laptop with 32 GB of RAM and a crappy GPU (NVIDIA RTX A1000 Laptop) with 4 GB of VRAM.
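If the 128k default context is what prevents any offload, one thing that may help (a hedged sketch, on the assumption that the per-request options apply to this model as usual) is to explicitly request a much smaller `num_ctx`, so less VRAM has to be reserved for the KV cache:

```python
import requests  # assumes the requests package is installed

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "yarn-mistral:7b-128k",
        "prompt": "1. What is the capital city of New Zealand? 2. Who painted the Mona Lisa?",
        "stream": False,
        "options": {"num_ctx": 8192},  # well below the 128k default, so far less KV cache to reserve
    },
    timeout=600,
)
print(resp.json().get("response", resp.text))
```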


@medivh-jay commented on GitHub (Feb 27, 2024):

I am using ollama/ollama:0.1.27, and the CUDA out of memory error also appears.
The model I am using is gemma:latest.
The graphics card is a Tesla P4.
The dialogue content is: 我现在需要你来扮演这样的角色, 比如当我对你说 "在下午3点打开客厅空调", 你返回给我一个json文本, 内容是一个类似 AST 逻辑的结构 (roughly: "I need you to play this kind of role: for example, when I say 'turn on the living-room air conditioner at 3 PM', you return me a JSON document whose content is an AST-like logical structure").
No need to try multiple times; sending this conversation right after startup crashes it.


@909254 commented on GitHub (Feb 28, 2024):

Me too.


@kennethwork101 commented on GitHub (Mar 5, 2024):

Is there any way to recover from this issue other than rebooting the computer?
Restarting the Ollama server does not work for me.
I am hoping to find a workaround until this issue is fixed.
I run into this issue on Windows 11 with WSL2 Ubuntu and also on Ubuntu 22.04.
The only way for me to recover is to reboot my computer.
Also, is there an estimate of when this issue will be fixed?
ollama version is 0.1.27
22.04.1-Ubuntu
Mar 05 07:39:26 kenneth-MS-7E06 ollama[85037]: CUDA error: out of memory
Mar 05 11:34:14 kenneth-MS-7E06 ollama[556803]: CUDA error: out of memory


@coder543 commented on GitHub (Mar 5, 2024):

@kennethwork101 rebooting should make no difference as far as ollama is concerned. It sounds like you have other apps that are using VRAM on your GPU, causing ollama's calculations to be incorrect. (I'm not a developer on ollama, just someone who uses it.)

You can run nvidia-smi at any time to see what is using VRAM.
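A small sketch of that check in script form, assuming `nvidia-smi` is on PATH (the query fields used here are standard, but availability can vary by driver version):

```python
# Sketch: print overall VRAM usage and the per-process compute list via nvidia-smi.
# Assumes nvidia-smi is installed; parsing is kept deliberately simple.
import subprocess

totals = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.total,memory.used,memory.free", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("VRAM (total, used, free):", totals)

procs = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("Compute processes using VRAM:\n" + (procs or "(none)"))
```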


@thomasWos commented on GitHub (Mar 6, 2024):

I experience the same issue.


@kennethwork101 commented on GitHub (Mar 6, 2024):

I kicked off the test yesterday at around Tue Mar 5 19:18:25 2024, and by this morning the CPU load is very high while the GPU load is low. My workload is mainly a set of online LangChain training exercises that I ported over to use open-source LLMs via Ollama. I use pytest with the LLM model as a fixture, so pytest generates and runs each test with a different LLM; that is the case where I can most easily reproduce this issue. I also wrote a script that runs through all the tests with one LLM before switching to the next; that case takes longer to reproduce, but I ran into the issue there as well.
When this issue happens, nvidia-smi does not show ollama in the Processes list:
|=======================================================================================|
| 0 N/A N/A 3202 G /usr/lib/xorg/Xorg 367MiB |
| 0 N/A N/A 3357 G /usr/bin/gnome-shell 118MiB |
| 0 N/A N/A 10120 G ...seed-version=20240305-080109.174000 27MiB |
+---------------------------------------------------------------------------------------+
I checked the ollama server log, and it shows it took less than an hour to run into the issue:
grep "CPU only" outollama_7.txt
Mar 05 20:02:36 kenneth-MS-7E06 ollama[3037]: time=2024-03-05T20:02:36.518-08:00 level=INFO source=llm.go:111 msg="not enough vram available, falling back to CPU only"
Mar 05 20:23:42 kenneth-MS-7E06 ollama[3037]: time=2024-03-05T20:23:42.435-08:00 level=INFO source=llm.go:111 msg="not enough vram available, falling back to CPU only"
I restarted the ollama server and I do see a new process ID for ollama, but there is no change: GPU usage remains low and the CPU load remains high:
r b swpd free buff cache si so bi bo in cs us sy id wa st
17 0 910336 6437524 41884 121315936 5 0 5 117 4807 1677 98 2 0 0 0
16 0 910336 6437020 41892 121315936 0 0 0 50 5485 5980 98 2 0 0 0
So far, on my Ubuntu 22.04 system with Ollama 0.1.27, the only way I can recover is to reboot my computer.
Is there a workaround for this, or when can we expect this issue to be fixed? Thanks.
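For context, a minimal sketch of the kind of pytest harness described above; every name, model, and value here is hypothetical, not the poster's actual code. The point is only that a fixture parametrized over models makes each test run once per model, so the server keeps loading and swapping models back to back:

```python
# Hypothetical sketch of the described harness: a fixture parametrized over
# several models, so each test runs once per model through the local Ollama server.
import pytest
import requests

MODELS = ["mistral", "llama2", "codellama"]  # illustrative list, not the poster's actual models

@pytest.fixture(params=MODELS)
def llm(request):
    return request.param  # model name handed to each test

def ask(model, prompt):
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    r.raise_for_status()
    return r.json()["response"]

def test_capital_of_new_zealand(llm):
    # Each parametrized run loads (or swaps in) a different model on the server.
    assert "Wellington" in ask(llm, "What is the capital city of New Zealand?")
```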


@kennethwork101 commented on GitHub (Mar 6, 2024):

Able to reproduce the issue on: 0.1.28

grep "CPU only" outollama_11.txt
Mar 06 10:26:15 kenneth-MS-7E06 ollama[2968]: time=2024-03-06T10:26:15.445-08:00 level=INFO source=llm.go:111 msg="not enough vram available, falling back to CPU only"
Mar 06 10:30:19 kenneth-MS-7E06 ollama[2968]: time=2024-03-06T10:30:19.651-08:00 level=INFO source=llm.go:111 msg="not enough vram available, falling back to CPU only"
kenneth@kenneth-MS-7E06:~/ollamalogs$ ollama --version
ollama version is 0.1.28

Wed Mar 6 10:26:12 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4070 Ti Off | 00000000:01:00.0 On | N/A |
| 30% 51C P2 58W / 285W | 10989MiB / 12282MiB | 62% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 2968 C /usr/local/bin/ollama 10322MiB |
| 0 N/A N/A 4837 G /usr/lib/xorg/Xorg 421MiB |
| 0 N/A N/A 4988 G /usr/bin/gnome-shell 92MiB |
| 0 N/A N/A 104107 G ...seed-version=20240305-180126.934000 140MiB |
+---------------------------------------------------------------------------------------+


@vrubzov1957 commented on GitHub (Mar 11, 2024):

Some notes about the merged #3060: the server becomes unavailable not only with long prompts, but also after some time of working normally with Ollama on small prompts (after about an hour, or 20-30 new-topic prompts). Maybe VRAM is leaking?
Logs for this issue are attached in #3060.


@jmorganca commented on GitHub (May 10, 2024):

This should be much improved as of recent versions of Ollama. Will close for now but please do let me know if this isn't fixed and I can re-open


@thomasWos commented on GitHub (May 11, 2024):

Not crashing on my side for the last couple of versions. Thanks.


@ProjectMoon commented on GitHub (Jun 14, 2024):

I'm adding a comment to this, based on advice in #4354. I am still experiencing lots of out of memory errors when it comes to ROCm with higher contexts and default batch size (usually 512). This seems to happen mostly on models that fill the entire VRAM, or would spill a bit over. The crashes often happen without using up the full system RAM (16 GB), so I'd expect things to work, even if they move slowly compared to when the model's in full VRAM.

Comparatively, my laptop, which has 48 GB of RAM but no discrete GPU, can run Mixtral 8x7b at ~2 tokens per second fully on CPU, using up 30 GB of RAM. But if I load, say, Deepseek Lite 16b at a 32k context size, the runner crashes with an out-of-memory error before it even starts generating text. And it doesn't use up the system RAM either.

Should I post logs? Create a new issue?


@ProjectMoon commented on GitHub (Jul 16, 2024):

Would like to prod this issue again, as I am still seeing this with GLM4 at 65k context size. Loads fine without much context, but has issues loading larger contexts. I even set the context size to 8k o_O.

Important bits:

  • It looks like GPU VRAM hits 100% but then can't spill over into system memory for larger contexts. rocm-smi shows VRAM going 98%... 99%... 100%, then a crash.
  • Forcing GPU layers down to 15 out of 41, disabling mmap, and setting num_batch to 256 for GLM4 makes VRAM hover around 35% with an 8k context size (see the sketch after this comment for how these options are passed).
  • Leaving mmap disabled and num_batch at 256, and letting it load all 41 GPU layers, uses 69% VRAM.
  • Setting num_ctx to 60,000 still makes it try to load all layers into the GPU, and then it crashes because it runs out of VRAM.
  • Moving num_gpu down to 30 or even 20 allows it to load more context, but that only delays the inevitable: a long enough context will still crash.

Shouldn't Ollama calculate that it needs to load fewer layers into the GPU in this situation? I can adjust it manually, but if Ollama receives a num_ctx that will make the model crash, shouldn't it start using system RAM instead?
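As referenced in the list above, a hedged sketch of how those knobs can be passed through the API; the option values mirror the comment, and the model tag is illustrative:

```python
import requests  # assumes the requests package is installed

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "glm4",        # illustrative tag
        "prompt": "hello",
        "stream": False,
        "options": {
            "num_ctx": 8192,    # keep the KV cache small
            "num_batch": 256,   # smaller prompt-processing batch
            "num_gpu": 15,      # offload only 15 of the model's 41 layers
            "use_mmap": False,  # disable mmap, as in the comment above
        },
    },
    timeout=600,
)
print(resp.json().get("response", resp.text))
```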


@coniuc2d commented on GitHub (Jul 30, 2024):

> Would like to prod this issue again, as I am still seeing this with GLM4 at 65k context size. Loads fine without much context, but has issues loading larger contexts. I even set the context size to 8k o_O.
>
> Important bits:
>
> • It looks like GPU VRAM hits 100% but then can't spill over into memory for larger contexts. rocm-smi shows VRAM going 98%... 99%.. 100%, then crash.
> • Forcing GPU layers down to 15 out of 41 and disabling mmap and setting num_batch to 256 for GLM 4 makes VRAM hover around 35%, with 8k context size.
> • Leaving mmap disabled and num_batch at 256, and letting it load all 41 GPU layers into memory uses 69% VRAM.
> • Setting num_ctx to 60,000 will still make it try to load all layers into the GPU, and then it crashes because it runs out of VRAM.
> • Moving num_gpu down to 30 or even 20 allows it to load more context. But this is only delaying the inevitable. Long enough context will = crash.
>
> Shouldn't ollama be calculating that it needs to load less layers into the GPU in this situation? Like I can adjust it manually, but if ollama receives num_ctx that'll make the model crash, shouldn't it start using system RAM instead?

If I may add to this: on Windows it is working as intended.
1) Windows, RX 7800 GRE (16 GB), 100% GPU: llama3.1 q8 loaded with num_ctx 16000; Ollama filled the VRAM and expanded into RAM.
2) Ubuntu 22.04, RX 5700 XT (8 GB), 13% CPU / 87% GPU: llama3.1 q8 gives OOM when setting a context bigger than the remaining 13% of GPU VRAM.


@svalaskevicius commented on GitHub (Nov 28, 2024):

still very much happening...

using ollama-rocm-git 0.4.5.git+940e6277-1 on Arch Linux, via the AUR
see the log here: https://pastebin.com/YaUnz5Sg

setting n_ctx to 8k works fine


@coniuc2d commented on GitHub (Dec 17, 2024):

> setting n_ctx to 8k works fine

How do you set it, by the way? I tried the approach in https://github.com/ollama/ollama/issues/5902, but when the model loads I can still see `llm_load_print_meta: n_ctx_train = 32768`, so it still has a 32k context instead of 8192, and it still crashes. If I set a 32k context length in SillyTavern it works, but I guess it starts offloading to RAM and becomes unresponsive.

When I set the context to 32k tokens, llama 7b q4 crashes on a system with 32 GB of VRAM. Page fault. All done in the CLI on Debian 12. I have two Radeon 7600 XT cards, 16 GB each. One would think the extra 20+ GB would be enough.

Reference: github-starred/ollama#63164