[GH-ISSUE #9791] Out of memory errors when running gemma3 #52915

Closed
opened 2026-04-29 01:22:09 -05:00 by GiteaMirror · 75 comments
Owner

Originally created by @ultramarinebicycle on GitHub (Mar 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9791

Originally assigned to: @jmorganca on GitHub.

What is the issue?

Earlier (0.6.0), I could run Gemma 3 12b q4 at around 20-25 tokens per second. Now it stays somewhere between 10-16 tokens per second.

Not only that, but I was also able to use an 8k context length without any issues. Now doing that crashes my computer, so I have to stick with the default 4k.
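For reference, a minimal sketch of how the larger context gets requested interactively (assuming the CLI's /set parameter command; any sufficiently long prompt then triggers the problem):

```shell
ollama run gemma3:12b
>>> /set parameter num_ctx 8192
>>> <paste a long prompt here>
```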

Computer specs:

  • Nvidia RTX 3060 12GB
  • 16GB RAM
  • AMD Ryzen 5600x

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.6.1

GiteaMirror added the bug label 2026-04-29 01:22:09 -05:00
Author
Owner

@jmorganca commented on GitHub (Mar 16, 2025):

Hi so sorry about this. What does ‘ollama ps’ show for you?

Author
Owner

@jmorganca commented on GitHub (Mar 16, 2025):

And would it be possible to share your logs? Sorry again about the crash

Author
Owner

@ultramarinebicycle commented on GitHub (Mar 16, 2025):

Hey @jmorganca

NAME          ID              SIZE     PROCESSOR         UNTIL
gemma3:12b    6fd036cefda5    12 GB    7%/93% CPU/GPU    4 minutes from now

This^ is my recent run with 4k context.

I'm attaching the app and server logs for the time it crashed.

[server-1.log](https://github.com/user-attachments/files/19267483/server-1.log)

[app-1.log](https://github.com/user-attachments/files/19267485/app-1.log)

Author
Owner

@jmorganca commented on GitHub (Mar 16, 2025):

Thanks so much

Author
Owner

@jmorganca commented on GitHub (Mar 16, 2025):

I don't see a crash in the logs you sent. Do you have one for the 8k case where Ollama crashes? Thanks so much.

Author
Owner

@ultramarinebicycle commented on GitHub (Mar 16, 2025):

No longer crashing for whatever reason (maybe some background program interfered with it the last time). Now my PC just becomes unresponsive; can move the cursor around but nothing else. You want me to provide logs for that?
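For what it's worth, the Windows logs that would cover that hang should live in the local app data folder (a sketch, assuming a default install; server.log and app.log there cover the most recent runs):

```shell
explorer %LOCALAPPDATA%\Ollama
```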

Author
Owner

@daihouzi commented on GitHub (Mar 16, 2025):

I encountered the same issue; the memory usage was normal with the 0.6.0 version, but after updating to the 0.6.1 version, the memory usage would double with just a little conversation or image transfer. With 3.7G of parameters, the memory usage would increase from an initial 4G to over 10G.

Author
Owner

@lpdink commented on GitHub (Mar 16, 2025):

Same issue here on a 4070 Ti (12GB). Gemma3 12B occupies 8.1GB of disk space, but after loading into memory/VRAM it surprisingly takes up 12GB (just loading, without running any inference). Is this expected?

❯ ollama ps
NAME          ID              SIZE     PROCESSOR         UNTIL
gemma3:12b    6fd036cefda5    12 GB    7%/93% CPU/GPU    4 minutes from now
❯ ollama --version
ollama version is 0.6.1
❯ ollama list
NAME                                            ID              SIZE      MODIFIED
gemma3:12b                                      6fd036cefda5    8.1 GB    2 hours ago
Author
Owner

@JamesInform commented on GitHub (Mar 16, 2025):

Hi All!

This is my first post here, so first of all thanks for your great work on Ollama.

Using Ollama 0.6.1.
It's even worse on a MacBook M2 Max with 64 GB RAM.

When running gemma3:27b the overall memory consumption rises from 15.5GB to 49.8GB,
so even more than "ollama ps" reports.

The difference in RAM consumption in the following screenshot comes just from doing "ollama run".
No other actions were taken:

Image

There is no such issue with other models.

Hope that helps!

Cheers,
James

Author
Owner

@raymondtri commented on GitHub (Mar 16, 2025):

I have a discord thread running about this. Even after the latest update, Gemma usage is all messed up.

I've got a 5070 Ti with 14.9 GB of available VRAM, and running the 12b_q6_k_l Gemma overflows onto system RAM like nobody's business.

Image

Image

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

gemma3 does not work on my system either. The first chat turn works, then this (using 10GB VRAM, 48GB RAM):

Mär 16 16:01:09 fedora ollama[2664]: [GIN] 2025/03/16 - 16:01:09 | 200 | 1m58s | 127.0.0.1 | POST "/api/chat"
Mär 16 16:01:14 fedora ollama[2664]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 7457.67 MiB on device 0: cudaMalloc failed: out of memory
Mär 16 16:01:14 fedora ollama[2664]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 7819937792
Mär 16 16:01:14 fedora ollama[2664]: SIGSEGV: segmentation violation
Mär 16 16:01:14 fedora ollama[2664]: PC=0x56509735c1d0 m=213 sigcode=1 addr=0x58
Mär 16 16:01:14 fedora ollama[2664]: signal arrived during cgo execution
Mär 16 16:01:14 fedora ollama[2664]: goroutine 8 gp=0xc00048d180 m=213 mp=0xc003080808 [syscall]:
Mär 16 16:01:14 fedora ollama[2664]: runtime.cgocall(0x5650973b01e0, 0xc00612db00)
Mär 16 16:01:14 fedora ollama[2664]: runtime/cgocall.go:167 +0x4b fp=0xc00612dad8 sp=0xc00612daa0 pc=0x56509657c60b
Mär 16 16:01:14 fedora ollama[2664]: github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x7f8bb800a4f0, 0x7f8c0432a720)
Mär 16 16:01:14 fedora ollama[2664]: _cgo_gotypes.go:485 +0x4a fp=0xc00612db00 sp=0xc00612dad8 pc=0x5650969678ca

simon@fedora:~$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
gemma3:12b 6fd036cefda5 12 GB 24%/76% CPU/GPU 46 seconds from now

Author
Owner

@illnesse commented on GitHub (Mar 16, 2025):

4b-27b, all crash for me (with openwebui v0.5.20)

Name            : ollama-git
Version         : 0.6.1.git+7bf793a60-1
Description     : Create, run and share large language models (LLMs) with ROCm
Architecture    : x86_64
URL             : https://github.com/ollama/ollama
Licenses        : MIT
Groups          : None
Provides        : ollama
Depends On      : gcc-libs
Optional Deps   : None
Required By     : None
Optional For    : None
Conflicts With  : ollama
Replaces        : None
Installed Size  : 30.86 MiB
Packager        : Unknown Packager
Build Date      : Fri 14 Mar 2025 11:40:48 PM CET
Install Date    : Sat 15 Mar 2025 12:06:28 AM CET
Install Reason  : Explicitly installed
Install Script  : No
Validated By    : None

ran it in debug to see whats up, hope this helps:

[ollama_gemma3_4b.log](https://github.com/user-attachments/files/19272899/ollama_gemma3_4b.log)

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

A commonality of the crashes is the model loading successfully, answering a query or two, and then crashing because ggml_backend_sched_graph_compute_async() wants to allocate an unrealistically large buffer, 7G in the example from @smerschjohann. For Windows users with recent Nvidia drivers, that ends up in unified memory, causing the RAM blowout shown by @raymondtri. For Linux users without GGML_CUDA_ENABLE_UNIFIED_MEMORY that's an instant OOM.

Other examples:

  • #9687, 8G
  • #9707, 5G
  • #9685, 15G
  • #9782, 13G
  • #9674, 22G

I haven't been able to trigger this on my own systems yet, so there's perhaps some feature of the affected systems contributing to this.
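For anyone else hitting this, a quick sketch for checking whether the same oversized allocation shows up in the server log (assuming a systemd install on Linux; on Windows the same strings appear in server.log under %LOCALAPPDATA%\Ollama):

```shell
journalctl -u ollama --no-pager | grep -E 'cudaMalloc failed|ggml_gallocr_reserve_n'
```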

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

@illnesse Thanks for the log, unfortunately it doesn't contain a crash.

Author
Owner

@wills106 commented on GitHub (Mar 16, 2025):

gemma3:4b & gemma3:12b keep crashing with level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 2"

I have an RTX 3060 12GB and a GTX 1650 4GB; the models only ever sit on the RTX, and they never seem to try to use the GTX 1650 for extra VRAM.

(Edit: wrong log...)

Not sure if that actually indicates what crashed, though?

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

@wills106 You appear to have re-uploaded @illnesse's log.

Author
Owner

@wills106 commented on GitHub (Mar 16, 2025):

😅 Try again...

[ollama.log](https://github.com/user-attachments/files/19273327/ollama.log)

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

This looks slightly different.

ggml-backend.cpp:1556: GGML_ASSERT((int)sched->hash_set.size >= graph->n_nodes + graph->n_leafs) failed
SIGSEGV: segmentation violation
PC=0x15125f40ade7 m=253 sigcode=1 addr=0x204803bf4
signal arrived during cgo execution

goroutine 9 gp=0xc000582a80 m=253 mp=0xc023fa2008 [syscall]:
runtime.cgocall(0x5582cf07f1e0, 0xc000595b00)
        runtime/cgocall.go:167 +0x4b fp=0xc000595ad8 sp=0xc000595aa0 pc=0x5582ce24b60b
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x15110400aaa0, 0x151305306fc0)

So the failure was an ASSERT instead of an OOM, but it still happened in ggml_backend_sched_graph_compute_async(). It may be different because the API was called with a context field, so the usual tokenization that occurs for API calls didn't take place, leading to a different code path that didn't need memory allocation but still failed when computing the graph.

Author
Owner

@bioshazard commented on GitHub (Mar 16, 2025):

I don't run into a crash, but attempting to load Gemma3 27b on my 3090 (24G VRAM) causes my system to lock up with insanely high iowait. Will share logs if I can get to them next time I try it. But I did want to note that I am running into it on 0.6.1 with serve via OpenWebUI.

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

@bioshazard Windows or Linux?

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

@rick-github if I can help pinpoint it in some way, let me know. This happens even with ollama run, so no "third-party settings" are involved.

A random chat:

$ ollama run gemma3:12b
>>> Hi
(answer 120 chars/24 words)
>>> please compare Gemini to Gemma
(answer 5544 chars/783 words)
>>> can you analyse images?
Error: POST predict: Post "http://127.0.0.1:34171/completion": EOF

So it most likely has something to do with context length, but I should have enough memory free:

$ free -m
               total        used        free      shared  buff/cache   available
Mem:           48087        9464       24124         856       15933       38622

after
ollama run gemma3:12b

$nvidia-smi
[...]
|   0  NVIDIA GeForce RTX 3080        Off |   00000000:2B:00.0  On |                  N/A |
|  0%   49C    P0            102W /  370W |    5561MiB /  10240MiB |      0%      Default |

[...]

>>> HI

$nvidia-smi
[...]
|   0  NVIDIA GeForce RTX 3080        Off |   00000000:2B:00.0  On |                  N/A |
|  0%   51C    P0            103W /  370W |    6719MiB /  10240MiB |      1%      Default |
[...]

>>> please compare Gemini to Gemma
[...]
|   0  NVIDIA GeForce RTX 3080        Off |   00000000:2B:00.0  On |                  N/A |
|  58%   51C    P0            103W /  370W |    6820MiB /  10240MiB |      1%      Default |
[...]
... VRAM usage slowly increases during output (rising to 6947 MiB); nothing is freed after the output is done

>>> can you analyse images?
[...]
|  0%   45C    P3             91W /  370W |    1507MiB /  10240MiB |      0%      Default |
[..]
everything is freed again, as it fails

I can free up all other VRAM usage if that helps for debugging.

Author
Owner

@wills106 commented on GitHub (Mar 16, 2025):

So the failure was an ASSERT instead of an OOM

Do you want me to try the 12b and see if I get the same or different error?

I seem to be able to chat with either Gemma3 version ok, but I get the crashes when I use the GenerativeAI in Frigate

Author
Owner

@bioshazard commented on GitHub (Mar 16, 2025):

@bioshazard Windows or Linux?

Kubuntu 24.04 booted from a USB 3.0 SSD. I really should try to get y'all some useful logs or other debug info... I will see about spending some more time on this today. Tag me again if you have any specific tests you want me to run or docker image tags to try on my system. I have it running in an nvidia-enabled docker compose successfully for months against many other models, so hopefully my system is a good example failure case.

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

@bioshazard If you add some logs, could you also include your docker config?

Author
Owner

@bioshazard commented on GitHub (Mar 16, 2025):

@bioshazard If you add some logs, could you also include your docker config?

Here is the docker config, at least for now; it's just slightly tweaked from the Coolify default. I confirmed in the terminal that -V shows 0.6.1.

services:
  ollama-api:
    runtime: nvidia
    image: 'ollama/ollama:latest'
    environment:
      - SERVICE_FQDN_OLLAMA_11434
      - OLLAMA_KEEP_ALIVE=-1m
      - OLLAMA_MAX_LOADED_MODELS=5
    volumes:
      - '/opt/bios-dev/ollama:/root/.ollama'
    healthcheck:
      test:
        - CMD
        - ollama
        - list
      interval: 5s
      timeout: 30s
      retries: 10
    deploy:
      resources:
        reservations:
          devices:
            -
              driver: nvidia
              count: 1
              capabilities:
                - gpu
  open-webui:
    ...
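If it helps with capturing logs later, a sketch for pulling them out of this compose setup (assuming the stack was started with docker compose and the service name ollama-api above):

```shell
# follow the server log, including any runner crash output
docker compose logs -f ollama-api

# check what the scheduler thinks is loaded and how it is split
docker compose exec ollama-api ollama ps
```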
Author
Owner

@wills106 commented on GitHub (Mar 16, 2025):

Just ran nvidia-smi again and noticed that the memory usage on the RTX 3060 is higher than what the processes section indicates.
When it crashes, is it not fully clearing the VRAM? Could this be a different issue altogether?

Image

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

@smerschjohann I'm curious about how you limited your 3080 to 10G. I have a 3080 in the lab and I'd like to duplicate the environment to see if I can trigger the failure.

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

@rick-github It would be nice if I had simply limited my GPU to 10 GB, but no. I'm afraid the early RTX 3080 only came with 10 GB :(

Author
Owner

@ALLMI78 commented on GitHub (Mar 16, 2025):

How can you guys load gemma-3 with only 12 GB of VRAM usage? https://github.com/ollama/ollama/issues/9791#issuecomment-2727276666

gemma3:12b 6fd036cefda5 24 GB 34%/66% CPU/GPU

my issue: https://github.com/ollama/ollama/issues/9730

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

With the env var GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 it stabilizes at 9600-9800 MiB without getting slower. So at least that is a good thing ;)
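For anyone on a systemd install who wants to try the same workaround, a sketch (the variable has to be set in the server's environment, not in the shell running the client):

```shell
sudo systemctl edit ollama
# add under [Service]:
#   Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"
sudo systemctl restart ollama
```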

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

how can you guys load gemma-3 with only 12 gb vram usage? #9791 (comment)

gemma3:12b 6fd036cefda5 24 GB 34%/66% CPU/GPU

my issue

Yeah, it does not fit completely in the GPU, but here are my stats with the environment variable set:

$ ollama ps
NAME          ID              SIZE     PROCESSOR          UNTIL              
gemma3:12b    6fd036cefda5    12 GB    27%/73% CPU/GPU    4 minutes from now    
Author
Owner

@ALLMI78 commented on GitHub (Mar 16, 2025):

12 GB with 8k context length?

Can someone test it with 32k please?

OLLAMA_CONTEXT_LENGTH=32768

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

This does not work for me (but I only have 10GB of VRAM).

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

how can you guys load gemma-3 with only 12 gb vram usage? #9791 (comment)

| ctx   | OLLAMA_NUM_PARALLEL=1 | OLLAMA_NUM_PARALLEL=4 |
| ----- | --------------------- | --------------------- |
| 2048  | 8.8 GB                | 11 GB                 |
| 4096  | 9.6 GB                | 14 GB                 |
| 8192  | 11 GB                 | 21 GB                 |
| 16384 | 14 GB                 | 36 GB                 |
| 32768 | 21 GB                 | 65 GB                 |
| 65536 | 36 GB                 | 124 GB                |
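For anyone who wants to check a single row of that table on their own hardware, a rough sketch (uses a per-request num_ctx override so the server doesn't need restarting; ollama ps then shows the estimated footprint for that context):

```shell
curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma3:12b",
  "prompt": "hi",
  "options": { "num_ctx": 8192 }
}' > /dev/null
ollama ps
```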
Author
Owner

@bioshazard commented on GitHub (Mar 16, 2025):

My errors seem related to booting off a USB SSD. I get nasty FIFO errors when I attempt to load up Gemma 27B in Ollama 0.6.1 in Docker. Qwen32B R1 distill worked fine, but I haven't found any useful logs yet. So count me out of troubleshooting for now. Sorry, y'all.

Author
Owner

@jamon commented on GitHub (Mar 16, 2025):

ollama.service

...
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_CONTEXT_LENGTH=32768"
...
/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 32768 --batch-size 512 --n-gpu-layers 38 --threads 12 --parallel 1 --port 34983

➜ ~ nvidia-smi

Sun Mar 16 14:25:30 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.16              Driver Version: 570.86.16      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090 Ti     Off |   00000000:0B:00.0  On |                  Off |
| 30%   40C    P8             27W /  450W |    9284MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A          427064      C   /usr/local/bin/ollama                  9258MiB |
+-----------------------------------------------------------------------------------------+

➜ ~ ollama ps

NAME          ID              SIZE     PROCESSOR          UNTIL
gemma3:27b    30ddded7fba6    38 GB    36%/64% CPU/GPU    4 minutes from now

It crashes with the context set to 32k; it'll run with it set to 6k or less.

At 6144 context, it's 100% GPU, 16,890 MiB of VRAM used, and it doesn't crash.

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

There is nothing wrong with the settings; Ollama's behavior is wrong here, as it should (and normally does) support CPU offloading just fine.

Author
Owner

@smerschjohann commented on GitHub (Mar 16, 2025):

Calm down, they are trying to investigate here. What do you expect? This is open-source and free software; instead of ranting here, you can help. Also, there are issues on both Windows and Linux, so I'm not sure what you mean with your Windows comment.

With 8K it works on Linux with GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 enabled.

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

It is selecting the correct backend. The problem (in this issue) is that the backend is making unusually large allocations.

Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

https://github.com/ollama/ollama/issues/9791#issuecomment-2727513844

Author
Owner

@bjj commented on GitHub (Mar 16, 2025):

I'm also observing that gemma3:27b q4_k_m is allocating the right amount of space on the GPU, but also allocating a ton of system memory (enough to OOM in my case, but I do have more VRAM than RAM on this system). The same exact configuration runs qwen2.5:32b q4_k_m just fine.

Logs of loading gemma3
time=2025-03-16T23:04:13.867Z level=INFO source=sched.go:508 msg="updated VRAM based on existing loaded models" gpu=GPU-f0543fa855d9e595 library=rocm total="32.0 GiB" available="7.7 GiB"
time=2025-03-16T23:04:14.978Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 gpu=GPU-f0543fa855d9e595 parallel=1 available=34336186368 required="26.8 GiB"
time=2025-03-16T23:04:14.978Z level=INFO source=server.go:105 msg="system memory" total="15.5 GiB" free="14.0 GiB" free_swap="2.1 GiB"
time=2025-03-16T23:04:14.980Z level=INFO source=server.go:138 msg=offload library=rocm layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[32.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="26.8 GiB" memory.required.partial="26.8 GiB" memory.required.kv="7.8 GiB" memory.required.allocations="[26.8 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-03-16T23:04:14.980Z level=WARN source=server.go:196 msg="quantized kv cache requested but flash attention disabled" type=q8_0
time=2025-03-16T23:04:15.096Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-16T23:04:15.103Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-16T23:04:15.108Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-03-16T23:04:15.115Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-16T23:04:15.115Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-16T23:04:15.115Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-16T23:04:15.115Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-16T23:04:15.115Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-16T23:04:15.115Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 16384 --batch-size 512 --n-gpu-layers 63 --threads 4 --no-mmap --parallel 1 --port 37869"
time=2025-03-16T23:04:15.115Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-16T23:04:15.115Z level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-16T23:04:15.116Z level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-16T23:04:15.127Z level=INFO source=runner.go:823 msg="starting ollama engine"
time=2025-03-16T23:04:15.127Z level=INFO source=runner.go:883 msg="Server listening on 127.0.0.1:37869"
time=2025-03-16T23:04:15.250Z level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-16T23:04:15.250Z level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-16T23:04:15.250Z level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=36
/opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
time=2025-03-16T23:04:15.367Z level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx908:sramecc+:xnack- (0x908), VMM: no, Wave Size: 64
load_backend: loaded ROCm backend from /usr/lib/ollama/rocm/libggml-hip.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-haswell.so
time=2025-03-16T23:04:16.765Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-03-16T23:04:17.046Z level=INFO source=ggml.go:289 msg="model weights" buffer=ROCm0 size="16.2 GiB"
time=2025-03-16T23:04:17.046Z level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="1.1 GiB"
...
time=2025-03-16T23:06:04.128Z level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server not responding"
time=2025-03-16T23:06:10.426Z level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-16T23:06:10.684Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: signal: killed"
Author
Owner

@rick-github commented on GitHub (Mar 16, 2025):

Yes, I found the same - allocation of system RAM is much greater for the gemma3 models even when the model is fully hosted in VRAM.

Image

Author
Owner

@wills106 commented on GitHub (Mar 17, 2025):

So the failure was an ASSERT instead of an OOM, but it still happened in ggml_backend_sched_graph_compute_async(). It may be different because the API was called with a context field, so the usual tokenization that occurs for API calls didn't take place, leading to a different code path that didn't need memory allocation but still failed when computing the graph.

I have tried gemma3:12b this time with the following settings:

Image

Seems to fail at a slightly different area now:

ggml-backend.cpp:1188: GGML_ASSERT(n_graph_inputs < GGML_SCHED_MAX_SPLIT_INPUTS) failed
SIGSEGV: segmentation violation
PC=0x147fa340ade7 m=140 sigcode=1 addr=0x204e03f88
signal arrived during cgo execution

goroutine 10 gp=0xc000103880 m=140 mp=0xc1373d6808 [syscall]:
runtime.cgocall(0x56091924e1e0, 0xc000511b00)
        runtime/cgocall.go:167 +0x4b fp=0xc000511ad8 sp=0xc000511aa0 pc=0x56091841a60b

[ollama2.log](https://github.com/user-attachments/files/19280922/ollama2.log)

Do you want me to raise this as a separate issue?

Author
Owner

@stimata-debug commented on GitHub (Mar 18, 2025):

Error Report

Version: 0.6.1
System Configuration: 70+ GB VRAM
Models: 27b, 12b, 4b

Issue Description:
On any model, if the first message of a chat contains an image, I encounter a segmentation fault (SIGSEGV) immediately when the message is received. Otherwise, after about 10 messages on 27b with an image in the chat history, the model crashes in a similar way.

Author
Owner

@hlinden commented on GitHub (Mar 18, 2025):

Version: 0.6.1
System Configuration: 16GB VRAM, 96GB RAM
Model: gemma3:27b
Error: allocating 20513.56 MiB on device 0: cudaMalloc failed: out of memory

Log is attached.

[ollama-0.6.1_gemma3-27b_oom_error.log](https://github.com/user-attachments/files/19315582/ollama-0.6.1_gemma3-27b_oom_error.log)

Author
Owner

@konrad0101 commented on GitHub (Mar 18, 2025):

I'm trying out Ollama 0.6.2-rc0 and there is a substantial drop in quality on vision OCR tasks compared to 0.6.1 (though no longer getting OOM errors). The results went from very good, to unusable (with lots of repeating text in the response). Using gemma3:27b-it-q8_0 on Ubuntu 24.04, RTX 3090.

Author
Owner

@rick-github commented on GitHub (Mar 18, 2025):

RSS is down with 0.6.2.

Image

Author
Owner

@bjj commented on GitHub (Mar 18, 2025):

With ollama:0.6.2-rocm I can also load gemma3:27b without OOM. It does use about 6G more main memory (while the model is fully offloaded to VRAM) than qwen2.5:32b, but it is usable.

@konrad0101 I also see the q4_k_m performing poorly at vision tasks, including repeating image elements. However, a lot of that goes away with better parameters:

FROM gemma3:27b
PARAMETER temperature 1.0
PARAMETER repeat_penalty 1.0
PARAMETER top_k 64
PARAMETER top_p 0.95
PARAMETER min_p 0.01

Even then, the performance does not match an FP8 (not q8, I haven't downloaded that) quant.
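For anyone wanting to try these parameters, a minimal sketch of applying them (the model name gemma3-tuned is just a placeholder):

```shell
# save the PARAMETER block above to a file named Modelfile, then:
ollama create gemma3-tuned -f Modelfile
ollama run gemma3-tuned
```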

Example description of a Rust radial build menu, default parameters, q4_k_m

Here's a description of the icons around the circular menu, starting at the top and going clockwise, based on the image:

  1. Roof Piece: A triangular roof section.
  2. Floor Piece: A square floor section.
  3. Cube: A simple cube shape.
  4. Triangle: A triangle shape.
  5. Wall with Window: A wall section with a window opening.
  6. Doorway: A doorway opening.
  7. Wall with Window: A wall section with a window opening.
  8. Wall with Window: A wall section with a window opening.
  9. Wall with Window: A wall section with a window opening.
  10. Wall with Window: A wall section with a window opening.
  11. Wall with Window: A wall section with a window opening.
  12. Wall with Window: A wall section with a window opening.
  13. Wall with Window: A wall section with a window opening.
  14. Wall with Window: A wall section with a window opening.
  15. Wall with Window: A wall section with a window opening.

It appears the majority of the icons are variations of wall pieces with windows.

...and q4_k_m with suggested parameters

Here's a description of the icons, starting at the cursor position and proceeding clockwise:

  1. Red block: A simple red block appears to be a highlighted selection.
  2. Curved roof: A white icon depicting a rounded rooftop structure.
  3. Ladder: A white icon representing a ladder.
  4. Cylinder: A white icon showing a cylindrical shape.
  5. Ramp: A white icon of a ramp.
  6. Box with top: A white icon depicting a square/cube with an open top.
  7. Pillar: A white icon resembling a column or pillar.
  8. Prism: A white icon of a pyramid/prism structure.
  9. Roof: A white icon depicting a peaked roof.
  10. Triangle with lines: A white icon showing a triangle with intersecting lines, possibly a support structure.
  11. Hexagon with lines: A white icon showing a hexagon with intersecting lines.
  12. Gate: A white icon depicting an arched gate.
  13. Cube: A white icon showing a cube.
  14. House: A white icon of a simple house structure.
  15. Arch: A white icon representing a curved arch.

These icons seem to be options for building or construction, possibly within a game or creative environment.

<!-- gh-comment-id:2733595103 --> @bjj commented on GitHub (Mar 18, 2025): With `ollama:0.6.2-rocm` I can also load `gemma3:27b` without OOM. It does use about 6G more main memory (while the model is fully offloaded to VRAM) than `qwen2.5:32b`, but it is usable. @konrad0101 I also see the q4_k_m performing poorly at vision tasks, including repeating image elements. However, a lot of that goes away with better parameters: ``` FROM gemma3:27b PARAMETER temperature 1.0 PARAMETER repeat_penalty 1.0 PARAMETER top_k 64 PARAMETER top_p 0.95 PARAMETER min_p 0.01 ``` Even then, the performance does not match an FP8 (not q8, I haven't downloaded that) quant. <details> <summary>Example description of a Rust radial build menu, default parameters, q4_k_m</summary> Here's a description of the icons around the circular menu, starting at the top and going clockwise, based on the image: 1. **Roof Piece:** A triangular roof section. 2. **Floor Piece:** A square floor section. 3. **Cube:** A simple cube shape. 4. **Triangle:** A triangle shape. 5. **Wall with Window:** A wall section with a window opening. 6. **Doorway:** A doorway opening. 7. **Wall with Window:** A wall section with a window opening. 8. **Wall with Window:** A wall section with a window opening. 9. **Wall with Window:** A wall section with a window opening. 10. **Wall with Window:** A wall section with a window opening. 11. **Wall with Window:** A wall section with a window opening. 12. **Wall with Window:** A wall section with a window opening. 13. **Wall with Window:** A wall section with a window opening. 14. **Wall with Window:** A wall section with a window opening. 15. **Wall with Window:** A wall section with a window opening. It appears the majority of the icons are variations of wall pieces with windows. </details> <details> <summary>...and q4_k_m with suggested parameters</summary> Here's a description of the icons, starting at the cursor position and proceeding clockwise: 1. **Red block:** A simple red block appears to be a highlighted selection. 2. **Curved roof:** A white icon depicting a rounded rooftop structure. 3. **Ladder:** A white icon representing a ladder. 4. **Cylinder:** A white icon showing a cylindrical shape. 5. **Ramp:** A white icon of a ramp. 6. **Box with top:** A white icon depicting a square/cube with an open top. 7. **Pillar:** A white icon resembling a column or pillar. 8. **Prism:** A white icon of a pyramid/prism structure. 9. **Roof:** A white icon depicting a peaked roof. 10. **Triangle with lines:** A white icon showing a triangle with intersecting lines, possibly a support structure. 11. **Hexagon with lines:** A white icon showing a hexagon with intersecting lines. 12. **Gate:** A white icon depicting an arched gate. 13. **Cube:** A white icon showing a cube. 14. **House:** A white icon of a simple house structure. 15. **Arch:** A white icon representing a curved arch. These icons seem to be options for building or construction, possibly within a game or creative environment. </details>
Author
Owner

@wills106 commented on GitHub (Mar 18, 2025):

I have tried gemma3:12b this time with the following settings:

Image

Seems to fail at a slightly different area now:

ggml-backend.cpp:1188: GGML_ASSERT(n_graph_inputs < GGML_SCHED_MAX_SPLIT_INPUTS) failed
SIGSEGV: segmentation violation
PC=0x147fa340ade7 m=140 sigcode=1 addr=0x204e03f88
signal arrived during cgo execution

goroutine 10 gp=0xc000103880 m=140 mp=0xc1373d6808 [syscall]:
runtime.cgocall(0x56091924e1e0, 0xc000511b00)
        runtime/cgocall.go:167 +0x4b fp=0xc000511ad8 sp=0xc000511aa0 pc=0x56091841a60b

I tried ollama 0.6.2 earlier. With Gemma3:12b and the above settings it was consuming 16GB of RAM, split between the RTX 3060 and CPU, but it was very, very slow.

I then tried the same settings with Gemma3:4b, which seemed fine at first, using about 6.7GB of VRAM.

But when I came back to my server it was hardly responding.

Turns out it was using over 31GB of system RAM:
Image

Even though ollama ps is still showing 6.7GB:
Image

I'll try and limit the docker container to 16GB and see how that behaves.
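
For reference, a minimal sketch of capping the container's RAM, assuming the stock ollama/ollama Docker setup (the 16g figure is just the limit being tested; the flags are plain Docker options, nothing Ollama-specific):

```
docker run -d --gpus=all \
  --memory=16g --memory-swap=16g \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```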

At least it's a step in the right direction, as it's not fully crashed...

Edit:
I tried to get into the logs, but the server was so unresponsive that all I could do was restart the container.

<!-- gh-comment-id:2734336325 --> @wills106 commented on GitHub (Mar 18, 2025): > I have tried gemma3:12b this time with the following settings: > > ![Image](https://github.com/user-attachments/assets/59c17dd3-2e1e-48f8-b822-4ee5b2548d80) > > Seems to fail at a slightly different area now: > > ``` > ggml-backend.cpp:1188: GGML_ASSERT(n_graph_inputs < GGML_SCHED_MAX_SPLIT_INPUTS) failed > SIGSEGV: segmentation violation > PC=0x147fa340ade7 m=140 sigcode=1 addr=0x204e03f88 > signal arrived during cgo execution > > goroutine 10 gp=0xc000103880 m=140 mp=0xc1373d6808 [syscall]: > runtime.cgocall(0x56091924e1e0, 0xc000511b00) > runtime/cgocall.go:167 +0x4b fp=0xc000511ad8 sp=0xc000511aa0 pc=0x56091841a60b > ``` I tried ollama 0.6.2 earlier. With Gemma3:12b with the above settings it was consuming 16GB of RAM, but Split between the RTX3060 and CPU. But was very very slow. I then tried the same settings but with Gemma3:4b which seemed fine at fist. With it using about 6.7GB of VRAM. But came back to my server and it was hardly responding. Turns out it was using over 31GB of System RAM ![Image](https://github.com/user-attachments/assets/21dd8a0b-3c56-4591-933c-71c00ef75d22) Even though ollama PS is still showing 6.7GB ![Image](https://github.com/user-attachments/assets/1b629c92-a24f-4742-bf5d-49158065be09) I'll try and limit the docker container to 16GB and see how that behaves. At least it's a step in the right direction, as it's not fully crashed... Edit: I tried to get into the logs but the server was that unresponsive all I could do was restart the container.
Author
Owner

@JamesInform commented on GitHub (Mar 18, 2025):

Just out of curiosity, and maybe a dumb question:

Why are so many users reporting the same issue in different threads while the maintainers are not able to spot the bug, even though the issue seems to be reproducible immediately on almost every hardware setup, including Apple Silicon with unified memory?

<!-- gh-comment-id:2734612611 --> @JamesInform commented on GitHub (Mar 18, 2025): Just out of curiosity and maybe a dump question: Why are so many users reporting the same issue in different threads and the maintainers are not able to spot the bug, although it seems that on almost every hardware setup including Apple Silicon with unified memory the issue is reproducable immediately?
Author
Owner

@ultramarinebicycle commented on GitHub (Mar 19, 2025):

Update using 0.6.2:

T/s is still not fixed. VRAM is not being saturated and instead RAM is being used. Is this a model issue or an ollama issue?
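
A quick way to confirm where the weights actually ended up (both are standard tools; exact output format varies by version):

```
ollama ps      # PROCESSOR column shows the CPU/GPU split for the loaded model
nvidia-smi     # shows actual VRAM use per process
```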

<!-- gh-comment-id:2735186424 --> @ultramarinebicycle commented on GitHub (Mar 19, 2025): Update using 0.6.2: T/s is still not fixed. VRAM is not being saturated and instead RAM is being used. Is this a model issue or an ollama issue?
Author
Owner

@OSULZER commented on GitHub (Mar 19, 2025):

can confirm, issue still persists

<!-- gh-comment-id:2735657927 --> @OSULZER commented on GitHub (Mar 19, 2025): can confirm, issue still persists
Author
Owner

@nhnzman commented on GitHub (Mar 19, 2025):

Why does Ollama keep logging warnings and restarting?

 3월 19 17:06:22 aitest-1avt68ma env[188425]: [GIN] 2025/03/19 - 17:06:22 | 200 |   37.547024ms |       127.0.0.1 | POST     "/api/show"
 3월 19 17:06:23 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:23.323+09:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-9610e3e07375303f6cd89086b496bcc1ab581177f52042eff536475a29283ba2 gpu=GPU-586e4f48-d68e-7071-7bd3-a4e12bfb08a0 parallel=4 available=15545794560 required="11.7 GiB"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.339+09:00 level=INFO source=server.go:105 msg="system memory" total="125.8 GiB" free="120.5 GiB" free_swap="0 B"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.341+09:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[14.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.7 GiB" memory.required.partial="11.7 GiB" memory.required.kv="3.0 GiB" memory.required.allocations="[11.7 GiB]" memory.weights.total="6.0 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="814.6 MiB" projector.graph="0 B"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.411+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.415+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-9610e3e07375303f6cd89086b496bcc1ab581177f52042eff536475a29283ba2 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 16 --parallel 4 --port 36397"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.454+09:00 level=INFO source=runner.go:823 msg="starting ollama engine"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.454+09:00 level=INFO source=runner.go:883 msg="Server listening on 127.0.0.1:36397"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.666+09:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.666+09:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="Gemma 3 12b It" description="" num_tensors=626 num_key_values=41
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.690+09:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
 3월 19 17:06:24 aitest-1avt68ma env[188425]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
 3월 19 17:06:24 aitest-1avt68ma env[188425]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
 3월 19 17:06:24 aitest-1avt68ma env[188425]: ggml_cuda_init: found 1 CUDA devices:
 3월 19 17:06:24 aitest-1avt68ma env[188425]:   Device 0: Tesla T4, compute capability 7.5, VMM: yes
 3월 19 17:06:24 aitest-1avt68ma env[188425]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
 3월 19 17:06:24 aitest-1avt68ma env[188425]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-skylakex.so
 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.799+09:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
 3월 19 17:06:25 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:25.001+09:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CUDA0 size="6.8 GiB"
 3월 19 17:06:25 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:25.001+09:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="787.5 MiB"
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.442+09:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CUDA0 buffer_type=CUDA0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.442+09:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CPU buffer_type=CUDA_Host
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.442+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.446+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.566+09:00 level=INFO source=server.go:624 msg="llama runner started in 3.13 seconds"
<!-- gh-comment-id:2736116831 --> @nhnzman commented on GitHub (Mar 19, 2025): Why does Ollama keep logging warnings and restarting? ``` 3월 19 17:06:22 aitest-1avt68ma env[188425]: [GIN] 2025/03/19 - 17:06:22 | 200 | 37.547024ms | 127.0.0.1 | POST "/api/show" 3월 19 17:06:23 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:23.323+09:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-9610e3e07375303f6cd89086b496bcc1ab581177f52042eff536475a29283ba2 gpu=GPU-586e4f48-d68e-7071-7bd3-a4e12bfb08a0 parallel=4 available=15545794560 required="11.7 GiB" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.339+09:00 level=INFO source=server.go:105 msg="system memory" total="125.8 GiB" free="120.5 GiB" free_swap="0 B" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.341+09:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[14.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.7 GiB" memory.required.partial="11.7 GiB" memory.required.kv="3.0 GiB" memory.required.allocations="[11.7 GiB]" memory.weights.total="6.0 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="814.6 MiB" projector.graph="0 B" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.411+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.415+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0 3월 19 17:06:24 aitest-1avt68ma env[188425]: 
time=2025-03-19T17:06:24.417+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-9610e3e07375303f6cd89086b496bcc1ab581177f52042eff536475a29283ba2 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 16 --parallel 4 --port 36397" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.437+09:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.454+09:00 level=INFO source=runner.go:823 msg="starting ollama engine" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.454+09:00 level=INFO source=runner.go:883 msg="Server listening on 127.0.0.1:36397" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.666+09:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default="" 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.666+09:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="Gemma 3 12b It" description="" num_tensors=626 num_key_values=41 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.690+09:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model" 3월 19 17:06:24 aitest-1avt68ma env[188425]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 3월 19 17:06:24 aitest-1avt68ma env[188425]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 3월 19 17:06:24 aitest-1avt68ma env[188425]: ggml_cuda_init: found 1 CUDA devices: 3월 19 17:06:24 aitest-1avt68ma env[188425]: Device 0: Tesla T4, compute capability 7.5, VMM: yes 3월 19 17:06:24 aitest-1avt68ma env[188425]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so 3월 19 17:06:24 aitest-1avt68ma env[188425]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-skylakex.so 3월 19 17:06:24 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:24.799+09:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 
CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) 3월 19 17:06:25 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:25.001+09:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CUDA0 size="6.8 GiB" 3월 19 17:06:25 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:25.001+09:00 level=INFO source=ggml.go:289 msg="model weights" buffer=CPU size="787.5 MiB" 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.442+09:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.442+09:00 level=INFO source=ggml.go:356 msg="compute graph" backend=CPU buffer_type=CUDA_Host 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.442+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.446+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.449+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.455+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256 3월 19 17:06:27 aitest-1avt68ma env[188425]: time=2025-03-19T17:06:27.566+09:00 level=INFO source=server.go:624 msg="llama runner started in 3.13 seconds" ```
Author
Owner

@NandaIda commented on GitHub (Mar 19, 2025):

I encountered persistent OOM errors when using the Gemma3:27b model with 2x RTX 3060 12GB + 1x GTX 1060 3GB, specifically when attempting to upload images. The model loads successfully and responds to text-only prompts, but image uploads trigger an out-of-memory crash.

I suspect this is related to the mmproj component (used for multi-modal processing), as I’ve observed that Ollama’s engine loads it differently compared to the llama.cpp implementation. Notably, the same configuration works flawlessly in llama.cpp, suggesting a potential discrepancy in how Ollama handles GPU memory allocation for mmproj.

Commands:

  1. Working llama.cpp Command (no OOM):

    llama-gemma3-cli.exe --flash-attn -ctk q8_0 -ctv q8_0 -m gemma-3-27b-it-Q4_K_M.gguf --mmproj mmproj-model-27b-f16.gguf -ngl 63 -ts 28,34,1 --batch-size 512 --ctx-size 16392
    
  2. Failing Ollama Engine Command (OOM on image uploads):

    C:\Users\username\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\username\.ollama\models\blobs\sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 2048 --batch-size 512 --n-gpu-layers 62 --threads 8 --flash-attn --kv-cache-type q4_0 --no-mmap --parallel 1 --tensor-split 29,33,0 --port 50619
    

Differences:

  • Ollama uses --n-gpu-layers 62, while llama.cpp uses 63.
  • The ctx-size in Ollama is set to 2048, whereas llama.cpp uses 16392 (a much larger context size).
  • The mmproj path is explicitly provided in llama.cpp, but not in the Ollama command (though it may be implicitly loaded).

Could this issue come from how mmproj is managed in Ollama’s GPU memory allocation?
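
For comparison, a hedged sketch of setting the Ollama-side knobs that differ from the llama.cpp run above. The --ctx-size 2048 in the failing command is just Ollama's default num_ctx; num_ctx and num_gpu are set per model via a Modelfile, while flash attention and KV-cache quantization are server environment variables (the tag name here is made up for illustration):

```
cat > Modelfile <<'EOF'
FROM gemma3:27b
PARAMETER num_ctx 16384
PARAMETER num_gpu 63
EOF

# Roughly matching the llama.cpp flags --flash-attn -ctk/-ctv q8_0
# (set in the environment of the ollama server process):
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE=q8_0

ollama create gemma3-27b-16k -f Modelfile
ollama run gemma3-27b-16k
```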

<!-- gh-comment-id:2737363345 --> @NandaIda commented on GitHub (Mar 19, 2025): I encountered persistent OOM errors when using the Gemma3:27b model with **2x RTX 3060 12GB + 1x GTX 1060 3GB**, specifically when attempting to **upload images**. The model loads successfully and responds to text-only prompts, but image uploads trigger an out-of-memory crash. I suspect this is related to the `mmproj` component (used for multi-modal processing), as I’ve observed that Ollama’s engine loads it differently compared to the `llama.cpp` implementation. Notably, the same configuration **works flawlessly in `llama.cpp`**, suggesting a potential discrepancy in how Ollama handles GPU memory allocation for `mmproj`. ### Codes: 1. **Working `llama.cpp` Command** (no OOM): ```bash llama-gemma3-cli.exe --flash-attn -ctk q8_0 -ctv q8_0 -m gemma-3-27b-it-Q4_K_M.gguf --mmproj mmproj-model-27b-f16.gguf -ngl 63 -ts 28,34,1 --batch-size 512 --ctx-size 16392 ``` 2. **Failing Ollama Engine Command** (OOM on image uploads): ```bash C:\Users\username\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\username\.ollama\models\blobs\sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 2048 --batch-size 512 --n-gpu-layers 62 --threads 8 --flash-attn --kv-cache-type q4_0 --no-mmap --parallel 1 --tensor-split 29,33,0 --port 50619 ``` ### Differences: - Ollama uses `--n-gpu-layers 62`, while `llama.cpp` uses `63`. - The `ctx-size` in Ollama is set to `2048`, whereas `llama.cpp` uses `16392` (a much larger context size). - The `mmproj` path is explicitly provided in `llama.cpp`, but not in the Ollama command (though it may be implicitly loaded). Could this issue come from how `mmproj` is managed in Ollama’s GPU memory allocation?
Author
Owner

@NikhilM42 commented on GitHub (Mar 19, 2025):

This looks to be an ollama issue. I ran an update from 0.6 to 0.6.1 and all of a sudden I was hit with "connection forcibly closed by remote host" errors. It looks to be a memory access issue, based on my logs and the logs of this fellow here: 9816

<!-- gh-comment-id:2737456193 --> @NikhilM42 commented on GitHub (Mar 19, 2025): This looks to be an ollama issue, I ran an update from 0.6 to 0.6.1 and all of a sudden I was hit with "connection forcibly closed by remote host" errors. It looks to be a memory access issue, based on my logs and the logs of this fellow here [9816](https://github.com/ollama/ollama/issues/9816)
Author
Owner

@NikhilM42 commented on GitHub (Mar 19, 2025):

Nevermind, it looks like I just had an outdated AMD driver 🤦🏽

<!-- gh-comment-id:2737733609 --> @NikhilM42 commented on GitHub (Mar 19, 2025): Nevermind, it looks like I just had an outdated AMD driver 🤦🏽
Author
Owner

@rick-github commented on GitHub (Mar 20, 2025):

Bisected the commits between 0.6.0 and 0.6.1 and token generation rate falls 25% at a422ba39c9.

EDIT: ignore this, I re-ran the test to compare with 0.6.3-rc0 and didn't see the same drop, so the experimental config was flawed.

<!-- gh-comment-id:2738578089 --> @rick-github commented on GitHub (Mar 20, 2025): Bisected the commits between 0.6.0 and 0.6.1 and token generation rate falls 25% at a422ba39c94adc870da84e5fa442c0bf81c77f27. EDIT: ignore this, I re-ran the test to compare with 0.6.3-rc0 and didn't see same same drop, so the experimental config was flawed.
Author
Owner

@alsimms commented on GitHub (Mar 20, 2025):

I can confirm this issue as well. I have attached a full debug log which may provide some answers. I can run gemma2 27B and Qwen 32B on this setup, but Gemma3-12b-it_K_M crashes and so does Gemma3-27b-it_K_M. I will try to replicate the crash on 12B and submit another debug log.

Using ollama 0.6.2

Here is part of the issue.

##############################################Part1

time=2025-03-20T04:48:16.682Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
gemma3-27b-q4_K_M_debug.txt

time=2025-03-20T04:48:16.817Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:16.965Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:16.965Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.123572171 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:16.965Z level=DEBUG source=sched.go:385 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:16.965Z level=DEBUG source=sched.go:303 msg="unload completed" modelPath=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:16.965Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:17.103Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:17.240Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:17.240Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.39818804 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:17.240Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:17.377Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:17.526Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:17.526Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.683948942 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:17.526Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:17.661Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:17.806Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:17.942Z level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541
time=2025-03-20T04:48:17.943Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.8 GiB 9.6 GiB]"
time=2025-03-20T04:48:17.947Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.8 GiB 9.6 GiB]"
time=2025-03-20T04:48:17.950Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:18.100Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:18.237Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:18.237Z level=INFO source=server.go:105 msg="system memory" total="31.3 GiB" free="16.4 GiB" free_swap="0 B"
time=2025-03-20T04:48:18.237Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.6 GiB 9.8 GiB]"
time=2025-03-20T04:48:18.241Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=99 layers.model=63 layers.offload=52 layers.split=22,30 memory.available="[9.6 GiB 9.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.8 GiB" memory.required.partial="19.1 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[9.5 GiB 9.6 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.6 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-03-20T04:48:18.241Z level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
time=2025-03-20T04:48:18.396Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]|\s[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-20T04:48:18.401Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-20T04:48:18.401Z level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[ ]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-03-20T04:48:18.407Z level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
time=2025-03-20T04:48:18.407Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]|\s[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-20T04:48:18.412Z level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[ ]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-03-20T04:48:18.418Z level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:335 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
time=2025-03-20T04:48:18.418Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 2048 --batch-size 512 --n-gpu-layers 99 --verbose --threads 8 --no-mmap --parallel 1 --tensor-split 22,30 --port 34497"
time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:423 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama/cuda_v12:/usr/lib/ollama CUDA_VISIBLE_DEVICES=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a,GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce]"
time=2025-03-20T04:48:18.440Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-20T04:48:18.440Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-03-20T04:48:18.441Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-03-20T04:48:18.710Z level=INFO source=runner.go:763 msg="starting ollama engine"
time=2025-03-20T04:48:18.711Z level=INFO source=runner.go:823 msg="Server listening on 127.0.0.1:34497"
time=2025-03-20T04:48:18.858Z level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-20T04:48:18.858Z level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-20T04:48:18.858Z level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=36
time=2025-03-20T04:48:18.859Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
time=2025-03-20T04:48:18.943Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA P102-100, compute capability 6.1, VMM: yes
Device 1: NVIDIA P102-100, compute capability 6.1, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib
time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib64
time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sandybridge.so
time=2025-03-20T04:48:19.721Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=mm.mm_input_projection.weight shape="[5376 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=mm.mm_soft_emb_norm.weight shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=output_norm.weight shape=[5376] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=token_embd.weight shape="[5376 262144]" dtype=14 buffer_type=CPU
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=output.weight shape="[5376 262144]" dtype=14 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_k.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_output.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_q.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_v.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1

##########################################################Part2

ggml_backend_cuda_buffer_type_alloc_buffer: allocating 10180.80 MiB on device 1: cudaMalloc failed: out of memory
SIGSEGV: segmentation violation
PC=0x5642219fee1d m=8 sigcode=1 addr=0x60
signal arrived during cgo execution

goroutine 10 gp=0xc000582700 m=8 mp=0xc000600008 [syscall]:
runtime.cgocall(0x564221a518d0, 0xc000047268)
runtime/cgocall.go:167 +0x4b fp=0xc000047240 sp=0xc000047208 pc=0x564220c1d96b
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_buffer_set_usage(0x0, 0x1)
_cgo_gotypes.go:249 +0x45 fp=0xc000047268 sp=0xc000047240 pc=0x564221016565
github.com/ollama/ollama/ml/backend/ggml.New.func12(...)
github.com/ollama/ollama/ml/backend/ggml/ggml.go:284
github.com/ollama/ollama/ml/backend/ggml.New(0xc0001360e0, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0})
github.com/ollama/ollama/ml/backend/ggml/ggml.go:284 +0x18cb fp=0xc000047d58 sp=0xc000047268 pc=0x56422101cb4b
github.com/ollama/ollama/ml.NewBackend(0xc0001360e0, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0})
github.com/ollama/ollama/ml/backend.go:91 +0x9c fp=0xc000047da8 sp=0xc000047d58 pc=0x564221010a3c
github.com/ollama/ollama/model.New({0x7ffe04599c7b?, 0x0?}, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0})
github.com/ollama/ollama/model/model.go:104 +0xfb fp=0xc000047ee0 sp=0xc000047da8 pc=0x56422104a67b
github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc0005c57a0, {0x7ffe04599c7b, 0x62}, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0}, ...)
github.com/ollama/ollama/runner/ollamarunner/runner.go:689 +0x95 fp=0xc000047f40 sp=0xc000047ee0 pc=0x5642210d2c15
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap1()
github.com/ollama/ollama/runner/ollamarunner/runner.go:793 +0x91 fp=0xc000047fe0 sp=0xc000047f40 pc=0x5642210d40d1
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000047fe8 sp=0xc000047fe0 pc=0x564220c283a1
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
github.com/ollama/ollama/runner/ollamarunner/runner.go:793 +0x9c5

goroutine 1 gp=0xc000002380 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc0005cf648 sp=0xc0005cf628 pc=0x564220c20c6e
runtime.netpollblock(0xc0005cf698?, 0x20bba426?, 0x42?)
runtime/netpoll.go:575 +0xf7 fp=0xc0005cf680 sp=0xc0005cf648 pc=0x564220be5a57
internal/poll.runtime_pollWait(0x7f5479521eb0, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc0005cf6a0 sp=0xc0005cf680 pc=0x564220c1fe85
internal/poll.(*pollDesc).wait(0xc000133c80?, 0x900000036?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005cf6c8 sp=0xc0005cf6a0 pc=0x564220ca7307
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000133c80)
internal/poll/fd_unix.go:620 +0x295 fp=0xc0005cf770 sp=0xc0005cf6c8 pc=0x564220cac6d5
net.(*netFD).accept(0xc000133c80)
net/fd_unix.go:172 +0x29 fp=0xc0005cf828 sp=0xc0005cf770 pc=0x564220d1f4e9
net.(*TCPListener).accept(0xc000142880)
net/tcpsock_posix.go:159 +0x1b fp=0xc0005cf878 sp=0xc0005cf828 pc=0x564220d34e9b
net.(*TCPListener).Accept(0xc000142880)
net/tcpsock.go:380 +0x30 fp=0xc0005cf8a8 sp=0xc0005cf878 pc=0x564220d33d50
net/http.(*onceCloseListener).Accept(0xc0004b81b0?)
:1 +0x24 fp=0xc0005cf8c0 sp=0xc0005cf8a8 pc=0x564220f4b384
net/http.(*Server).Serve(0xc0001f1500, {0x564221efad58, 0xc000142880})
net/http/server.go:3424 +0x30c fp=0xc0005cf9f0 sp=0xc0005cf8c0 pc=0x564220f22c4c
github.com/ollama/ollama/runner/ollamarunner.Execute({0xc000034190, 0x12, 0x13})
github.com/ollama/ollama/runner/ollamarunner/runner.go:824 +0xe29 fp=0xc0005cfd08 sp=0xc0005cf9f0 pc=0x5642210d3d49
github.com/ollama/ollama/runner.Execute({0xc000034170?, 0x0?, 0x0?})
github.com/ollama/ollama/runner/runner.go:20 +0xc9 fp=0xc0005cfd30 sp=0xc0005cfd08 pc=0x5642210d49a9
github.com/ollama/ollama/cmd.NewCLI.func2(0xc0001f1200?, {0x564221a6d053?, 0x4?, 0x564221a6d057?})
github.com/ollama/ollama/cmd/cmd.go:1327 +0x45 fp=0xc0005cfd58 sp=0xc0005cfd30 pc=0x564221822625
github.com/spf13/cobra.(*Command).execute(0xc0004baf08, {0xc000495180, 0x13, 0x14})
github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc0005cfe78 sp=0xc0005cfd58 pc=0x564220d98b3c
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004a6908)
github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc0005cff30 sp=0xc0005cfe78 pc=0x564220d99385
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
github.com/ollama/ollama/main.go:12 +0x4d fp=0xc0005cff50 sp=0xc0005cff30 pc=0x56422182298d
runtime.main()
runtime/proc.go:283 +0x29d fp=0xc0005cffe0 sp=0xc0005cff50 pc=0x564220bed05d
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0005cffe8 sp=0xc0005cffe0 pc=0x564220c283a1

goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000070fa8 sp=0xc000070f88 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.forcegchelper()
runtime/proc.go:348 +0xb8 fp=0xc000070fe0 sp=0xc000070fa8 pc=0x564220bed398
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000070fe8 sp=0xc000070fe0 pc=0x564220c283a1
created by runtime.init.7 in goroutine 1
runtime/proc.go:336 +0x1a

goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000071780 sp=0xc000071760 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.bgsweep(0xc000040080)
runtime/mgcsweep.go:316 +0xdf fp=0xc0000717c8 sp=0xc000071780 pc=0x564220bd7a5f
runtime.gcenable.gowrap1()
runtime/mgc.go:204 +0x25 fp=0xc0000717e0 sp=0xc0000717c8 pc=0x564220bcbe45
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000717e8 sp=0xc0000717e0 pc=0x564220c283a1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x564221c24118?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000071f78 sp=0xc000071f58 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.(*scavengerState).park(0x564222762b20)
runtime/mgcscavenge.go:425 +0x49 fp=0xc000071fa8 sp=0xc000071f78 pc=0x564220bd54a9
runtime.bgscavenge(0xc000040080)
runtime/mgcscavenge.go:658 +0x59 fp=0xc000071fc8 sp=0xc000071fa8 pc=0x564220bd5a39
runtime.gcenable.gowrap2()
runtime/mgc.go:205 +0x25 fp=0xc000071fe0 sp=0xc000071fc8 pc=0x564220bcbde5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x564220c283a1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
runtime.gopark(0x1b8?, 0xc000002380?, 0x1?, 0x23?, 0xc000070688?)
runtime/proc.go:435 +0xce fp=0xc000070630 sp=0xc000070610 pc=0x564220c20c6e
runtime.runfinq()
runtime/mfinal.go:196 +0x107 fp=0xc0000707e0 sp=0xc000070630 pc=0x564220bcae07
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000707e8 sp=0xc0000707e0 pc=0x564220c283a1
created by runtime.createfing in goroutine 1
runtime/mfinal.go:166 +0x3d

goroutine 6 gp=0xc0001d08c0 m=nil [chan receive]:
runtime.gopark(0xc00022b540?, 0xc00011e018?, 0x60?, 0x27?, 0x564220d06228?)
runtime/proc.go:435 +0xce fp=0xc000072718 sp=0xc0000726f8 pc=0x564220c20c6e
runtime.chanrecv(0xc00003e3f0, 0x0, 0x1)
runtime/chan.go:664 +0x445 fp=0xc000072790 sp=0xc000072718 pc=0x564220bbd005
runtime.chanrecv1(0x0?, 0x0?)
runtime/chan.go:506 +0x12 fp=0xc0000727b8 sp=0xc000072790 pc=0x564220bbcb92
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
runtime/mgc.go:1799 +0x2f fp=0xc0000727e0 sp=0xc0000727b8 pc=0x564220bcefef
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000727e8 sp=0xc0000727e0 pc=0x564220c283a1
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
runtime/mgc.go:1794 +0x85

goroutine 7 gp=0xc0001d1340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000072f38 sp=0xc000072f18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc000072fc8 sp=0xc000072f38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc000072fe0 sp=0xc000072fc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000072fe8 sp=0xc000072fe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]:
runtime.gopark(0x564222811280?, 0x1?, 0x64?, 0x1b?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00006c738 sp=0xc00006c718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00006c7c8 sp=0xc00006c738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00006c7e0 sp=0xc00006c7c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda0d37?, 0x3?, 0xf4?, 0x3d?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 8 gp=0xc0001d1500 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda1cb6?, 0x3?, 0x50?, 0x39?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000073738 sp=0xc000073718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc0000737c8 sp=0xc000073738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc0000737e0 sp=0xc0000737c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000737e8 sp=0xc0000737e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda0a73?, 0x3?, 0xb5?, 0x68?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00006cf38 sp=0xc00006cf18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00006cfc8 sp=0xc00006cf38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00006cfe0 sp=0xc00006cfc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00006cfe8 sp=0xc00006cfe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda055e?, 0x3?, 0xd?, 0xbc?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 9 gp=0xc0001d16c0 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda0f0e?, 0x3?, 0x70?, 0x30?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000073f38 sp=0xc000073f18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc000073fc8 sp=0xc000073f38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc000073fe0 sp=0xc000073fc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000073fe8 sp=0xc000073fe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 20 gp=0xc000504380 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda05ec?, 0x3?, 0x1c?, 0x50?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00006d738 sp=0xc00006d718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00006d7c8 sp=0xc00006d738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00006d7e0 sp=0xc00006d7c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00006d7e8 sp=0xc00006d7e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105

goroutine 11 gp=0xc0005828c0 m=nil [sync.WaitGroup.Wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0xc0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00011d6d0 sp=0xc00011d6b0 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.semacquire1(0xc0005c57a8, 0x0, 0x1, 0x0, 0x18)
runtime/sema.go:188 +0x229 fp=0xc00011d738 sp=0xc00011d6d0 pc=0x564220c00629
sync.runtime_SemacquireWaitGroup(0x0?)
runtime/sema.go:110 +0x25 fp=0xc00011d770 sp=0xc00011d738 pc=0x564220c22685
sync.(*WaitGroup).Wait(0x0?)
sync/waitgroup.go:118 +0x48 fp=0xc00011d798 sp=0xc00011d770 pc=0x564220c33e08
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0005c57a0, {0x564221efd020, 0xc0005ada40})
github.com/ollama/ollama/runner/ollamarunner/runner.go:329 +0x25 fp=0xc00011d7b8 sp=0xc00011d798 pc=0x5642210cfce5
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap2()
github.com/ollama/ollama/runner/ollamarunner/runner.go:800 +0x28 fp=0xc00011d7e0 sp=0xc00011d7b8 pc=0x5642210d4008
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00011d7e8 sp=0xc00011d7e0 pc=0x564220c283a1
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
github.com/ollama/ollama/runner/ollamarunner/runner.go:800 +0xa9c

goroutine 12 gp=0xc000102fc0 m=nil [IO wait]:
runtime.gopark(0x564220caa905?, 0xc000132100?, 0x40?, 0xda?, 0xb?)
runtime/proc.go:435 +0xce fp=0xc0005cd948 sp=0xc0005cd928 pc=0x564220c20c6e
runtime.netpollblock(0x564220c440f8?, 0x20bba426?, 0x42?)
runtime/netpoll.go:575 +0xf7 fp=0xc0005cd980 sp=0xc0005cd948 pc=0x564220be5a57
internal/poll.runtime_pollWait(0x7f5479521d98, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc0005cd9a0 sp=0xc0005cd980 pc=0x564220c1fe85
internal/poll.(*pollDesc).wait(0xc000132100?, 0xc002f86000?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005cd9c8 sp=0xc0005cd9a0 pc=0x564220ca7307
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000132100, {0xc002f86000, 0x1000, 0x1000})
internal/poll/fd_unix.go:165 +0x27a fp=0xc0005cda60 sp=0xc0005cd9c8 pc=0x564220ca85fa
net.(*netFD).Read(0xc000132100, {0xc002f86000?, 0xc0005cdad0?, 0x564220ca77c5?})
net/fd_posix.go:55 +0x25 fp=0xc0005cdaa8 sp=0xc0005cda60 pc=0x564220d1d545
net.(*conn).Read(0xc0005a4010, {0xc002f86000?, 0x0?, 0x0?})
net/net.go:194 +0x45 fp=0xc0005cdaf0 sp=0xc0005cdaa8 pc=0x564220d2b905
net/http.(*connReader).Read(0xc0000b06c0, {0xc002f86000, 0x1000, 0x1000})
net/http/server.go:798 +0x159 fp=0xc0005cdb40 sp=0xc0005cdaf0 pc=0x564220f17af9
bufio.(*Reader).fill(0xc0001101e0)
bufio/bufio.go:113 +0x103 fp=0xc0005cdb78 sp=0xc0005cdb40 pc=0x564220d430a3
bufio.(*Reader).Peek(0xc0001101e0, 0x4)
bufio/bufio.go:152 +0x53 fp=0xc0005cdb98 sp=0xc0005cdb78 pc=0x564220d431d3
net/http.(*conn).serve(0xc0004b81b0, {0x564221efcfe8, 0xc000704840})
net/http/server.go:2137 +0x785 fp=0xc0005cdfb8 sp=0xc0005cdb98 pc=0x564220f1d8e5
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3454 +0x28 fp=0xc0005cdfe0 sp=0xc0005cdfb8 pc=0x564220f23048
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0005cdfe8 sp=0xc0005cdfe0 pc=0x564220c283a1
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3454 +0x485

rax 0x564221a518d0
rbx 0xc000047268
rcx 0xffffffffffffffd8
rdx 0xc0000471f8
rdi 0x0
rsi 0x1
rbp 0x0
rsp 0x7f54727fbe00
r8 0xc000600008
r9 0x0
r10 0x7f5400e00b4b
r11 0x0
r12 0x1
r13 0x0
r14 0xc000582700
r15 0x5642210d41a0
rip 0x5642219fee1d
rflags 0x10206
cs 0x33
fs 0x0
gs 0x0
time=2025-03-20T04:48:19.815Z level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 2"
time=2025-03-20T04:48:19.947Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory"

<!-- gh-comment-id:2739187765 --> @alsimms commented on GitHub (Mar 20, 2025): I can confirm this issue as well. I have attached a full debug log which may provide some answers. I can run gemma2 27B and Qwen 32B on this setup but Gemma3-12b-it_K_M crashes and so does Gemma3-27b-it_K_M. I will try to replicate the crash on 12B and submit another debug. Using ollama 6.2 Here is part of the issue. ##############################################Part1 time=2025-03-20T04:48:16.682Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05 dlsym: cuInit - 0x7f3ac06a8bc0 dlsym: cuDriverGetVersion - 0x7f3ac06a8be0 dlsym: cuDeviceGetCount - 0x7f3ac06a8c20 dlsym: cuDeviceGet - 0x7f3ac06a8c00 dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00 dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60 dlsym: cuDeviceGetName - 0x7f3ac06a8c40 dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0 dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20 dlsym: cuCtxDestroy - 0x7f3ac070d850 calling cuInit calling cuDriverGetVersion raw version 0x2f08 CUDA driver version: 12.4 calling cuDeviceGetCount device count 2 time=2025-03-20T04:48:16.817Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a nam [gemma3-27b-q4_K_M_debug.txt](https://github.com/user-attachments/files/19359369/gemma3-27b-q4_K_M_debug.txt) e="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB" time=2025-03-20T04:48:16.965Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB" releasing cuda driver library time=2025-03-20T04:48:16.965Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.123572171 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3 time=2025-03-20T04:48:16.965Z level=DEBUG source=sched.go:385 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3 time=2025-03-20T04:48:16.965Z level=DEBUG source=sched.go:303 msg="unload completed" modelPath=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3 time=2025-03-20T04:48:16.965Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05 dlsym: cuInit - 0x7f3ac06a8bc0 dlsym: cuDriverGetVersion - 0x7f3ac06a8be0 dlsym: cuDeviceGetCount - 0x7f3ac06a8c20 dlsym: cuDeviceGet - 0x7f3ac06a8c00 dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00 dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60 dlsym: cuDeviceGetName - 0x7f3ac06a8c40 dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0 dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20 dlsym: cuCtxDestroy - 0x7f3ac070d850 calling cuInit calling cuDriverGetVersion raw version 0x2f08 CUDA driver version: 12.4 calling cuDeviceGetCount device count 2 time=2025-03-20T04:48:17.103Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" 
overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB" time=2025-03-20T04:48:17.240Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB" releasing cuda driver library time=2025-03-20T04:48:17.240Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.39818804 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3 time=2025-03-20T04:48:17.240Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05 dlsym: cuInit - 0x7f3ac06a8bc0 dlsym: cuDriverGetVersion - 0x7f3ac06a8be0 dlsym: cuDeviceGetCount - 0x7f3ac06a8c20 dlsym: cuDeviceGet - 0x7f3ac06a8c00 dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00 dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60 dlsym: cuDeviceGetName - 0x7f3ac06a8c40 dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0 dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20 dlsym: cuCtxDestroy - 0x7f3ac070d850 calling cuInit calling cuDriverGetVersion raw version 0x2f08 CUDA driver version: 12.4 calling cuDeviceGetCount device count 2 time=2025-03-20T04:48:17.377Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB" time=2025-03-20T04:48:17.526Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB" releasing cuda driver library time=2025-03-20T04:48:17.526Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.683948942 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3 time=2025-03-20T04:48:17.526Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05 dlsym: cuInit - 0x7f3ac06a8bc0 dlsym: cuDriverGetVersion - 0x7f3ac06a8be0 dlsym: cuDeviceGetCount - 0x7f3ac06a8c20 dlsym: cuDeviceGet - 0x7f3ac06a8c00 dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00 dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60 dlsym: cuDeviceGetName - 0x7f3ac06a8c40 dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0 dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20 dlsym: cuCtxDestroy - 0x7f3ac070d850 calling cuInit calling cuDriverGetVersion raw version 0x2f08 CUDA driver version: 12.4 calling cuDeviceGetCount device count 2 time=2025-03-20T04:48:17.661Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB" time=2025-03-20T04:48:17.806Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 
B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB" releasing cuda driver library time=2025-03-20T04:48:17.942Z level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 time=2025-03-20T04:48:17.943Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.8 GiB 9.6 GiB]" time=2025-03-20T04:48:17.947Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.8 GiB 9.6 GiB]" time=2025-03-20T04:48:17.950Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05 dlsym: cuInit - 0x7f3ac06a8bc0 dlsym: cuDriverGetVersion - 0x7f3ac06a8be0 dlsym: cuDeviceGetCount - 0x7f3ac06a8c20 dlsym: cuDeviceGet - 0x7f3ac06a8c00 dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00 dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60 dlsym: cuDeviceGetName - 0x7f3ac06a8c40 dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0 dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20 dlsym: cuCtxDestroy - 0x7f3ac070d850 calling cuInit calling cuDriverGetVersion raw version 0x2f08 CUDA driver version: 12.4 calling cuDeviceGetCount device count 2 time=2025-03-20T04:48:18.100Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB" time=2025-03-20T04:48:18.237Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB" releasing cuda driver library time=2025-03-20T04:48:18.237Z level=INFO source=server.go:105 msg="system memory" total="31.3 GiB" free="16.4 GiB" free_swap="0 B" time=2025-03-20T04:48:18.237Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.6 GiB 9.8 GiB]" time=2025-03-20T04:48:18.241Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=99 layers.model=63 layers.offload=52 layers.split=22,30 memory.available="[9.6 GiB 9.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.8 GiB" memory.required.partial="19.1 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[9.5 GiB 9.6 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.6 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB" time=2025-03-20T04:48:18.241Z level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]" time=2025-03-20T04:48:18.396Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-03-20T04:48:18.401Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false time=2025-03-20T04:48:18.401Z level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]" 
time=2025-03-20T04:48:18.407Z level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93 time=2025-03-20T04:48:18.407Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-03-20T04:48:18.412Z level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]" time=2025-03-20T04:48:18.418Z level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93 time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07 time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000 time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06 time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1 time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256 time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:335 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12 time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12] time=2025-03-20T04:48:18.418Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 2048 --batch-size 512 --n-gpu-layers 99 --verbose --threads 8 --no-mmap --parallel 1 --tensor-split 22,30 --port 34497" time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:423 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama/cuda_v12:/usr/lib/ollama CUDA_VISIBLE_DEVICES=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a,GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce]" time=2025-03-20T04:48:18.440Z level=INFO source=sched.go:450 msg="loaded runners" count=1 time=2025-03-20T04:48:18.440Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding" time=2025-03-20T04:48:18.441Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error" time=2025-03-20T04:48:18.710Z level=INFO source=runner.go:763 msg="starting ollama engine" time=2025-03-20T04:48:18.711Z level=INFO source=runner.go:823 msg="Server listening on 127.0.0.1:34497" time=2025-03-20T04:48:18.858Z level=WARN source=ggml.go:149 msg="key not found" key=general.name default="" time=2025-03-20T04:48:18.858Z level=WARN source=ggml.go:149 msg="key not found" key=general.description default="" time=2025-03-20T04:48:18.858Z level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=36 time=2025-03-20T04:48:18.859Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12 time=2025-03-20T04:48:18.943Z level=INFO 
source=server.go:614 msg="waiting for server to become available" status="llm server loading model" ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 2 CUDA devices: Device 0: NVIDIA P102-100, compute capability 6.1, VMM: yes Device 1: NVIDIA P102-100, compute capability 6.1, VMM: yes load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib64 time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0 ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 0 ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 0 ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 0 ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20 load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sandybridge.so time=2025-03-20T04:48:19.721Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=mm.mm_input_projection.weight shape="[5376 1152]" dtype=1 buffer_type=CUDA1 time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=mm.mm_soft_emb_norm.weight shape=[1152] dtype=0 buffer_type=CUDA1 time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=output_norm.weight shape=[5376] dtype=0 buffer_type=CUDA1 time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=token_embd.weight shape="[5376 262144]" dtype=14 buffer_type=CPU time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=output.weight shape="[5376 262144]" dtype=14 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_k.bias shape=[1152] dtype=0 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_output.bias shape=[1152] dtype=0 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_q.bias shape=[1152] dtype=0 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_v.bias shape=[1152] dtype=0 buffer_type=CUDA1 time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" 
name=v.blk.0.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1 ##########################################################Part2 `ggml_backend_cuda_buffer_type_alloc_buffer: allocating 10180.80 MiB on device 1: cudaMalloc failed: out of memory SIGSEGV: segmentation violation PC=0x5642219fee1d m=8 sigcode=1 addr=0x60 signal arrived during cgo execution goroutine 10 gp=0xc000582700 m=8 mp=0xc000600008 [syscall]: runtime.cgocall(0x564221a518d0, 0xc000047268) runtime/cgocall.go:167 +0x4b fp=0xc000047240 sp=0xc000047208 pc=0x564220c1d96b github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_buffer_set_usage(0x0, 0x1) _cgo_gotypes.go:249 +0x45 fp=0xc000047268 sp=0xc000047240 pc=0x564221016565 github.com/ollama/ollama/ml/backend/ggml.New.func12(...) github.com/ollama/ollama/ml/backend/ggml/ggml.go:284 github.com/ollama/ollama/ml/backend/ggml.New(0xc0001360e0, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0}) github.com/ollama/ollama/ml/backend/ggml/ggml.go:284 +0x18cb fp=0xc000047d58 sp=0xc000047268 pc=0x56422101cb4b github.com/ollama/ollama/ml.NewBackend(0xc0001360e0, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0}) github.com/ollama/ollama/ml/backend.go:91 +0x9c fp=0xc000047da8 sp=0xc000047d58 pc=0x564221010a3c github.com/ollama/ollama/model.New({0x7ffe04599c7b?, 0x0?}, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0}) github.com/ollama/ollama/model/model.go:104 +0xfb fp=0xc000047ee0 sp=0xc000047da8 pc=0x56422104a67b github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc0005c57a0, {0x7ffe04599c7b, 0x62}, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0}, ...) github.com/ollama/ollama/runner/ollamarunner/runner.go:689 +0x95 fp=0xc000047f40 sp=0xc000047ee0 pc=0x5642210d2c15 github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap1() github.com/ollama/ollama/runner/ollamarunner/runner.go:793 +0x91 fp=0xc000047fe0 sp=0xc000047f40 pc=0x5642210d40d1 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000047fe8 sp=0xc000047fe0 pc=0x564220c283a1 created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1 github.com/ollama/ollama/runner/ollamarunner/runner.go:793 +0x9c5 goroutine 1 gp=0xc000002380 m=nil [IO wait]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc0005cf648 sp=0xc0005cf628 pc=0x564220c20c6e runtime.netpollblock(0xc0005cf698?, 0x20bba426?, 0x42?) runtime/netpoll.go:575 +0xf7 fp=0xc0005cf680 sp=0xc0005cf648 pc=0x564220be5a57 internal/poll.runtime_pollWait(0x7f5479521eb0, 0x72) runtime/netpoll.go:351 +0x85 fp=0xc0005cf6a0 sp=0xc0005cf680 pc=0x564220c1fe85 internal/poll.(*pollDesc).wait(0xc000133c80?, 0x900000036?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005cf6c8 sp=0xc0005cf6a0 pc=0x564220ca7307 internal/poll.(*pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Accept(0xc000133c80) internal/poll/fd_unix.go:620 +0x295 fp=0xc0005cf770 sp=0xc0005cf6c8 pc=0x564220cac6d5 net.(*netFD).accept(0xc000133c80) net/fd_unix.go:172 +0x29 fp=0xc0005cf828 sp=0xc0005cf770 pc=0x564220d1f4e9 net.(*TCPListener).accept(0xc000142880) net/tcpsock_posix.go:159 +0x1b fp=0xc0005cf878 sp=0xc0005cf828 pc=0x564220d34e9b net.(*TCPListener).Accept(0xc000142880) net/tcpsock.go:380 +0x30 fp=0xc0005cf8a8 sp=0xc0005cf878 pc=0x564220d33d50 net/http.(*onceCloseListener).Accept(0xc0004b81b0?) 
<autogenerated>:1 +0x24 fp=0xc0005cf8c0 sp=0xc0005cf8a8 pc=0x564220f4b384 net/http.(*Server).Serve(0xc0001f1500, {0x564221efad58, 0xc000142880}) net/http/server.go:3424 +0x30c fp=0xc0005cf9f0 sp=0xc0005cf8c0 pc=0x564220f22c4c github.com/ollama/ollama/runner/ollamarunner.Execute({0xc000034190, 0x12, 0x13}) github.com/ollama/ollama/runner/ollamarunner/runner.go:824 +0xe29 fp=0xc0005cfd08 sp=0xc0005cf9f0 pc=0x5642210d3d49 github.com/ollama/ollama/runner.Execute({0xc000034170?, 0x0?, 0x0?}) github.com/ollama/ollama/runner/runner.go:20 +0xc9 fp=0xc0005cfd30 sp=0xc0005cfd08 pc=0x5642210d49a9 github.com/ollama/ollama/cmd.NewCLI.func2(0xc0001f1200?, {0x564221a6d053?, 0x4?, 0x564221a6d057?}) github.com/ollama/ollama/cmd/cmd.go:1327 +0x45 fp=0xc0005cfd58 sp=0xc0005cfd30 pc=0x564221822625 github.com/spf13/cobra.(*Command).execute(0xc0004baf08, {0xc000495180, 0x13, 0x14}) github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc0005cfe78 sp=0xc0005cfd58 pc=0x564220d98b3c github.com/spf13/cobra.(*Command).ExecuteC(0xc0004a6908) github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc0005cff30 sp=0xc0005cfe78 pc=0x564220d99385 github.com/spf13/cobra.(*Command).Execute(...) github.com/spf13/cobra@v1.7.0/command.go:992 github.com/spf13/cobra.(*Command).ExecuteContext(...) github.com/spf13/cobra@v1.7.0/command.go:985 main.main() github.com/ollama/ollama/main.go:12 +0x4d fp=0xc0005cff50 sp=0xc0005cff30 pc=0x56422182298d runtime.main() runtime/proc.go:283 +0x29d fp=0xc0005cffe0 sp=0xc0005cff50 pc=0x564220bed05d runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0005cffe8 sp=0xc0005cffe0 pc=0x564220c283a1 goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000070fa8 sp=0xc000070f88 pc=0x564220c20c6e runtime.goparkunlock(...) runtime/proc.go:441 runtime.forcegchelper() runtime/proc.go:348 +0xb8 fp=0xc000070fe0 sp=0xc000070fa8 pc=0x564220bed398 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000070fe8 sp=0xc000070fe0 pc=0x564220c283a1 created by runtime.init.7 in goroutine 1 runtime/proc.go:336 +0x1a goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000071780 sp=0xc000071760 pc=0x564220c20c6e runtime.goparkunlock(...) runtime/proc.go:441 runtime.bgsweep(0xc000040080) runtime/mgcsweep.go:316 +0xdf fp=0xc0000717c8 sp=0xc000071780 pc=0x564220bd7a5f runtime.gcenable.gowrap1() runtime/mgc.go:204 +0x25 fp=0xc0000717e0 sp=0xc0000717c8 pc=0x564220bcbe45 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000717e8 sp=0xc0000717e0 pc=0x564220c283a1 created by runtime.gcenable in goroutine 1 runtime/mgc.go:204 +0x66 goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]: runtime.gopark(0x10000?, 0x564221c24118?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000071f78 sp=0xc000071f58 pc=0x564220c20c6e runtime.goparkunlock(...) 
runtime/proc.go:441 runtime.(*scavengerState).park(0x564222762b20) runtime/mgcscavenge.go:425 +0x49 fp=0xc000071fa8 sp=0xc000071f78 pc=0x564220bd54a9 runtime.bgscavenge(0xc000040080) runtime/mgcscavenge.go:658 +0x59 fp=0xc000071fc8 sp=0xc000071fa8 pc=0x564220bd5a39 runtime.gcenable.gowrap2() runtime/mgc.go:205 +0x25 fp=0xc000071fe0 sp=0xc000071fc8 pc=0x564220bcbde5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x564220c283a1 created by runtime.gcenable in goroutine 1 runtime/mgc.go:205 +0xa5 goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]: runtime.gopark(0x1b8?, 0xc000002380?, 0x1?, 0x23?, 0xc000070688?) runtime/proc.go:435 +0xce fp=0xc000070630 sp=0xc000070610 pc=0x564220c20c6e runtime.runfinq() runtime/mfinal.go:196 +0x107 fp=0xc0000707e0 sp=0xc000070630 pc=0x564220bcae07 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000707e8 sp=0xc0000707e0 pc=0x564220c283a1 created by runtime.createfing in goroutine 1 runtime/mfinal.go:166 +0x3d goroutine 6 gp=0xc0001d08c0 m=nil [chan receive]: runtime.gopark(0xc00022b540?, 0xc00011e018?, 0x60?, 0x27?, 0x564220d06228?) runtime/proc.go:435 +0xce fp=0xc000072718 sp=0xc0000726f8 pc=0x564220c20c6e runtime.chanrecv(0xc00003e3f0, 0x0, 0x1) runtime/chan.go:664 +0x445 fp=0xc000072790 sp=0xc000072718 pc=0x564220bbd005 runtime.chanrecv1(0x0?, 0x0?) runtime/chan.go:506 +0x12 fp=0xc0000727b8 sp=0xc000072790 pc=0x564220bbcb92 runtime.unique_runtime_registerUniqueMapCleanup.func2(...) runtime/mgc.go:1796 runtime.unique_runtime_registerUniqueMapCleanup.gowrap1() runtime/mgc.go:1799 +0x2f fp=0xc0000727e0 sp=0xc0000727b8 pc=0x564220bcefef runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000727e8 sp=0xc0000727e0 pc=0x564220c283a1 created by unique.runtime_registerUniqueMapCleanup in goroutine 1 runtime/mgc.go:1794 +0x85 goroutine 7 gp=0xc0001d1340 m=nil [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000072f38 sp=0xc000072f18 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc000072fc8 sp=0xc000072f38 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000072fe0 sp=0xc000072fc8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000072fe8 sp=0xc000072fe0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]: runtime.gopark(0x564222811280?, 0x1?, 0x64?, 0x1b?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00006c738 sp=0xc00006c718 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc00006c7c8 sp=0xc00006c738 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00006c7e0 sp=0xc00006c7c8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]: runtime.gopark(0x3987fda0d37?, 0x3?, 0xf4?, 0x3d?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 8 gp=0xc0001d1500 m=nil [GC worker (idle)]: runtime.gopark(0x3987fda1cb6?, 0x3?, 0x50?, 0x39?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000073738 sp=0xc000073718 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc0000737c8 sp=0xc000073738 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc0000737e0 sp=0xc0000737c8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0000737e8 sp=0xc0000737e0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle)]: runtime.gopark(0x3987fda0a73?, 0x3?, 0xb5?, 0x68?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00006cf38 sp=0xc00006cf18 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc00006cfc8 sp=0xc00006cf38 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00006cfe0 sp=0xc00006cfc8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00006cfe8 sp=0xc00006cfe0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]: runtime.gopark(0x3987fda055e?, 0x3?, 0xd?, 0xbc?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 9 gp=0xc0001d16c0 m=nil [GC worker (idle)]: runtime.gopark(0x3987fda0f0e?, 0x3?, 0x70?, 0x30?, 0x0?) runtime/proc.go:435 +0xce fp=0xc000073f38 sp=0xc000073f18 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc000073fc8 sp=0xc000073f38 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc000073fe0 sp=0xc000073fc8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc000073fe8 sp=0xc000073fe0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 20 gp=0xc000504380 m=nil [GC worker (idle)]: runtime.gopark(0x3987fda05ec?, 0x3?, 0x1c?, 0x50?, 0x0?) 
runtime/proc.go:435 +0xce fp=0xc00006d738 sp=0xc00006d718 pc=0x564220c20c6e runtime.gcBgMarkWorker(0xc00003f9d0) runtime/mgc.go:1423 +0xe9 fp=0xc00006d7c8 sp=0xc00006d738 pc=0x564220bce309 runtime.gcBgMarkStartWorkers.gowrap1() runtime/mgc.go:1339 +0x25 fp=0xc00006d7e0 sp=0xc00006d7c8 pc=0x564220bce1e5 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00006d7e8 sp=0xc00006d7e0 pc=0x564220c283a1 created by runtime.gcBgMarkStartWorkers in goroutine 1 runtime/mgc.go:1339 +0x105 goroutine 11 gp=0xc0005828c0 m=nil [sync.WaitGroup.Wait]: runtime.gopark(0x0?, 0x0?, 0x0?, 0xc0?, 0x0?) runtime/proc.go:435 +0xce fp=0xc00011d6d0 sp=0xc00011d6b0 pc=0x564220c20c6e runtime.goparkunlock(...) runtime/proc.go:441 runtime.semacquire1(0xc0005c57a8, 0x0, 0x1, 0x0, 0x18) runtime/sema.go:188 +0x229 fp=0xc00011d738 sp=0xc00011d6d0 pc=0x564220c00629 sync.runtime_SemacquireWaitGroup(0x0?) runtime/sema.go:110 +0x25 fp=0xc00011d770 sp=0xc00011d738 pc=0x564220c22685 sync.(*WaitGroup).Wait(0x0?) sync/waitgroup.go:118 +0x48 fp=0xc00011d798 sp=0xc00011d770 pc=0x564220c33e08 github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0005c57a0, {0x564221efd020, 0xc0005ada40}) github.com/ollama/ollama/runner/ollamarunner/runner.go:329 +0x25 fp=0xc00011d7b8 sp=0xc00011d798 pc=0x5642210cfce5 github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap2() github.com/ollama/ollama/runner/ollamarunner/runner.go:800 +0x28 fp=0xc00011d7e0 sp=0xc00011d7b8 pc=0x5642210d4008 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc00011d7e8 sp=0xc00011d7e0 pc=0x564220c283a1 created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1 github.com/ollama/ollama/runner/ollamarunner/runner.go:800 +0xa9c goroutine 12 gp=0xc000102fc0 m=nil [IO wait]: runtime.gopark(0x564220caa905?, 0xc000132100?, 0x40?, 0xda?, 0xb?) runtime/proc.go:435 +0xce fp=0xc0005cd948 sp=0xc0005cd928 pc=0x564220c20c6e runtime.netpollblock(0x564220c440f8?, 0x20bba426?, 0x42?) runtime/netpoll.go:575 +0xf7 fp=0xc0005cd980 sp=0xc0005cd948 pc=0x564220be5a57 internal/poll.runtime_pollWait(0x7f5479521d98, 0x72) runtime/netpoll.go:351 +0x85 fp=0xc0005cd9a0 sp=0xc0005cd980 pc=0x564220c1fe85 internal/poll.(*pollDesc).wait(0xc000132100?, 0xc002f86000?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005cd9c8 sp=0xc0005cd9a0 pc=0x564220ca7307 internal/poll.(*pollDesc).waitRead(...) 
internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000132100, {0xc002f86000, 0x1000, 0x1000}) internal/poll/fd_unix.go:165 +0x27a fp=0xc0005cda60 sp=0xc0005cd9c8 pc=0x564220ca85fa net.(*netFD).Read(0xc000132100, {0xc002f86000?, 0xc0005cdad0?, 0x564220ca77c5?}) net/fd_posix.go:55 +0x25 fp=0xc0005cdaa8 sp=0xc0005cda60 pc=0x564220d1d545 net.(*conn).Read(0xc0005a4010, {0xc002f86000?, 0x0?, 0x0?}) net/net.go:194 +0x45 fp=0xc0005cdaf0 sp=0xc0005cdaa8 pc=0x564220d2b905 net/http.(*connReader).Read(0xc0000b06c0, {0xc002f86000, 0x1000, 0x1000}) net/http/server.go:798 +0x159 fp=0xc0005cdb40 sp=0xc0005cdaf0 pc=0x564220f17af9 bufio.(*Reader).fill(0xc0001101e0) bufio/bufio.go:113 +0x103 fp=0xc0005cdb78 sp=0xc0005cdb40 pc=0x564220d430a3 bufio.(*Reader).Peek(0xc0001101e0, 0x4) bufio/bufio.go:152 +0x53 fp=0xc0005cdb98 sp=0xc0005cdb78 pc=0x564220d431d3 net/http.(*conn).serve(0xc0004b81b0, {0x564221efcfe8, 0xc000704840}) net/http/server.go:2137 +0x785 fp=0xc0005cdfb8 sp=0xc0005cdb98 pc=0x564220f1d8e5 net/http.(*Server).Serve.gowrap3() net/http/server.go:3454 +0x28 fp=0xc0005cdfe0 sp=0xc0005cdfb8 pc=0x564220f23048 runtime.goexit({}) runtime/asm_amd64.s:1700 +0x1 fp=0xc0005cdfe8 sp=0xc0005cdfe0 pc=0x564220c283a1 created by net/http.(*Server).Serve in goroutine 1 net/http/server.go:3454 +0x485 rax 0x564221a518d0 rbx 0xc000047268 rcx 0xffffffffffffffd8 rdx 0xc0000471f8 rdi 0x0 rsi 0x1 rbp 0x0 rsp 0x7f54727fbe00 r8 0xc000600008 r9 0x0 r10 0x7f5400e00b4b r11 0x0 r12 0x1 r13 0x0 r14 0xc000582700 r15 0x5642210d41a0 rip 0x5642219fee1d rflags 0x10206 cs 0x33 fs 0x0 gs 0x0 time=2025-03-20T04:48:19.815Z level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 2" time=2025-03-20T04:48:19.947Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory"`
Author
Owner

@alsimms commented on GitHub (Mar 20, 2025):

Here is the 27B debug
gemma3-27b-q4_K_M_debug.txt

<!-- gh-comment-id:2739200856 --> @alsimms commented on GitHub (Mar 20, 2025): Here is the 27B debug [gemma3-27b-q4_K_M_debug.txt](https://github.com/user-attachments/files/19359375/gemma3-27b-q4_K_M_debug.txt)
Author
Owner

@alsimms commented on GitHub (Mar 20, 2025):

Interesting, I can load the unsloth model but not the one directly from ollama.

Image

<!-- gh-comment-id:2739260789 --> @alsimms commented on GitHub (Mar 20, 2025): Interesting, I can load the unsloth model but not the one directly from ollama. ![Image](https://github.com/user-attachments/assets/a22db553-cd12-4dbd-881e-20f03140af51)
Author
Owner

@NandaIda commented on GitHub (Mar 20, 2025):

Interesting, I can load the unsloth model but not the one directly from ollama.

Image

Can you process an image input using the unsloth model?

<!-- gh-comment-id:2739333692 --> @NandaIda commented on GitHub (Mar 20, 2025): > Interesting, I can load the unsloth model but not the one directly from ollama. > > ![Image](https://github.com/user-attachments/assets/a22db553-cd12-4dbd-881e-20f03140af51) Can you process an image input using the unsloth?
Author
Owner

@rick-github commented on GitHub (Mar 20, 2025):

time=2025-03-20T04:48:18.241Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=99 layers.model=63 layers.offload=52 layers.split=22,30 memory.available="[9.6 GiB 9.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.8 GiB" memory.required.partial="19.1 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[9.5 GiB 9.6 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.6 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"

ollama is over-allocating layers to the GPUs: with [9.6 GiB 9.8 GiB] available and [9.5 GiB 9.6 GiB] allocated, there isn't much margin left. See here for ways to mitigate this.
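To make the margin concrete, here is the arithmetic from that offload line (a trivial sketch using only the numbers quoted above):

```python
# Per-GPU numbers quoted from the offload log line above, in GiB.
available = [9.6, 9.8]   # memory.available
allocated = [9.5, 9.6]   # memory.required.allocations

for gpu, (avail, alloc) in enumerate(zip(available, allocated)):
    print(f"GPU {gpu}: {avail - alloc:.1f} GiB of headroom")
# ~0.1-0.2 GiB of headroom per card leaves very little slack for the CUDA
# context and compute buffers once the model actually loads.
```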

Note this is different to the ggml_backend_sched_graph_compute_async() crashes which are the bulk of the reports in this issue.

<!-- gh-comment-id:2740964734 --> @rick-github commented on GitHub (Mar 20, 2025): ``` time=2025-03-20T04:48:18.241Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=99 layers.model=63 layers.offload=52 layers.split=22,30 memory.available="[9.6 GiB 9.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.8 GiB" memory.required.partial="19.1 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[9.5 GiB 9.6 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.6 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB" ``` ollama is over-allocating layers to the GPU: available [9.6 GiB 9.8 GiB] allocating [9.5 GiB 9.6 GiB] doesn't leave much margin. See [here](https://github.com/ollama/ollama/issues/8597#issuecomment-2614533288) for ways to mitigate this. Note this is different to the `ggml_backend_sched_graph_compute_async()` crashes which are the bulk of the reports in this issue.
Author
Owner

@Kazunarit commented on GitHub (Mar 20, 2025):

Image

Thanks to everyone who is working on this issue.

My chatbot application runs about 3000 text chats per batch, but with ollama 0.6.2 Gemma3:27b stops midway due to a memory allocation error.
There seems to be a memory leak problem when running Gemma3:27b.

In the sample program, when "tell me a story" is repeated about 50 times, the docker desktop container memory usage increases by about 7-8GB.
The requests are made with requests.post(OLLAMA_API_URL, ...) and "options": {"num_ctx": 8192} is specified (the same as in my application).
Image data is not used.

With Gemma2:27b and Qwen2.5:32b there is little or no memory increase, and no error occurs.

This may not be directly related to this phenomenon, but when running Gemma3 the CPU usage rises to about 40%.
Other models stay at around 10% and run mostly on the GPU.

I hope this helps with debugging.

RTX4090 CUDA NVIDIA APP v11.0.2.341, 64GB RAM
ollama 0.6.0 to 0.6.2
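A minimal sketch of the kind of loop described above, in case anyone wants to reproduce it (it assumes the default local endpoint and the gemma3:27b tag; the prompt, iteration count, and URL are only illustrative):

```python
import requests

# Assumed default local endpoint; adjust if the server runs elsewhere.
OLLAMA_API_URL = "http://localhost:11434/api/generate"

for i in range(50):
    resp = requests.post(
        OLLAMA_API_URL,
        json={
            "model": "gemma3:27b",
            "prompt": "tell me a story",
            "stream": False,
            # Same context size as reported in the comment above.
            "options": {"num_ctx": 8192},
        },
        timeout=600,
    )
    resp.raise_for_status()
    print(f"run {i + 1}: {resp.json()['eval_count']} tokens generated")
    # Watch the container / ollama process memory between iterations to see
    # whether host memory keeps growing as described.
```

Running the same loop against gemma2:27b or qwen2.5:32b should show whether the host-memory growth is specific to gemma3.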

<!-- gh-comment-id:2741910637 --> @Kazunarit commented on GitHub (Mar 20, 2025): ![Image](https://github.com/user-attachments/assets/458a6a66-a332-4231-9f2d-95fd5e3b2a55) Thanks to everyone who is working on this issue. My ChatBot app system executes about 3000 text chats per batch, but Gemma3:27b stops midway with ollama 0.6.2 due to a memory allocation error. There seems to be a memory leak problem when running Gemma3:27b. In the sample program, when "tell me a story" is repeated about 50 times, the docker desktop container memory usage increases by about 7-8GB. requests.post(OLLAMA_API_URL...) "options": {"num_ctx": 8192} is specified. (same as my application) Image data is not used. In Gemma2:27b and Qwen2.5:32b, there is no memory increase or it is only slight, and no error occurs. This may not be directly related to this phenomenon, but when running Gemma3, the CPU increases about 40%. Other models run at about 10% and basically on the GPU. I hope this helps with debugging. RTX4090 CUDA NVIDIA APP v11.0.2.341, 64GB RAM ollama 0.6.0 to 0.6.2
Author
Owner

@bjj commented on GitHub (Mar 21, 2025):

Bisected the commits between 0.6.0 and 0.6.1 and token generation rate falls 25% at a422ba3.

but did that also make vision work?

<!-- gh-comment-id:2742091596 --> @bjj commented on GitHub (Mar 21, 2025): > Bisected the commits between 0.6.0 and 0.6.1 and token generation rate falls 25% at [a422ba3](https://github.com/ollama/ollama/commit/a422ba39c94adc870da84e5fa442c0bf81c77f27). but did that also make vision work?
Author
Owner

@rick-github commented on GitHub (Mar 21, 2025):

Vision has always worked.

$ OLLAMA_DOCKER_TAG=0.6.0  OLLAMA_KEEP_ALIVE=-1 OLLAMA_NUM_PARALLEL=1 docker compose up -d ollama
[+] Running 1/1
 ✔ Container ollama  Started                                                                                                                                                 0.5s 
$ ollama run gemma3:12b describe this image: ./puppy.jpg 
Added image './puppy.jpg'
Here's a description of the image:

**Overall Impression:**

The image features an adorable, fluffy, all-white puppy sitting on a stone surface. It's a close-up shot, focusing entirely on the puppy.

**Details:**

*   **Puppy:** The puppy is small and fluffy, with a thick, white coat. It has dark eyes and a slightly tilted head, giving it a curious and endearing expression. It's wearing a bright red collar with a golden 
bell.
*   **Background:** The background is blurred, suggesting a stone or concrete surface, possibly a patio or steps. There's a hint of greenery and a dark, indistinct area behind the puppy.
*   **Lighting:** The lighting appears soft and natural, highlighting the puppy's fur and features.
*   **Composition:** The puppy is centered in the frame, drawing the viewer's attention directly to it. The close-up perspective emphasizes its cuteness and vulnerability.

**Overall Tone:**

The image evokes feelings of tenderness, innocence, and charm. It's a very appealing and heartwarming picture.
$ OLLAMA_DOCKER_TAG=0.6.1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_NUM_PARALLEL=1 docker compose up -d ollama
[+] Running 1/1
 ✔ Container ollama  Started                                                                                                                                                0.8s 
$ ollama run gemma3:12b describe this image: ./puppy.jpg 
Added image './puppy.jpg'
Here's a description of the image:

**Overall Impression:**

The image features an adorable, fluffy, all-white puppy sitting on a textured stone surface. The puppy is the clear focal point, and the background is blurred, drawing attention to its cuteness.

**Details:**

*   **Puppy:** The puppy is small and fluffy, with a thick, white coat. It has dark eyes and a slightly tilted head, giving it a curious and endearing expression. It's wearing a bright red collar with a small, 
golden bell attached.
*   **Surface:** The puppy is sitting on a stone surface that appears to be a step or a patio. The stone has a rough, textured appearance.
*   **Background:** The background is out of focus, suggesting a blurred outdoor setting. It appears to be a dark, possibly wooden, structure.
*   **Lighting:** The lighting is soft and diffused, highlighting the puppy's fur and creating a gentle mood.

**Overall Tone:**

The image evokes feelings of warmth, cuteness, and tenderness. It's a charming portrait of a young, innocent creature.
$ OLLAMA_DOCKER_TAG=0.6.2 OLLAMA_KEEP_ALIVE=-1 OLLAMA_NUM_PARALLEL=1 docker compose up -d ollama
[+] Running 1/1
 ✔ Container ollama  Started                                                                                                                                                    0.8s 
$ ollama run gemma3:12b describe this image: ./puppy.jpg 
Added image './puppy.jpg'
Here's a description of the image:

**Overall Impression:**

The image is a close-up, vertical shot of an adorable, fluffy, all-white puppy. It's a heartwarming and charming portrait.

**Details:**

*   **Puppy:** The puppy is the clear focal point. It has a fluffy, almost cloud-like coat of pure white fur. Its ears are floppy and its eyes are dark and expressive. It's wearing a bright red collar with a 
small golden bell attached.
*   **Background:** The puppy is sitting on a textured, gray stone surface, possibly a patio or steps. The background is blurred, suggesting a shallow depth of field, which helps to keep the puppy in focus. 
There's a hint of greenery and a dark, indistinct structure in the background.
*   **Lighting:** The lighting appears to be soft and natural, highlighting the puppy's fur and giving it a gentle glow.
*   **Composition:** The puppy is positioned slightly off-center, which creates a more dynamic and visually appealing composition.

**Overall Tone:**

The image evokes feelings of cuteness, innocence, and warmth. It's a delightful portrait of a young, fluffy companion.
<!-- gh-comment-id:2742115440 --> @rick-github commented on GitHub (Mar 21, 2025): Vision has always worked. ```console $ OLLAMA_DOCKER_TAG=0.6.0 OLLAMA_KEEP_ALIVE=-1 OLLAMA_NUM_PARALLEL=1 docker compose up -d ollama [+] Running 1/1 ✔ Container ollama Started 0.5s $ ollama run gemma3:12b describe this image: ./puppy.jpg Added image './puppy.jpg' Here's a description of the image: **Overall Impression:** The image features an adorable, fluffy, all-white puppy sitting on a stone surface. It's a close-up shot, focusing entirely on the puppy. **Details:** * **Puppy:** The puppy is small and fluffy, with a thick, white coat. It has dark eyes and a slightly tilted head, giving it a curious and endearing expression. It's wearing a bright red collar with a golden bell. * **Background:** The background is blurred, suggesting a stone or concrete surface, possibly a patio or steps. There's a hint of greenery and a dark, indistinct area behind the puppy. * **Lighting:** The lighting appears soft and natural, highlighting the puppy's fur and features. * **Composition:** The puppy is centered in the frame, drawing the viewer's attention directly to it. The close-up perspective emphasizes its cuteness and vulnerability. **Overall Tone:** The image evokes feelings of tenderness, innocence, and charm. It's a very appealing and heartwarming picture. ``` ```console $ OLLAMA_DOCKER_TAG=0.6.1 OLLAMA_KEEP_ALIVE=-1 OLLAMA_NUM_PARALLEL=1 docker compose up -d ollama [+] Running 1/1 ✔ Container ollama Started 0.8s $ ollama run gemma3:12b describe this image: ./puppy.jpg Added image './puppy.jpg' Here's a description of the image: **Overall Impression:** The image features an adorable, fluffy, all-white puppy sitting on a textured stone surface. The puppy is the clear focal point, and the background is blurred, drawing attention to its cuteness. **Details:** * **Puppy:** The puppy is small and fluffy, with a thick, white coat. It has dark eyes and a slightly tilted head, giving it a curious and endearing expression. It's wearing a bright red collar with a small, golden bell attached. * **Surface:** The puppy is sitting on a stone surface that appears to be a step or a patio. The stone has a rough, textured appearance. * **Background:** The background is out of focus, suggesting a blurred outdoor setting. It appears to be a dark, possibly wooden, structure. * **Lighting:** The lighting is soft and diffused, highlighting the puppy's fur and creating a gentle mood. **Overall Tone:** The image evokes feelings of warmth, cuteness, and tenderness. It's a charming portrait of a young, innocent creature. ``` ```console $ OLLAMA_DOCKER_TAG=0.6.2 OLLAMA_KEEP_ALIVE=-1 OLLAMA_NUM_PARALLEL=1 docker compose up -d ollama [+] Running 1/1 ✔ Container ollama Started 0.8s $ ollama run gemma3:12b describe this image: ./puppy.jpg Added image './puppy.jpg' Here's a description of the image: **Overall Impression:** The image is a close-up, vertical shot of an adorable, fluffy, all-white puppy. It's a heartwarming and charming portrait. **Details:** * **Puppy:** The puppy is the clear focal point. It has a fluffy, almost cloud-like coat of pure white fur. Its ears are floppy and its eyes are dark and expressive. It's wearing a bright red collar with a small golden bell attached. * **Background:** The puppy is sitting on a textured, gray stone surface, possibly a patio or steps. The background is blurred, suggesting a shallow depth of field, which helps to keep the puppy in focus. 
There's a hint of greenery and a dark, indistinct structure in the background. * **Lighting:** The lighting appears to be soft and natural, highlighting the puppy's fur and giving it a gentle glow. * **Composition:** The puppy is positioned slightly off-center, which creates a more dynamic and visually appealing composition. **Overall Tone:** The image evokes feelings of cuteness, innocence, and warmth. It's a delightful portrait of a young, fluffy companion. ```
Author
Owner

@rick-github commented on GitHub (Mar 22, 2025):

Image

<!-- gh-comment-id:2745200101 --> @rick-github commented on GitHub (Mar 22, 2025): ![Image](https://github.com/user-attachments/assets/0c689cbc-561d-422b-8bb1-2bfae06612aa)
Author
Owner

@rick-github commented on GitHub (Mar 22, 2025):

q4_0 and q8_0 KV cache quantization still see a performance hit.

Image
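For anyone reproducing those runs: the KV cache quantization type is a server-side setting rather than a per-request option, and q8_0/q4_0 require flash attention to be enabled. A minimal sketch for a native install (the docker compose setup shown earlier in the thread would need the same variables passed into the container):

```console
# f16 is the default KV cache type; q8_0 / q4_0 require flash attention
$ OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```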

<!-- gh-comment-id:2745205101 --> @rick-github commented on GitHub (Mar 22, 2025): q4_0 and q8_0 KV quant still see a performance hit. ![Image](https://github.com/user-attachments/assets/44383f76-b680-464b-b23b-38f78e8c8986)
Author
Owner

@ultramarinebicycle commented on GitHub (Mar 23, 2025):

@rick-github for me, performance is now acceptable (the PC no longer becomes unresponsive at 8k context), but GPU memory allocation still seems to be wonky:

At 2k context:
gemma3:12b 6fd036cefda5 12 GB 7%/93% CPU/GPU
17 t/s. VRAM usage is around 8 GB and RAM usage around 10 GB.

At 8k context:
gemma3:12b 6fd036cefda5 14 GB 23%/77% CPU/GPU
7 t/s. VRAM usage is around 7 GB and RAM usage around 12 GB.

<!-- gh-comment-id:2746223825 --> @ultramarinebicycle commented on GitHub (Mar 23, 2025): @rick-github for me, performance is acceptable (PC no longer becomes unresponsive at 8k context) now but GPU memory allocation still seems to be wonky: At 2k context: gemma3:12b 6fd036cefda5 12 GB 7%/93% CPU/GPU 17t/s. VRAM usage is around 8GB and RAM usage at 10. At 8k context: gemma3:12b 6fd036cefda5 14 GB 23%/77% CPU/GPU 7t/s VRAM usage is around 7GB and RAM usage at 12.
Author
Owner

@rick-github commented on GitHub (Mar 23, 2025):

Yes, the changes that reduced the size of the context buffer have made it harder for ollama to estimate usage compared to what the GPU backend actually allocates. The ollama team is aware of this; I assume the estimation logic will receive some attention in the next couple of releases. In the meantime you can improve VRAM utilization by overriding num_gpu.
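If it helps while waiting for the estimation fix, here is a minimal sketch of that workaround, assuming gemma3:12b and an illustrative value of 40 offloaded layers (the right number depends on how much VRAM is actually free on your card):

```console
# interactively, inside `ollama run`:
$ ollama run gemma3:12b
>>> /set parameter num_gpu 40
>>> /set parameter num_ctx 8192

# or per-request through the API:
$ curl http://localhost:11434/api/generate -d '{
    "model": "gemma3:12b",
    "prompt": "Tell me a story",
    "options": { "num_gpu": 40, "num_ctx": 8192 }
  }'
```

Raising num_gpu until the server log reports allocation failures, then backing off a layer or two, is a quick way to find the ceiling for your setup.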

<!-- gh-comment-id:2746241796 --> @rick-github commented on GitHub (Mar 23, 2025): Yes, the changes that have reduced the size of the context buffer have made it harder for ollama to estimate the usage compared to what the GPU backend actually allocates. The ollama team is aware of this, I assume the estimation logic will receive some attention in the next couple of releases. In the meantime you can improve VRAM utlization by [overriding](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650) `num_gpu`.
Author
Owner

@rick-github commented on GitHub (Mar 26, 2025):

#9987 has been merged; ollama:0.6.3-rc0 goes from estimating 17G for gemma3:12b + 16K cache down to 12G. nvidia-smi shows there's still room for improvement, but it's getting there.

<!-- gh-comment-id:2755771294 --> @rick-github commented on GitHub (Mar 26, 2025): #9987 has been merged, ollama:0.6.3-rc0 goes from estimating 17G for gemma3:12b+16K cache to 12G. nvidia-smi shows there's still room for improvement but it's getting there.
Author
Owner

@jessegross commented on GitHub (Mar 26, 2025):

@rick-github One thing to be aware of is that the old engine preallocates the worst case computation graph (max context + max batch) whereas the new engine currently does not. The KV cache is preallocated for the full context in both cases though.

As a result, unless you have exercised the worst case, nvidia-smi will underreport the total amount of memory that may be needed, which is what ollama ps is showing. The memory consumption will stay at the high water mark of a batch until the runner process is restarted.

There is definitely still a gap between the estimate and actual worst case usage but it might not be quite as large as it seems.

The behavior of not preallocating the worst case may change in the future but that's the way it is now.
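For anyone comparing numbers, a rough way to see that high-water-mark behaviour is to push one request close to the full context window and check the reported memory before and after. In the sketch below, long_document.txt is just a placeholder for any text long enough to fill whatever num_ctx you are running with:

```console
# baseline, with the model already loaded at your usual num_ctx
$ ollama ps
$ nvidia-smi --query-gpu=memory.used --format=csv

# push one request close to the full context window
$ ollama run gemma3:12b "Summarize this: $(cat long_document.txt)"

# backend allocation should now be at (or near) its high-water mark
$ nvidia-smi --query-gpu=memory.used --format=csv
$ ollama ps
```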

<!-- gh-comment-id:2755958292 --> @jessegross commented on GitHub (Mar 26, 2025): @rick-github One thing to be aware of is that the old engine preallocates the worst case computation graph (max context + max batch) whereas the new engine currently does not. The KV cache is preallocated for the full context in both cases though. As a result, unless you have exercised the worst case, `nvidia-smi` will underreport the total amount of memory that may be needed, which is what `ollama ps` is showing. The memory consumption will stay at the high water mark of a batch until the runner process is restarted. There is definitely still a gap between the estimate and actual worst case usage but it might not be quite as large as it seems. The behavior of not preallocating the worst case may change in the future but that's the way it is now.
Author
Owner

@Kazunarit commented on GitHub (Mar 31, 2025):

Image

Run with 0.6.3.
When running the sample program that repeats "tell me a story", memory usage continues to increase for Gemma3:12b and Gemma3:27b.
It does not increase for other models.

In the attached graph, gemma3:12b is running from around 10:52,
phi4 from just after 11:01,
deepseek-r1:14b from around 11:07.

This growing memory usage will eventually cause crashes on systems that run continuously.
I would appreciate it if you could address this issue.


```python
import requests
import json
import time

# Global Constants
OLLAMA_API_URL = "http://localhost:11434/api/chat"
OLLAMA_MODEL = "gemma3:12b"
REQUEST_COUNT = 1000     # Number of requests
CONTEXT_LENGTH = 8192    # Context length


def send_request(count):
    """
    Sends a request to the Ollama API and processes the streaming response.
    """
    data = {
        "model": OLLAMA_MODEL,
        "messages": [{"role": "user", "content": "Tell me a story"}],
        "stream": True,
        "options": {"num_ctx": CONTEXT_LENGTH}  # fixed context length for every request
    }

    print(f"\n===== Request {count}/{REQUEST_COUNT} =====")

    try:
        response = requests.post(OLLAMA_API_URL, json=data, stream=True)
        response.raise_for_status()

        print(f"\nStory #{count}:\n")

        # Process streaming response line by line
        for line in response.iter_lines():
            if line:
                try:
                    chunk = json.loads(line)

                    if "message" in chunk and "content" in chunk["message"]:
                        print(chunk["message"]["content"], end="", flush=True)

                    if chunk.get("done"):
                        break

                except json.JSONDecodeError as e:
                    print(f"\nJSON Decode Error: {e}")
                    print(f"Problematic line: {line}")
                    continue

        print("\n")  # Ensure newline after story
        return True

    except requests.exceptions.RequestException as e:
        print(f"Error: API request failed: {e}")
        return False


def main():
    """
    Main execution function.
    """
    print("Starting Ollama API requests")
    print(f"API URL: {OLLAMA_API_URL}")
    print(f"Model: {OLLAMA_MODEL}")
    print(f"Context Length: {CONTEXT_LENGTH}")
    print(f"Total Requests: {REQUEST_COUNT}")
    print("=" * 50)

    for count in range(1, REQUEST_COUNT + 1):
        if not send_request(count):
            break  # Stop execution on API failure
        time.sleep(1)  # Delay between requests to avoid overload

    print(f"\nExecution complete. {count} stories requested.")


if __name__ == "__main__":
    main()
```

<!-- gh-comment-id:2764935855 --> @Kazunarit commented on GitHub (Mar 31, 2025): ![Image](https://github.com/user-attachments/assets/df83fba3-a401-435e-a2a0-916752211aa6) Run with 0.6.3. When running the sample program that repeats "tell me a story", memory usage continues to increase for Gemma3:12b and Gemma3:27b. It does not increase for other models. In the attached graph, gemma3:12b is running from around 10:52, phi4 from just after 11:01, deepseek-r1:14b from around 11:07. Increased memory usage will cause crashes on systems that run continuously. I would appreciate it if you could address this issue. --- import requests import json import time # Global Constants OLLAMA_API_URL = "http://localhost:11434/api/chat" OLLAMA_MODEL = "gemma3:12b" REQUEST_COUNT = 1000 # Number of requests CONTEXT_LENGTH = 8192 # Context length def send_request(count): """ Sends a request to the Ollama API and processes the streaming response. """ data = { "model": OLLAMA_MODEL, "messages": [{"role": "user", "content": "Tell me a story"}], "stream": True, "options": {"num_ctx": CONTEXT_LENGTH} } print(f"\n===== Request {count}/{REQUEST_COUNT} =====") try: response = requests.post(OLLAMA_API_URL, json=data, stream=True) response.raise_for_status() print(f"\nStory #{count}:\n") # Process streaming response for line in response.iter_lines(): if line: try: chunk = json.loads(line) if "message" in chunk and "content" in chunk["message"]: print(chunk["message"]["content"], end="", flush=True) if chunk.get("done"): break except json.JSONDecodeError as e: print(f"\nJSON Decode Error: {e}") print(f"Problematic line: {line}") continue print("\n") # Ensure newline after story return True except requests.exceptions.RequestException as e: print(f"Error: API request failed: {e}") return False def main(): """ Main execution function. """ print(f"Starting Ollama API requests") print(f"API URL: {OLLAMA_API_URL}") print(f"Model: {OLLAMA_MODEL}") print(f"Context Length: {CONTEXT_LENGTH}") print(f"Total Requests: {REQUEST_COUNT}") print("=" * 50) for count in range(1, REQUEST_COUNT + 1): if not send_request(count): break # Stop execution on API failure time.sleep(1) # Delay between requests to avoid overload print(f"\nExecution complete. {count} stories requested.") if __name__ == "__main__": main()
Author
Owner

@rzykov commented on GitHub (Mar 31, 2025):

I observed the same issue with Gemma 3 4b and a long context on 0.6.3, though 0.6.3 definitely reduced the leak. My context length is 50,000, running on a 3090 GPU.

<!-- gh-comment-id:2766335608 --> @rzykov commented on GitHub (Mar 31, 2025): I observed the same issue with Gemma 3 4b with a long context on 0.6.3. But 0.6.3 definitely reduced the leak. My context length is 50 000. 3090 GPU
Author
Owner

@rick-github commented on GitHub (Mar 31, 2025):

Growth in RSS is being investigated in https://github.com/ollama/ollama/issues/10040.

<!-- gh-comment-id:2766348761 --> @rick-github commented on GitHub (Mar 31, 2025): Growth in RSS is being investigated in https://github.com/ollama/ollama/issues/10040.
Author
Owner

@jessegross commented on GitHub (Apr 8, 2025):

Closing this, as the original issue related to VRAM has been solved; please follow up on the system memory leak in #10040.

<!-- gh-comment-id:2787712894 --> @jessegross commented on GitHub (Apr 8, 2025): Closing this as original issue related to VRAM has been solved, please follow on the system memory leak in #10040
Reference: github-starred/ollama#52915