[GH-ISSUE #7509] Support partial loads of LLaMA 3.2 Vision 11b on 6G GPUs #4776

Open
opened 2026-04-12 15:43:04 -05:00 by GiteaMirror · 16 comments

Originally created by @Romultra on GitHub (Nov 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7509

What is the issue?

Description:
I encountered an issue where the LLaMA 3.2 Vision 11b model loads entirely in CPU RAM, without utilizing the GPU memory as expected. The issue occurs on my Windows-based laptop with 6GB VRAM, where models that exceed GPU memory capacity should load the rest into system RAM while still leveraging the GPU.

Steps to Reproduce:

  1. Run LLaMA 3.2 Vision 11b with ollama on a system with limited VRAM (6 GB in my case).
  2. Check the memory allocation using the ollama ps command.
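
For concreteness, the reproduction boils down to something like the following (the prompt is illustrative; the model tag is the default one from the Ollama library):

ollama run llama3.2-vision "hello"      # load the 11b vision model
ollama ps                               # in a second terminal, check the VRAM/RAM split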

Expected Behavior:
When running models larger than available VRAM, the model should partially load into VRAM and utilize system RAM for the remainder. This behavior works as intended for other models (e.g., Llama 3.1), which utilize the GPU and offload excess data to RAM.

Actual Behavior:
When running Llama 3.2 Vision, the entire model loads into the CPU RAM, as shown in the output of the ollama ps command. Additionally, the Task Manager indicates no significant GPU or VRAM usage, confirming that the model is not utilizing the GPU at all.

Laptop Specifications:

  • CPU: AMD Ryzen 9 7940HS
  • RAM: 16 GB
  • GPU: NVIDIA RTX 4050 Mobile 6 GB VRAM
  • Ollama Version: Pre-release 0.4.0-rc8

Supporting Evidence:

  1. Screenshot of ollama ps showing LLaMA 3.1 partially loading into VRAM (expected behavior):
    image: https://github.com/user-attachments/assets/1ad9f015-2209-4f9d-aae7-01d8c0b877c8

  2. Screenshot of ollama ps showing LLaMA 3.2 Vision 11b loaded fully into CPU RAM:
    image: https://github.com/user-attachments/assets/05ec4610-a8e3-48e7-b93d-18d137f2b5e1

Further Testing:
On my desktop with higher VRAM (24GB):
Specs:

  • Processor: Ryzen 7 7800X3D
  • Memory: 64 GB RAM
  • GPU: NVIDIA RTX 4090 24GB VRAM
  • Ollama Version: Pre-release 0.4.0-rc8

Running the LLaMA 3.2 Vision 11b model on the desktop:

  • The model loaded entirely in the GPU VRAM as expected.
  • Screenshot of ollama ps for this case:
    image: https://github.com/user-attachments/assets/ee771212-6e1d-4821-afec-9ac4fd8871ad

Running the LLaMA 3.2 Vision 90b model on the desktop (which exceeds 24GB VRAM):

  • The model loaded partially into GPU and partially into CPU RAM, which is correct.
  • Screenshot of ollama ps for this case:
    image: https://github.com/user-attachments/assets/b44f1b82-0b42-4c00-886f-a5d3c15ac43a

Note: Both machines are running Windows, and GPU drivers are up to date.

Conclusion:
The behavior seems specific to running the LLaMA 3.2 Vision 11b model on systems with VRAM insufficient to load the entire model, where the expected split between VRAM and RAM doesn't occur.

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.4.0-rc8

GiteaMirror added the feature request label 2026-04-12 15:43:04 -05:00

@rick-github commented on GitHub (Nov 5, 2024):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging. As a guess, I'd say OLLAMA_NUM_PARALLEL is unset and your context window is pushing everything into system RAM.
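
For anyone checking that guess, a minimal sketch (Windows syntax; the values are illustrative, not a confirmed fix, and OLLAMA_DEBUG is only there to produce the debug-level logs asked for above):

# Windows (PowerShell) — restart the Ollama app afterwards so the server picks these up:
setx OLLAMA_NUM_PARALLEL 1   # one concurrent request, so the context/KV allocation isn't multiplied
setx OLLAMA_DEBUG 1          # emit debug-level server logs for the next load attempt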

@jessegross commented on GitHub (Nov 5, 2024):

There are minimum pieces of the model that have to be loaded in VRAM in their entirety for anything to run on the GPU. These pieces are much larger for llama3.2-vision than they are for llama3.1. Most likely your available VRAM (including any used by Windows or other processes) is falling below this threshold.

@dhiltgen commented on GitHub (Nov 5, 2024):

To build on what Jesse mentioned, with the current default quantization, llama3.2-vision cannot currently load on a 6 GB GPU.

Some excerpts from the server log on a 6 GB GPU system:

time=2024-11-05T18:36:56.141Z level=DEBUG source=memory.go:173 msg="gpu has too little memory to allocate any layers" id=GPU-d9bdc19d-a9f0-663d-27dc-d8e6b4c715db library=cuda variant=v12 compute=6.1 driver=12.5 name="NVIDIA GeForce GTX 1060 6GB" total="5.9 GiB" available="5.9 GiB" minimum_memory=479199232 layer_size="148.9 MiB" gpu_zer_overhead="4.6 GiB" partial_offload="669.5 MiB" full_offload="258.5 MiB"
time=2024-11-05T18:36:56.141Z level=DEBUG source=memory.go:317 msg="insufficient VRAM to load any model layers"
...
time=2024-11-05T18:36:56.143Z level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama2224773720/runners/cpu_avx/ollama_llama_server --model /home/daniel/.ollama/models/blobs/sha256-652e85aa1e14c9087a4ccc3ab516fb794cbcf152f8b4b8d3c0b828da4ada62d9 --ctx-size 2048 --batch-size 512 --embedding --verbose --mmproj /home/daniel/.ollama/models/blobs/sha256-622429e8d31810962dd984bc98559e706db2fb1d40e99cb073beb7148d909d73 --threads 6 --no-mmap --parallel 1 --port 40901"
% ollama ps
NAME                        ID              SIZE      PROCESSOR    UNTIL
x/llama3.2-vision:latest    06bfba5b92a1    6.7 GB    100% CPU     4 minutes from now

There's some complexity around the vision portion of this new model, but this seems like a good feature request: see whether we can get partial loads to fit onto smaller GPUs. At present, I think ~8 GB is roughly where you need to be for a partial load; when partially loading, the model requires quite a bit more memory.

Here's an example 8G system:

% ollama ps
NAME                        ID              SIZE     PROCESSOR          UNTIL
x/llama3.2-vision:latest    06bfba5b92a1    12 GB    37%/63% CPU/GPU    4 minutes from now
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A     32749      C   ...unners/cuda_v12/ollama_llama_server       7510MiB |
+-----------------------------------------------------------------------------------------+
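
For anyone who wants to experiment with forcing a partial offload anyway, the API accepts a num_gpu option that requests a specific number of offloaded layers; a hedged sketch (layer count and prompt are illustrative, and the load can still fall back to 100% CPU if the projector doesn't fit):

$ curl -s http://localhost:11434/api/generate -d '{
    "model": "llama3.2-vision",
    "prompt": "hello",
    "stream": false,
    "options": { "num_gpu": 10 }
  }'
$ ollama ps    # check whether the CPU/GPU split changed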

@apmanikandan commented on GitHub (Nov 8, 2024):

Observing the same issue, with the following information in the logs:

  1. gpu VRAM usage didn't recover within timeout
  2. offload to cuda projector.weights="1.8 GiB" projector.graph="2.8 GiB" memory.required.full="6.2 GiB" memory.available="[5.0 GiB]"
  3. starting llama server cmd="/tmp/ollama3492838483/runners/cpu_avx2/ollama_llama_server
  4. multimodal models don't support parallel requests yet

  image: https://github.com/user-attachments/assets/b87ccf23-2de9-4301-808f-bb6b174e71b6

@konstantin1722 commented on GitHub (Nov 15, 2024):

Hello, same problem, the GPU is not being used. I have a 1060 with 6 GB of memory, but even so I'd expect at least some layers to load. Even llava:34b successfully loads 10 of 61 layers for me.

@rick-github commented on GitHub (Nov 15, 2024):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

@konstantin1722 commented on GitHub (Nov 20, 2024):

> Hello, same problem, GPU is not being used.

I'm attaching the full log from the time the service starts until the request is executed and shut down (with debug enabled).

Journal link (txt): https://filebin.net/2ceh37xt14d1f0x1 OR https://jmp.sh/s/b1NqdJ9DO28tEOy4rQhw (in viewing mode)

GitHub: ollama_journal.txt (https://github.com/user-attachments/files/17827857/ollama_journal.txt)

@rick-github commented on GitHub (Nov 20, 2024):

Please attach the log to this thread.

@konstantin1722 commented on GitHub (Nov 20, 2024):

> Please attach the log to this thread.

My comment has been updated.

@rick-github commented on GitHub (Nov 20, 2024):

nov 20 11:35:29 desktop-pc ollama[17132]: time=2024-11-20T11:35:29.780+03:00 level=DEBUG source=memory.go:173 msg="gpu has too little memory to allocate any layers" id=GPU-b931d51f-228b-ac3c-1e70-11d8e499239a library=cuda variant=v12 compute=6.1 driver=12.7 name="NVIDIA GeForce GTX 1060 6GB" total="5.9 GiB" available="4.8 GiB" minimum_memory=479199232 layer_size="148.9 MiB" gpu_zer_overhead="4.6 GiB" partial_offload="669.5 MiB" full_offload="258.5 MiB"
nov 20 11:35:29 desktop-pc ollama[17132]: time=2024-11-20T11:35:29.780+03:00 level=INFO source=memory.go:343 msg="offload to cuda" projector.weights="1.8 GiB" projector.graph="2.8 GiB" layers.requested=-1 layers.model=41 layers.offload=0 layers.split="" memory.available="[4.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="0 B" memory.required.kv="656.2 MiB" memory.required.allocations="[0 B]" memory.weights.total="5.5 GiB" memory.weights.repeating="5.1 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="669.5 MiB"

Vision models differ from plain text models in that they require an additional GGUF file (the projector) to be loaded. Your GPU has 4.8 GiB free, and the projector requires 4.6 GiB; combined with the 0.6 GiB KV cache, that already exceeds what's available, leaving no space to load any of the base model.
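
The "available" figure in that log is whatever the GPU reports as free at load time, so anything else using the card (desktop compositor, browser) eats into it. A quick way to check before loading, using standard nvidia-smi query flags:

$ nvidia-smi --query-gpu=memory.total,memory.free --format=csv
# compare memory.free against the ~5.2 GiB (projector + KV) minimum noted above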

@Bruce-Kwok commented on GitHub (Nov 21, 2024):

I'm using Ollama with the LLaMA 3.2 Vision model (11B), but I noticed that it is utilizing 100% of the CPU instead of the GPU.

mcpadmin@980ti:~$ ollama ps
NAME                      ID              SIZE      PROCESSOR    UNTIL
llama3.2-vision:latest    38107a0cd119    6.7 GB    100% CPU     31 seconds from now
mcpadmin@980ti:~$ nvidia-smi
Thu Nov 21 02:22:58 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 980 Ti Off | 00000000:01:00.0 Off | N/A |
| 0% 57C P8 31W / 280W | 5MiB / 6144MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
abcadmin@980ti:~$

@kreier commented on GitHub (Jan 12, 2025):

In my testing (see https://github.com/ollama/ollama/issues/8310#issuecomment-2585349945) I can confirm that llama3.2-vision currently does not work with 6 GB GPUs and barely fits into 8 GB cards. It seems that at minimum the first 7 of the 41 layers need to be loaded onto the GPU. Yet it looks like these handle the prompt-processing / image-analyzing part, which usually takes much less time than token generation. From my results with the 8 GB and 6 GB cards combined, I'd conclude that the actual token generation is done by the later layers, and those would fit into a 6 GB card.

Would it therefore be possible to run the first 7 (?) layers for prompt processing and image analysis on the CPU, and then offload the token generation to the GPU? When there is no image to analyze, this would run the llama3.2 text model with its ~11 billion parameters on the GPU while still being multi-modal and able to process images if needed.

@GJoe2 commented on GitHub (Mar 3, 2025):

Would it be possible to quantize an even smaller model based on llama3.2-vision:11b?

@rick-github commented on GitHub (Mar 4, 2025):

You can quantize the model to make it smaller, but the quality will suffer.

$ ollama pull llama3.2-vision:11b-instruct-fp16
$ echo FROM llama3.2-vision:11b-instruct-fp16 > Modelfile
$ ollama create --quantize q2_k llama3.2-vision:11b-instruct-q2_K

Quantizing in this case only saves 2.2G compared to the default model.

$ ollama list llama3.2-vision:11b-instruct
NAME                                   ID              SIZE      MODIFIED       
llama3.2-vision:11b-instruct-q2_K      272af51e075c    5.7 GB    13 seconds ago    
llama3.2-vision:11b-instruct-q5_K_M    44cb75911c17    8.9 GB    11 days ago       
llama3.2-vision:11b-instruct-fp16      61be32b20340    21 GB     11 days ago       
llama3.2-vision:11b-instruct-q8_0      7a7cc5461ef1    12 GB     2 weeks ago       
llama3.2-vision:11b-instruct-q4_K_M    38107a0cd119    7.9 GB    3 months ago      

So the overall VRAM footprint only goes down 2G:

$ ollama ps
NAME                                   ID              SIZE     PROCESSOR    UNTIL   
llama3.2-vision:11b-instruct-q4_K_M    38107a0cd119    12 GB    100% GPU     Forever    
llama3.2-vision:11b-instruct-q2_K      272af51e075c    10 GB    100% GPU     Forever    
$ for i in llama3.2-vision:11b-instruct-q4_K_M llama3.2-vision:11b-instruct-q2_K ; do echo -n "$i: " ; echo '{"model": "'$i'",
         "messages":[{
            "role":"user","content":"Describe this image.",
            "images": [
              "'"$(base64 puppy.jpg)"'"
            ]
          }],
         "stream":false}' | curl -s http://localhost:11434/api/chat -d @- | jq  .message.content; done
llama3.2-vision:11b-instruct-q4_K_M: "This photograph features a small, white puppy with a fluffy coat and a distinctive red collar adorned with a gold bell. The puppy's short tail is visible in the background, suggesting it may be sitting or lying down.\n\nThe puppy's posture suggests that it is standing on its paws, although its position makes it difficult to determine for certain. The image is slightly blurry, contributing to the uncertainty surrounding its stance.\n\nThe puppy's gaze is directed towards a point beyond the left side of the frame, as indicated by its head being turned in that direction. This subtle detail adds character to the photograph.\n\nIn the background, a dark area resembling a wall or fence can be seen, providing context for the puppy's location. Overall, this image presents a charming and intimate portrait of a small white puppy."
llama3.2-vision:11b-instruct-q2_K: "The image depicts a small, fluffy white dog sitting on a stone or concrete surface. The dog's fur is short and dense, with a rounded shape that suggests it has been recently groomed. It appears to be healthy and well-cared-for.\n\nThe dog's eyes are dark brown or black, with a shiny appearance. Its nose is small and pointed, giving the impression of being alert and aware of its surroundings. The dog's ears are floppy and rounded, with a natural, relaxed appearance.\n\nThe dog's body is covered in short, dense fur that appears to be white or light-colored. There are no visible patches or markings on the dog's coat, suggesting it may be a purebred breed.\n\nThe background of the image is out of focus, making it difficult to discern any details about the setting or environment. However, the overall atmosphere appears to be calm and peaceful, with the dog seeming relaxed and content in its surroundings.\n\nOverall, the image suggests that this dog is a healthy and well-cared-for individual who appears happy and comfortable in its current environment."

@GJoe2 commented on GitHub (Mar 4, 2025):

Right, so that means compressing the model won't make it fit into 6 GB of VRAM, or if it did, the results would be too degraded. For everyone else: I looked for alternative models and found minicpm-v to be a very good model that fits into 6 GB of VRAM (at least 80% of it), and it excels at OCR of math formulas and text.
Try it out: ollama run minicpm-v

@mihirahlawat commented on GitHub (Apr 22, 2025):

Have you tried OLLAMA_FLASH_ATTENTION=1 and OLLAMA_KV_CACHE_TYPE=q4_0? Maybe quantizing the KV cache will help.
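
For reference, a minimal sketch of setting those two variables (Windows syntax shown, values as suggested above; note that flash attention must be enabled for the quantized KV cache to take effect, and the KV cache is only a small part of this model's footprint, so this alone may not be enough):

# Windows (PowerShell) — restart the Ollama app afterwards:
setx OLLAMA_FLASH_ATTENTION 1
setx OLLAMA_KV_CACHE_TYPE q4_0
# Linux (systemd) installs: add the same variables to the service environment via `systemctl edit ollama.service`.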

Reference: github-starred/ollama#4776