[GH-ISSUE #10155] Feature Request: Always use the GPU #6663

Closed
opened 2026-04-12 18:22:21 -05:00 by GiteaMirror · 13 comments

Originally created by @yukkuriTV on GitHub (Apr 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10155

If you try to run a model that exceeds the GPU's VRAM capacity, it falls back to the CPU, which is inefficient.

If system memory fallback is available, such models run faster on the GPU using that fallback than they do on the CPU.

Even with system memory limited to 4GB and the model running alongside other services that use llama.cpp, the model was much faster using the GPU's very slow virtual memory than it was on the CPU.

A force-GPU environment variable is needed, one that returns an error if you try to run a model that exceeds VRAM capacity while system memory fallback is unavailable.
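
For illustration only, a sketch of the requested behavior; the OLLAMA_FORCE_GPU variable and the error message below are hypothetical and do not exist in ollama today:

C:\> rem OLLAMA_FORCE_GPU is the requested, hypothetical variable
C:\> set OLLAMA_FORCE_GPU=1
C:\> ollama run gemma3:12b-it-q8_0
Error: model does not fit in available VRAM and system memory fallback is disabled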

GiteaMirror added the feature request label 2026-04-12 18:22:21 -05:00

@rick-github commented on GitHub (Apr 7, 2025):

C:\> echo FROM model > Modelfile
C:\> echo PARAMETER num_gpu 999 >> Modelfile
C:\> ollama create model:gpu
C:\> ollama run model:gpu

If you have time, I would appreciate it if you could do a little performance test and post the results here. In my tests, using system memory fallback in this way always leads to much poorer performance (https://github.com/ollama/ollama/issues/7584#issuecomment-2466715900) than the split VRAM/RAM method that ollama usually uses. However, I have only run these tests on Linux as I don't have access to a Windows system with an Nvidia GPU, and it could be that the Windows Nvidia driver performs differently.

The test is easy. First run a baseline:

C:\> ollama run model --verbose 'why is the sky blue?'

Then run the GPU-only model:

C:\> ollama run model:gpu --verbose 'why is the sky blue?'

Post the output of both runs, and if possible, the server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).
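
On Windows, the server log referenced above is written under %LOCALAPPDATA%\Ollama (per the troubleshooting doc linked above), so one way to get at it is:

C:\> rem Opens the folder containing server.log
C:\> explorer %LOCALAPPDATA%\Ollama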


@yukkuriTV commented on GitHub (Apr 7, 2025):

"ollama run gemma3:4b-it-q8_0:gpu --verbose 'why is the sky blue?'" could not be executed. The same result was obtained when changing to "ollama run gemma3:gpu:4b-it-q8_0 --verbose 'why is the sky blue?'".

The error was "Error: invalid model path", which indicates the model does not exist, but it differed from "Error: pull model manifest: file does not exist", which appears when selecting a non-existent model name such as "modeltest".

Also, ollama on Windows did not behave like the "split VRAM/RAM method" at all; it only used RAM without allocating anything to VRAM.

I also executed "ollama run gemma3:1b-it-q8_0 --verbose 'why is the sky blue?'", but that ran 100% on the GPU (VRAM only).

In fact, I have never even experienced the significant performance drop caused by VRAM overflow spilling into RAM.

These are the results of executing the commands.

C:\Users\yukkuriTV>ollama run gemma3:4b-it-q8_0 --verbose "why is the sky blue?"
Okay, let's break down why the sky is blue! It's a really fascinating phenomenon that boils down to something
called Rayleigh scattering. Here's the explanation:

1. Sunlight and Colors:

  • Sunlight, as we often think of it, is actually made up of all the colors of the rainbow. You can see this
    when light passes through a prism and splits into its component colors.

2. The Atmosphere & Tiny Particles:

  • The Earth is surrounded by an atmosphere – a layer of gases like nitrogen and oxygen.
  • Within this atmosphere are incredibly tiny particles: mostly nitrogen and oxygen molecules. They’re so small
    that they’re much smaller than the wavelengths of visible light.

3. Rayleigh Scattering – The Key Process:

  • What it is: When sunlight enters the atmosphere, it bumps into these tiny air molecules. This causes the
    light to scatter in different directions.
  • Wavelength Matters: Here's the crucial part: shorter wavelengths of light (like blue and violet) are
    scattered much more strongly than longer wavelengths (like red and orange). This is because the scattering is
    inversely proportional to the fourth power of the wavelength. (This means that if you halve the wavelength, you
    increase the scattering by a factor of 16!)

4. Why Blue Specifically?

  • Violet light is scattered even more than blue light. However, the sun emits less violet light than blue
    light, and our eyes are also less sensitive to violet. As a result, we primarily see the scattered blue light
    coming from all directions.

5. Sunsets & Sunrises (Why are they red/orange?)

  • At sunset and sunrise, the sunlight has to travel through much more of the atmosphere to reach our eyes.
  • During this long journey, almost all the blue light has been scattered away. The longer wavelengths – red
    and orange – are able to penetrate through the atmosphere and reach our eyes, giving us those beautiful
    colors.

In simple terms: The sky is blue because blue light from the sun is scattered around by tiny air molecules
in the atmosphere, and that's the color we see.


Resources for further learning:

  • NASA – Why is the sky blue? https://science.nasa.gov/sky-observatory/ask-an-astronomer/why-is-the-sky-blue/
  • Wikipedia – Rayleigh scattering: https://en.wikipedia.org/wiki/Rayleigh_scattering

Do you want me to delve deeper into any specific aspect of this explanation, such as:

  • The math behind Rayleigh scattering?
  • How clouds affect the color of the sky?

total duration: 1m51.6125673s
load duration: 42.247ms
prompt eval count: 15 token(s)
prompt eval duration: 193.6958ms
prompt eval rate: 77.44 tokens/s
eval count: 623 token(s)
eval duration: 1m51.3766245s
eval rate: 5.59 tokens/s

C:\Users\yukkuriTV>ollama run gemma3:4b-it-q8_0:gpu --verbose 'why is the sky blue?'
Error: invalid model path

C:\Users\yukkuriTV>ollama run gemma3:gpu:4b-it-q8_0 --verbose 'why is the sky blue?'
Error: invalid model path

C:\Users\yukkuriTV>ollama run modeltest --verbose 'why is the sky blue?'
pulling manifest
Error: pull model manifest: file does not exist

C:\Users\yukkuriTV>ollama run gemma3:1b-it-q8_0 --verbose 'why is the sky blue?'
pulling manifest
pulling 62901574f252... 100% ▕█████████████████████████████████████████████████████▏ 1.1 GB
pulling e0a42594d802... 100% ▕█████████████████████████████████████████████████████▏ 358 B
pulling dd084c7d92a3... 100% ▕█████████████████████████████████████████████████████▏ 8.4 KB
pulling 3116c5225075... 100% ▕█████████████████████████████████████████████████████▏ 77 B
pulling de9e0e095f71... 100% ▕█████████████████████████████████████████████████████▏ 490 B
verifying sha256 digest
writing manifest
success
The sky is blue due to a phenomenon called Rayleigh scattering. Here's a breakdown of how it works:

  1. Sunlight is Made of All Colors: White sunlight is actually composed of all the colors of the rainbow –
    red, orange, yellow, green, blue, indigo, and violet.

  2. Entering the Atmosphere: When sunlight enters the Earth's atmosphere, it bumps into tiny air molecules
    (mostly nitrogen and oxygen).

  3. Rayleigh Scattering: This is where the magic happens. Rayleigh scattering describes how light is
    scattered by particles of a much smaller wavelength than the light itself. Blue and violet light have shorter
    wavelengths than other colors.

  4. Blue Dominates: Because blue and violet light are scattered much more strongly than other colors, they
    bounce around the atmosphere in all directions. This scattered blue light is what we see when we look up at
    the sky.

Why not violet then?

  • Less Violet in Sunlight: The sun emits slightly less violet light than blue light.
  • Our Eyes are Less Sensitive: Our eyes are also less sensitive to violet light.

Think of it like this: Imagine throwing a bunch of marbles (blue light) and small pebbles (red light) at a
field. The marbles are more likely to bounce in random directions, while the pebbles are more likely to go
straight through.

Do you want to learn more about:

  • Why are sunsets red? (Because the sun is lower in the sky)
  • The effect of pollution on the sky?
  • How does scattering work in different atmospheric conditions?

total duration: 7.4904928s
load duration: 2.0651936s
prompt eval count: 16 token(s)
prompt eval duration: 309.9919ms
prompt eval rate: 51.61 tokens/s
eval count: 342 token(s)
eval duration: 5.1138168s
eval rate: 66.88 tokens/s


@rick-github commented on GitHub (Apr 7, 2025):

You have to create the model before you can test it.

But gemma3:4b-it-q8_0 and gemma3:1b-it-q8_0 are very small models and will (unless you have a tiny GPU) always fit in VRAM, so you will get no performance increase by setting num_gpu high.
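
A model reference takes only a single tag, so the two-colon names tried above can never resolve; the create step has to mint a new single-tag name first. A sketch of the intended sequence, with gemma3-q8-gpu as an arbitrary new name:

C:\> echo FROM gemma3:4b-it-q8_0 > Modelfile
C:\> echo PARAMETER num_gpu 999 >> Modelfile
C:\> ollama create gemma3-q8-gpu -f Modelfile
C:\> ollama run gemma3-q8-gpu --verbose "why is the sky blue?"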


@yukkuriTV commented on GitHub (Apr 10, 2025):

I created the model based on the site below. However, if you look closely, you'll notice I was using 4b-it-q8_0 by mistake instead of the usual 4b-it-fp16; that one should have loaded fully into the GPU before.

https://ollama.com/library/gemma3/tags

I ran tests with gemma3 4b-it-fp16 using the following methods, and in every case it ran on the CPU only, without using the GPU at all.

1. Changed the system memory fallback setting in the NVIDIA Control Panel from "Prefer system memory fallback" to "Driver default" or "Prefer no system memory fallback".

2. Set GGML_CUDA_ENABLE_UNIFIED_MEMORY in the system and user environment variables to 1 or 0 (this should not be relevant on Windows, but just to be sure).

3. Changed the GeForce driver to the Studio driver.

4. Set the advanced parameters in Open WebUI as follows:

   num_thread (Ollama): 1 to 12
   num_gpu (Ollama): 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 256
   use_mmap (Ollama): enabled or disabled

5. Clean installed Windows 10 (I did the clean install for a different reason, but it didn't fix the problem, so I'm noting it here).

6. Ran the System File Checker.

7. Rolled back Windows Update (from March to February).

8. Specified the GPU for ollama.exe in the graphics performance settings.

When the VRAM is exceeded, it switches completely to CPU only without spilling to RAM, with CUDA usage at 0% (perhaps the GPU is rejected internally and never used, even for an instant). If this is not the normal behavior of Ollama, a bug has occurred.

Note that, for some reason, partway through testing fp16, ollama either crashes or uses all the system memory and crashes the entire OS, so I switched to gemma3:4b-it-q8_0.
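
A quick way to confirm how a loaded model is actually placed is ollama ps, which reports the CPU/GPU split for each loaded model; the values below are illustrative only:

C:\> ollama ps
NAME                 ID              SIZE      PROCESSOR    UNTIL
gemma3:4b-it-q8_0    a1b2c3d4e5f6    5.6 GB    100% CPU     4 minutes from now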

The following is the server log.


@yukkuriTV commented on GitHub (Apr 10, 2025):

2025-04-10 23:05:36.818 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "POST /api/v1/chats/new HTTP/1.1" 200 - {}
2025-04-10 23:05:36.846 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /static/favicon.png HTTP/1.1" 304 - {}
2025-04-10 23:05:36.870 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:05:36.893 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "POST /api/v1/chats/0056fa76-02e7-4ef5-80bf-158502d9404d HTTP/1.1" 200 - {}
2025-04-10 23:05:36.966 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /static/favicon.png HTTP/1.1" 304 - {}
2025-04-10 23:05:36.975 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
Batches: 100%|███████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4.88it/s]
2025-04-10 23:05:37.240 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "POST /api/v1/memories/query HTTP/1.1" 200 - {}
2025-04-10 23:05:37.600 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50265 - "GET /api/v1/chats/72d2437f-43b2-49c6-8c6a-408710f3dcb3 HTTP/1.1" 200 - {}
2025-04-10 23:05:45.451 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "POST /api/chat/completions HTTP/1.1" 200 - {}
2025-04-10 23:06:25.410 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50272 - "GET /api/v1/chats/a923773e-d085-467b-a27a-414677b424cb HTTP/1.1" 200 - {}
2025-04-10 23:06:25.412 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50273 - "GET /api/v1/chats/ba8328ff-15e5-4f59-9c0d-595af67c53c9 HTTP/1.1" 200 - {}
2025-04-10 23:06:25.417 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:06:25.421 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50279 - "GET /api/v1/chats/654625ec-cbbc-4410-b0de-a3d18063a828 HTTP/1.1" 200 - {}
2025-04-10 23:06:25.421 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/v1/chats/087c93bf-00a0-41f5-830f-8a563f521156 HTTP/1.1" 200 - {}
2025-04-10 23:06:25.423 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50276 - "GET /c/0056fa76-02e7-4ef5-80bf-158502d9404d HTTP/1.1" 200 - {}
2025-04-10 23:06:25.495 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50276 - "GET /static/loader.js HTTP/1.1" 304 - {}
2025-04-10 23:06:25.499 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /static/splash.png HTTP/1.1" 304 - {}
2025-04-10 23:06:25.955 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /static/favicon.ico HTTP/1.1" 304 - {}
2025-04-10 23:06:25.978 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/config HTTP/1.1" 200 - {}
2025-04-10 23:06:26.006 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/v1/auths/ HTTP/1.1" 200 - {}
2025-04-10 23:06:26.016 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/config HTTP/1.1" 200 - {}
2025-04-10 23:06:26.031 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/changelog HTTP/1.1" 200 - {}
2025-04-10 23:06:26.035 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50276 - "GET /api/v1/users/user/settings HTTP/1.1" 200 - {}
2025-04-10 23:06:26.045 | INFO | open_webui.routers.openai:get_all_models:389 - get_all_models() - {}
2025-04-10 23:06:26.257 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50288 - "GET /manifest.json HTTP/1.1" 200 - {}
2025-04-10 23:06:26.816 | INFO | open_webui.routers.ollama:get_all_models:300 - get_all_models() - {}
2025-04-10 23:06:27.083 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50276 - "GET /api/models HTTP/1.1" 200 - {}
2025-04-10 23:06:27.092 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50276 - "GET /api/v1/configs/banners HTTP/1.1" 200 - {}
2025-04-10 23:06:27.099 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50276 - "GET /api/v1/tools/ HTTP/1.1" 200 - {}
2025-04-10 23:06:27.123 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/v1/chats/0056fa76-02e7-4ef5-80bf-158502d9404d HTTP/1.1" 200 - {}
2025-04-10 23:06:27.140 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/v1/channels/ HTTP/1.1" 200 - {}
2025-04-10 23:06:27.141 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50279 - "GET /static/favicon.png HTTP/1.1" 304 - {}
2025-04-10 23:06:27.155 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /api/v1/chats/0056fa76-02e7-4ef5-80bf-158502d9404d/tags HTTP/1.1" 200 - {}
2025-04-10 23:06:27.167 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 - {}
2025-04-10 23:06:27.169 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /api/v1/users/user/settings HTTP/1.1" 200 - {}
2025-04-10 23:06:27.176 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /api/v1/chats/pinned HTTP/1.1" 200 - {}
2025-04-10 23:06:27.398 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50279 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:06:27.407 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50262 - "GET /api/v1/folders/ HTTP/1.1" 200 - {}
2025-04-10 23:06:27.438 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50276 - "GET /api/version/updates HTTP/1.1" 200 - {}
2025-04-10 23:06:27.517 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50280 - "GET /ollama/api/version HTTP/1.1" 200 - {}
2025-04-10 23:06:36.564 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50298 - "GET /api/v1/chats/ceb6d3fb-a5b3-47bf-bca2-b45ab4264e45 HTTP/1.1" 200 - {}
2025-04-10 23:06:36.573 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50298 - "GET /api/v1/chats/ceb6d3fb-a5b3-47bf-bca2-b45ab4264e45 HTTP/1.1" 200 - {}
2025-04-10 23:06:40.908 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50298 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:06:40.917 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50299 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 - {}
2025-04-10 23:06:55.960 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50304 - "GET /api/v1/chats/0056fa76-02e7-4ef5-80bf-158502d9404d HTTP/1.1" 200 - {}
2025-04-10 23:06:57.784 | INFO | uvicorn.protocols.http.httptools_impl:send:476 - 127.0.0.1:50304 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 - {}


@rick-github commented on GitHub (Apr 10, 2025):

This log is not from ollama.


@yukkuriTV commented on GitHub (Apr 10, 2025):

Sorry, here is the correct one.
2025/04/10 23:03:39 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\yukkuriTV\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-10T23:03:39.906+09:00 level=INFO source=images.go:458 msg="total blobs: 19"
time=2025-04-10T23:03:39.907+09:00 level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-10T23:03:39.909+09:00 level=INFO source=routes.go:1298 msg="Listening on 127.0.0.1:11434 (version 0.6.5)"
time=2025-04-10T23:03:39.909+09:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-10T23:03:39.910+09:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-04-10T23:03:39.910+09:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-04-10T23:03:40.160+09:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-23f2619b-2453-9ed8-f622-383f1a5428fb library=cuda compute=7.5 driver=12.6 name="NVIDIA GeForce GTX 1650" overhead="286.0 MiB"
time=2025-04-10T23:03:40.162+09:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-23f2619b-2453-9ed8-f622-383f1a5428fb library=cuda variant=v12 compute=7.5 driver=12.6 name="NVIDIA GeForce GTX 1650" total="4.0 GiB" available="3.2 GiB"
time=2025-04-10T23:05:37.754+09:00 level=INFO source=server.go:105 msg="system memory" total="11.8 GiB" free="8.0 GiB" free_swap="34.7 GiB"
time=2025-04-10T23:05:37.756+09:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=6 layers.model=35 layers.offload=0 layers.split="" memory.available="[3.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="4.0 GiB" memory.required.partial="0 B" memory.required.kv="214.0 MiB" memory.required.allocations="[0 B]" memory.weights.total="3.8 GiB" memory.weights.repeating="3.2 GiB" memory.weights.nonrepeating="680.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-10T23:05:37.877+09:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-10T23:05:37.886+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-10T23:05:37.886+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-10T23:05:37.886+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-10T23:05:37.886+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-10T23:05:37.886+09:00 level=WARN source=ggml.go:152 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-10T23:05:37.895+09:00 level=INFO source=server.go:405 msg="starting llama server" cmd="C:\Users\yukkuriTV\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\yukkuriTV\.ollama\models\blobs\sha256-b3ec67796e032db2b124e3b57ac83dadeed7cd55d62a673f6cecd9f4e1a611be --ctx-size 2048 --batch-size 512 --n-gpu-layers 6 --threads 6 --parallel 1 --port 50267"
time=2025-04-10T23:05:37.898+09:00 level=INFO source=sched.go:451 msg="loaded runners" count=1
time=2025-04-10T23:05:37.898+09:00 level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-04-10T23:05:37.899+09:00 level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-04-10T23:05:37.916+09:00 level=INFO source=runner.go:816 msg="starting ollama engine"
time=2025-04-10T23:05:37.943+09:00 level=INFO source=runner.go:879 msg="Server listening on 127.0.0.1:50267"
time=2025-04-10T23:05:38.054+09:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-04-10T23:05:38.054+09:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-04-10T23:05:38.054+09:00 level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q8_0 name="" description="" num_tensors=883 num_key_values=36
load_backend: lo


@rick-github commented on GitHub (Apr 10, 2025):

time=2025-04-10T23:03:40.162+09:00 level=INFO source=types.go:130 msg="inference compute"
 id=GPU-23f2619b-2453-9ed8-f622-383f1a5428fb library=cuda variant=v12 compute=7.5 driver=12.6
 name="NVIDIA GeForce GTX 1650" total="4.0 GiB" available="3.2 GiB"

OK, you have a tiny GPU, with only 3.2G available.

time=2025-04-10T23:05:37.756+09:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=6
 layers.model=35 layers.offload=0 layers.split="" memory.available="[3.2 GiB]" memory.gpu_overhead="0 B"
 memory.required.full="4.0 GiB" memory.required.partial="0 B" memory.required.kv="214.0 MiB"
 memory.required.allocations="[0 B]" memory.weights.total="3.8 GiB" memory.weights.repeating="3.2 GiB"
 memory.weights.nonrepeating="680.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB"
 projector.weights="795.9 MiB" projector.graph="1.0 GiB"

There's a minimum amount of VRAM required for the weights and graphs before ollama allocates any VRAM for context. In this case, ollama estimates that after the graphs and weights there is no room left for context in VRAM, and if no context can be allocated in VRAM, the model runs in system RAM.
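
Concretely, reading the offload line above: memory.required.full is 4.0 GiB against 3.2 GiB available, and the weights alone (memory.weights.total = 3.8 GiB, plus projector.weights = 795.9 MiB and the graph buffers) already exceed the available VRAM, so layers.offload=0 and memory.required.partial is 0 B: nothing is placed on the GPU and the model runs from system RAM.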


@yukkuriTV commented on GitHub (Apr 10, 2025):

I think I specified the GeForce GTX 1660 SUPER. The VRAM capacity is 6GB. Is it not being recognized? I don't have a GeForce GTX 1650 connected (I've already sold it).


@rick-github commented on GitHub (Apr 10, 2025):

What's the output of nvidia-smi?


@yukkuriTV commented on GitHub (Apr 10, 2025):

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 561.09 Driver Version: 561.09 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 1660 WDDM | 00000000:01:00.0 On | N/A |
| N/A 51C P0 16W / 00W | 383MiB / 0000MiB | 2% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2560 C ...ta\Local\Programs\Ollama\ollama.exe N/A |
| 0 N/A N/A 5408 C+G ...oogle\Chrome\Application\chrome.exe N/A |
| 0 N/A N/A 5976 C+G ...anese Input\GoogleIMEJaRenderer.exe N/A |
| 0 N/A N/A 6216 C+G ...siveControlPanel\SystemSettings.exe N/A |
| 0 N/A N/A 8548 C+G ...2txyewy\StartMenuExperienceHost.exe N/A |
| 0 N/A N/A 9264 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 10796 C+G ....Search_cw5n1h2txyewy\SearchApp.exe N/A |
| 0 N/A N/A 10956 C+G ...crosoft\Edge\Application\msedge.exe N/A |
| 0 N/A N/A 11812 C+G ...CBS_cw5n1h2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 13208 C+G ...ces\Razer Central\Razer Central.exe N/A |
| 0 N/A N/A 14840 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A |
+-----------------------------------------------------------------------------------------+


@yukkuriTV commented on GitHub (Apr 10, 2025):

Since something is clearly wrong, I will do a clean install of the OS and reset the BIOS (looking closely, even the reported memory capacity matches my previous configuration).
However, even if the card is recognized correctly, the VRAM is still 6GB, so I still need a way to force-load Gemma 3 12B q8_0 onto the GPU with system memory fallback.
If a feature like that already exists, please let me know.


@rick-github commented on GitHub (Apr 10, 2025):

| N/A 51C P0 16W / 00W | 383MiB / 0000MiB | 2% Default |

nvidia-smi is reporting 0 GiB of VRAM, so something is broken or incorrectly configured. GeForce GTX 1660 WDDM is apparently a version of the card tuned for use with WDDM (Windows Display Driver Model), so maybe you need to update the drivers for that.
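
A quick way to see what the driver reports, independent of the table layout, is nvidia-smi's query mode:

C:\> nvidia-smi --query-gpu=name,driver_version,memory.total,memory.used,memory.free --format=csv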

Reference: github-starred/ollama#6663