[GH-ISSUE #5240] [LINUX] Not using VRAM #3280

Closed
opened 2026-04-12 13:49:38 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @liberteryen on GitHub (Jun 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5240

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

When I run the model, only 11 MB of VRAM is used, while nearly 5 GB of RAM is used.

```bash
➜  ~ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       4,6Gi        20Gi       697Mi       7,2Gi        26Gi
Swap:             0B          0B          0B
```

The model I use: `sunapi386/llama-3-lexi-uncensored:8b`

CPU: 12th Gen Intel i7-12650H (16) @ 4.600GHz
GPU: NVIDIA GeForce RTX 3050 Mobile
RAM: 32 GB

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

ollama version is 0.1.45

GiteaMirror added the nvidia, bug labels 2026-04-12 13:49:38 -05:00
Author
Owner

@liberteryen commented on GitHub (Jun 23, 2024):

EDIT: Ollama installed from AUR

Author
Owner

@dhiltgen commented on GitHub (Jun 23, 2024):

What do you see in ollama ps and which model are you trying to load?

Assuming it says 100% CPU, please share your server log so we can see why it isn't running on your GPU.

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
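With a systemd-managed install, the server log usually lives in the journal (`journalctl -u ollama`) rather than in `~/.ollama/logs/server.log`. A minimal sketch of the check, assuming the standard `ollama` unit name and using the log line from this issue as sample input:

```shell
# Pull the runner list from the journal (assumes the systemd unit is
# named "ollama", as in the official install):
#   journalctl -u ollama --no-pager | grep "Dynamic LLM libraries"
#
# Classify a captured line; a GPU-enabled build includes cuda/rocm runners.
log_line='Dynamic LLM libraries [cpu_avx cpu_avx2 cpu]'   # sample from this issue
case "$log_line" in
  *cuda*|*rocm*) echo "GPU runners present" ;;
  *)             echo "CPU-only build" ;;
esac
```

If the `Dynamic LLM libraries` line lists only `cpu*` runners, the binary was built without CUDA/ROCm support and will never use the GPU, regardless of driver state.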

Author
Owner

@liberteryen commented on GitHub (Jun 27, 2024):

![resim](https://github.com/ollama/ollama/assets/84645312/9c0e96ca-ff0e-4325-8bde-cdedcbc4c72f)

```bash
➜  ~ cat ~/.ollama/logs/server.log
cat: /home/h/.ollama/logs/server.log: Böyle bir dosya ya da dizin yok
(no such file or directory)
```

journalctl:

```bash
Haz 27 19:00:53 TR systemd[1]: Started Ollama Service.
Haz 27 19:00:53 TR ollama[7527]: 2024/06/27 19:00:53 routes.go:1060: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLL>
Haz 27 19:00:53 TR ollama[7527]: time=2024-06-27T19:00:53.709+03:00 level=INFO source=images.go:725 msg="total blobs: 2"
Haz 27 19:00:53 TR ollama[7527]: time=2024-06-27T19:00:53.710+03:00 level=INFO source=images.go:732 msg="total unused blobs removed: 0"
Haz 27 19:00:53 TR ollama[7527]: time=2024-06-27T19:00:53.710+03:00 level=INFO source=routes.go:1106 msg="Listening on 127.0.0.1:11434 (version 0.1.45)"
Haz 27 19:00:53 TR ollama[7527]: time=2024-06-27T19:00:53.710+03:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama4197040954/runners
Haz 27 19:00:53 TR ollama[7527]: time=2024-06-27T19:00:53.876+03:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cpu]"
Haz 27 19:00:55 TR ollama[7527]: time=2024-06-27T19:00:55.102+03:00 level=INFO source=types.go:98 msg="inference compute" id=GPU-e3887261-dc8c-0463-82a9-ebe5e760166f library=cuda compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3050 Lapt>
Haz 27 19:00:58 TR ollama[7527]: [GIN] 2024/06/27 - 19:00:58 | 200 |    3.382914ms |       127.0.0.1 | HEAD     "/"
Haz 27 19:00:58 TR ollama[7527]: [GIN] 2024/06/27 - 19:00:58 | 200 |  500.383004ms |       127.0.0.1 | POST     "/api/show"
Haz 27 19:00:59 TR ollama[7527]: [GIN] 2024/06/27 - 19:00:59 | 200 |  494.127616ms |       127.0.0.1 | POST     "/api/show"
Haz 27 19:00:59 TR ollama[7527]: time=2024-06-27T19:00:59.762+03:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=19 layers.split="" memory.available="[3.7 GiB]" memory.required.>
Haz 27 19:00:59 TR ollama[7527]: time=2024-06-27T19:00:59.763+03:00 level=INFO source=server.go:359 msg="starting llama server" cmd="/tmp/ollama4197040954/runners/cpu_avx2/ollama_llama_server --model /var/lib/ollama/.ollama/models/blobs/>
Haz 27 19:00:59 TR ollama[7527]: time=2024-06-27T19:00:59.763+03:00 level=INFO source=sched.go:382 msg="loaded runners" count=1
Haz 27 19:00:59 TR ollama[7527]: time=2024-06-27T19:00:59.763+03:00 level=INFO source=server.go:547 msg="waiting for llama runner to start responding"
Haz 27 19:00:59 TR ollama[7527]: time=2024-06-27T19:00:59.763+03:00 level=INFO source=server.go:585 msg="waiting for server to become available" status="llm server error"
Haz 27 19:00:59 TR ollama[7629]: WARN [server_params_parse] Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support | n_gpu_layers=-1 tid="133675588192>
Haz 27 19:00:59 TR ollama[7629]: INFO [main] build info | build=3171 commit="7c26775ad" tid="133675588192064" timestamp=1719504059
Haz 27 19:00:59 TR ollama[7629]: INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | >
Haz 27 19:00:59 TR ollama[7629]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="41667" tid="133675588192064" timestamp=1719504059
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /var/lib/ollama/.ollama/models/blobs/sha256-3f3a13eb3fbc7a52c11f075cd72e476117fc4a9fbc8d93c8e3145bc54bf10a17 (version GGUF>
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   1:                               general.name str              = bk
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   2:                          llama.block_count u32              = 32
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  10:                          general.file_type u32              = 15
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128009
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 128001
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - kv  21:               general.quantization_version u32              = 2
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - type  f32:   65 tensors
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - type q4_K:  193 tensors
Haz 27 19:00:59 TR ollama[7527]: llama_model_loader: - type q6_K:   33 tensors
Haz 27 19:00:59 TR ollama[7527]: llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
Haz 27 19:01:00 TR ollama[7527]: llm_load_vocab: special tokens cache size = 256
Haz 27 19:01:00 TR ollama[7527]: time=2024-06-27T19:01:00.014+03:00 level=INFO source=server.go:585 msg="waiting for server to become available" status="llm server loading model"
Haz 27 19:01:00 TR ollama[7527]: llm_load_vocab: token to piece cache size = 0.8000 MB
```


```bash
➜  ~ ollama ps
NAME                                    ID              SIZE    PROCESSOR       UNTIL
sunapi386/llama-3-lexi-uncensored:8b    ec6c7923b45f    6.3 GB  37%/63% CPU/GPU 2 minutes from now
```


Author
Owner

@liberteryen commented on GitHub (Jun 27, 2024):

Only Xorg is using NVIDIA:

```bash
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1267      G   /usr/lib/Xorg                                   4MiB |
+-----------------------------------------------------------------------------------------+
```
Author
Owner

@dhiltgen commented on GitHub (Jul 5, 2024):

@Hhk78 I don't know how you installed Ollama, but this wasn't an official release from our project; it must have come from some other OS distro or packaging system. The build you installed doesn't have GPU support: `Dynamic LLM libraries [cpu_avx cpu_avx2 cpu]` (there is no cuda or rocm dynamic library built in for your install).

If you [install](https://github.com/ollama/ollama/blob/main/docs/linux.md#install) from our [releases](https://github.com/ollama/ollama/releases) page it should work.
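For reference, the official Linux install documented in ollama's linux.md is a single script; it requires network access and root privileges:

```shell
# Official Linux install script (from the ollama docs). It detects the
# NVIDIA driver and installs release binaries that include GPU runners.
curl -fsSL https://ollama.com/install.sh | sh
```

After reinstalling, the `Dynamic LLM libraries` log line should list a cuda runner in addition to the cpu variants.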

Reference: github-starred/ollama#3280