[GH-ISSUE #13955] Ollama painfully slow with Quadro RTX 6000 8GB vGPU Q profile #9131

Closed
opened 2026-04-12 21:59:05 -05:00 by GiteaMirror · 5 comments

Originally created by @mikeyo on GitHub (Jan 28, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13955

What is the issue?

I have an Ubuntu 24.04 LTS VM running on Proxmox, configured with 32 GB RAM, an 8 GB vGPU VRAM profile, and 8 vCPU cores.
I have set up the latest Ollama running as a service, with OpenWebUI in Docker.

Upon first boot of the VM, responses are very quick, and nvidia-smi shows the GPU being used: 4-5 GB of VRAM and 7% utilization.

However, if I leave OpenWebUI/Ollama idle for around 5 minutes, Ollama takes forever to respond to the next prompt. A simple "say hello" can take over 5 minutes per token to generate!

I have scoured the internet for solutions, some of which suggested keeping the GPU alive or using smaller models, but nothing works. It is always rapid on first boot of the VM but always falls over after sitting idle for a few minutes. I am seriously tearing my hair out with this, on the verge of going bald.
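
For what it's worth, the "keeping the GPU alive" suggestions most likely refer to Ollama's keep-alive behavior: by default the server unloads an idle model after about five minutes, which matches the timing described above. A minimal sketch of how that could be pinned for a systemd-managed install, assuming the service is named `ollama.service` (the override path and value should be verified against your setup):

```shell
# Create a systemd drop-in for the Ollama service:
sudo systemctl edit ollama.service
# ...and in the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_KEEP_ALIVE=-1"
# A value of -1 keeps loaded models resident indefinitely;
# a duration such as "30m" keeps them for that long after the last request.

# Apply the change:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

If responses stay fast with keep-alive pinned, the slowdown is the model being evicted and reloaded; if they still crawl, the problem is more likely at the vGPU/driver layer.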

Please can you advise a troubleshooting procedure and which logs are required, so I can determine whether this is a CUDA, vGPU, Ollama, or configuration issue?

Thank you!

Relevant log output

nvidia-smi
Wed Jan 28 15:14:13 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.105.08             Driver Version: 580.105.08     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  GRID RTX6000-8Q                On  |   00000000:01:00.0 Off |                  N/A |
| N/A   N/A    P0            N/A  /  N/A  |    1015MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1509      C   /usr/local/bin/python3                  268MiB |
|    0   N/A  N/A            1516      C   /opt/app-root/bin/python3               746MiB |
+-----------------------------------------------------------------------------------------+
 ollama --version
ollama version is 0.15.2

ollama list
llama3:latest                                          365c0bd3c000    4.7 GB    27 hours ago
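
To narrow this down, one useful check at the moment the slowdown hits is whether the model is still resident and on the GPU. A sketch using the standard Ollama CLI (flag behavior may vary slightly by version):

```shell
# List currently loaded models, their size, expiry time, and whether they
# sit in GPU or CPU memory (the PROCESSOR column):
ollama ps

# Re-run a prompt with per-phase timing (load duration, prompt eval rate,
# and eval rate in tokens/s) printed after the response:
ollama run llama3 --verbose "say hello"
```

A long "load duration" would point at reload-after-idle; a low eval rate with the model already loaded would point at the GPU/vGPU side.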

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.15.2

GiteaMirror added the bug label 2026-04-12 21:59:05 -05:00

@mikeyo commented on GitHub (Jan 28, 2026):

Not sure if this will help.

Jan 28 15:15:24 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:15:24 | 200 | 27.536µs | 127.0.0.1 | HEAD >
Jan 28 15:15:24 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:15:24 | 200 | 2.713841ms | 127.0.0.1 | GET >
Jan 28 15:17:05 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:17:05 | 200 | 19.874µs | 127.0.0.1 | HEAD >
Jan 28 15:17:05 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:17:05 | 200 | 95.567836ms | 127.0.0.1 | POST >
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.014Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors fr>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not app>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 0: general.architecture str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 1: general.name str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 2: llama.block_count u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 3: llama.context_length u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 4: llama.embedding_length u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 5: llama.feed_forward_length u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 6: llama.attention.head_count u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 7: llama.attention.head_count_kv u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 8: llama.rope.freq_base f32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 10: general.file_type u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 11: llama.vocab_size u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 12: llama.rope.dimension_count u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 13: tokenizer.ggml.model str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 14: tokenizer.ggml.pre str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[st>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i3>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 17: tokenizer.ggml.merges arr[st>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 20: tokenizer.chat_template str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 21: general.quantization_version u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - type f32: 65 tensors
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - type q4_0: 225 tensors
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - type q6_K: 1 tensors
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: file format = GGUF V3 (latest)
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: file type = Q4_0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: file size = 4.33 GiB (4.64 BPW)
Jan 28 15:17:06 moubuntu01 ollama[743]: load: printing all EOG tokens:
Jan 28 15:17:06 moubuntu01 ollama[743]: load: - 128001 ('<|end_of_text|>')
Jan 28 15:17:06 moubuntu01 ollama[743]: load: - 128009 ('<|eot_id|>')
Jan 28 15:17:06 moubuntu01 ollama[743]: load: special tokens cache size = 256
Jan 28 15:17:06 moubuntu01 ollama[743]: load: token to piece cache size = 0.8000 MB
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: arch = llama
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: vocab_only = 1
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: no_alloc = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: model type = ?B
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: model params = 8.03 B
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: general.name = Meta-Llama-3-8B-Instruct
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: vocab type = BPE
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_vocab = 128256
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_merges = 280147
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: BOS token = 128000 '<|begin_of_text|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOS token = 128009 '<|eot_id|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOT token = 128009 '<|eot_id|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: LF token = 198 'Ċ'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOG token = 128001 '<|end_of_text|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOG token = 128009 '<|eot_id|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: max token length = 256
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_load: vocab only - skipping tensors
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=sched.go:452 msg="system memory>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=sched.go:459 msg="gpu memory" i>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=server.go:496 msg="loading mode>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.324Z level=INFO source=device.go:240 msg="model weight>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.324Z level=INFO source=device.go:251 msg="kv cache" de>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.324Z level=INFO source=device.go:262 msg="compute grap>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.324Z level=INFO source=device.go:272 msg="total memory>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.331Z level=INFO source=runner.go:965 msg="starting go >
Jan 28 15:17:06 moubuntu01 ollama[743]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-sse42.so
Jan 28 15:17:06 moubuntu01 ollama[743]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Jan 28 15:17:06 moubuntu01 ollama[743]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 28 15:17:06 moubuntu01 ollama[743]: ggml_cuda_init: found 1 CUDA devices:
Jan 28 15:17:06 moubuntu01 ollama[743]: Device 0: GRID RTX6000-8Q, compute capability 7.5, VMM: no, ID: GPU-f254cbb0->
Jan 28 15:17:06 moubuntu01 ollama[743]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-c>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.390Z level=INFO source=ggml.go:104 msg=system CPU.0.SS>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.392Z level=INFO source=runner.go:1001 msg="Server list>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.397Z level=INFO source=runner.go:895 msg=load request=>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.397Z level=INFO source=server.go:1347 msg="waiting for>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.397Z level=INFO source=server.go:1381 msg="waiting for>
Jan 28 15:17:06 moubuntu01 ollama[743]: ggml_backend_cuda_device_get_memory device GPU-f254cbb0-fc5b-11f0-a81b-ea3fb678>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_load_from_file_impl: using device CUDA0 (GRID RTX6000-8Q) (0000:01:>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors fr>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not app>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 0: general.architecture str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 1: general.name str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 2: llama.block_count u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 3: llama.context_length u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 4: llama.embedding_length u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 5: llama.feed_forward_length u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 6: llama.attention.head_count u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 7: llama.attention.head_count_kv u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 8: llama.rope.freq_base f32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 10: general.file_type u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 11: llama.vocab_size u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 12: llama.rope.dimension_count u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 13: tokenizer.ggml.model str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 14: tokenizer.ggml.pre str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[st>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i3>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 17: tokenizer.ggml.merges arr[st>
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 20: tokenizer.chat_template str >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - kv 21: general.quantization_version u32 >
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - type f32: 65 tensors
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - type q4_0: 225 tensors
Jan 28 15:17:06 moubuntu01 ollama[743]: llama_model_loader: - type q6_K: 1 tensors
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: file format = GGUF V3 (latest)
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: file type = Q4_0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: file size = 4.33 GiB (4.64 BPW)
Jan 28 15:17:06 moubuntu01 ollama[743]: load: printing all EOG tokens:
Jan 28 15:17:06 moubuntu01 ollama[743]: load: - 128001 ('<|end_of_text|>')
Jan 28 15:17:06 moubuntu01 ollama[743]: load: - 128009 ('<|eot_id|>')
Jan 28 15:17:06 moubuntu01 ollama[743]: load: special tokens cache size = 256
Jan 28 15:17:06 moubuntu01 ollama[743]: load: token to piece cache size = 0.8000 MB
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: arch = llama
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: vocab_only = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: no_alloc = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_ctx_train = 8192
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_embd = 4096
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_embd_inp = 4096
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_layer = 32
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_head = 32
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_head_kv = 8
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_rot = 128
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_swa = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: is_swa_any = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_embd_head_k = 128
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_embd_head_v = 128
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_gqa = 4
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_embd_k_gqa = 1024
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_embd_v_gqa = 1024
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: f_norm_eps = 0.0e+00
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: f_norm_rms_eps = 1.0e-05
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: f_clamp_kqv = 0.0e+00
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: f_max_alibi_bias = 0.0e+00
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: f_logit_scale = 0.0e+00
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: f_attn_scale = 0.0e+00
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_ff = 14336
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_expert = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_expert_used = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_expert_groups = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_group_used = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: causal attn = 1
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: pooling type = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: rope type = 0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: rope scaling = linear
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: freq_base_train = 500000.0
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: freq_scale_train = 1
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_ctx_orig_yarn = 8192
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: rope_yarn_log_mul= 0.0000
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: rope_finetuned = unknown
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: model type = 8B
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: model params = 8.03 B
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: general.name = Meta-Llama-3-8B-Instruct
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: vocab type = BPE
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_vocab = 128256
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: n_merges = 280147
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: BOS token = 128000 '<|begin_of_text|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOS token = 128009 '<|eot_id|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOT token = 128009 '<|eot_id|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: LF token = 198 'Ċ'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOG token = 128001 '<|end_of_text|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: EOG token = 128009 '<|eot_id|>'
Jan 28 15:17:06 moubuntu01 ollama[743]: print_info: max token length = 256
Jan 28 15:17:06 moubuntu01 ollama[743]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Jan 28 15:17:08 moubuntu01 ollama[743]: load_tensors: offloading 32 repeating layers to GPU
Jan 28 15:17:08 moubuntu01 ollama[743]: load_tensors: offloading output layer to GPU
Jan 28 15:17:08 moubuntu01 ollama[743]: load_tensors: offloaded 33/33 layers to GPU
Jan 28 15:17:08 moubuntu01 ollama[743]: load_tensors: CPU_Mapped model buffer size = 281.81 MiB
Jan 28 15:17:08 moubuntu01 ollama[743]: load_tensors: CUDA0 model buffer size = 4155.99 MiB
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: constructing llama_context
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: n_seq_max = 1
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: n_ctx = 4096
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: n_ctx_seq = 4096
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: n_batch = 512
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: n_ubatch = 512
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: causal_attn = 1
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: flash_attn = disabled
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: kv_unified = false
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: freq_base = 500000.0
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: freq_scale = 1
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: n_ctx_seq (4096) < n_ctx_train (8192) -- the full capacity of th>
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: CUDA_Host output buffer size = 0.50 MiB
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_kv_cache: CUDA0 KV buffer size = 512.00 MiB
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_kv_cache: size = 512.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (f1>
Jan 28 15:17:09 moubuntu01 ollama[743]: llama_context: CUDA0 compute buffer size = 300.01 MiB
Jan 28 15:17:09 moubuntu01 ollama[743]: llama_context: CUDA_Host compute buffer size = 20.01 MiB
Jan 28 15:17:09 moubuntu01 ollama[743]: llama_context: graph nodes = 1158
Jan 28 15:17:09 moubuntu01 ollama[743]: llama_context: graph splits = 2
Jan 28 15:17:09 moubuntu01 ollama[743]: time=2026-01-28T15:17:09.154Z level=INFO source=server.go:1385 msg="llama runne>
Jan 28 15:17:09 moubuntu01 ollama[743]: time=2026-01-28T15:17:09.154Z level=INFO source=sched.go:526 msg="loaded runner>
Jan 28 15:17:09 moubuntu01 ollama[743]: time=2026-01-28T15:17:09.154Z level=INFO source=server.go:1347 msg="waiting for>
Jan 28 15:17:09 moubuntu01 ollama[743]: time=2026-01-28T15:17:09.154Z level=INFO source=server.go:1385 msg="llama runne>
Jan 28 15:17:09 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:17:09 | 200 | 3.298812716s | 127.0.0.1 | POST >
Jan 28 15:22:09 moubuntu01 ollama[743]: time=2026-01-28T15:22:09.213Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:22:09 moubuntu01 ollama[743]: time=2026-01-28T15:22:09.627Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:36:45 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:36:45 | 200 | 18.178µs | 127.0.0.1 | HEAD >
Jan 28 15:36:45 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:36:45 | 200 | 91.865729ms | 127.0.0.1 | POST >
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.296Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors fr>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not app>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 0: general.architecture str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 1: general.name str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 2: llama.block_count u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 3: llama.context_length u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 4: llama.embedding_length u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 5: llama.feed_forward_length u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 6: llama.attention.head_count u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 7: llama.attention.head_count_kv u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 8: llama.rope.freq_base f32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 10: general.file_type u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 11: llama.vocab_size u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 12: llama.rope.dimension_count u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 13: tokenizer.ggml.model str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 14: tokenizer.ggml.pre str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[st>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i3>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 17: tokenizer.ggml.merges arr[st>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 20: tokenizer.chat_template str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 21: general.quantization_version u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type f32: 65 tensors
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type q4_0: 225 tensors
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type q6_K: 1 tensors
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file format = GGUF V3 (latest)
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file type = Q4_0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file size = 4.33 GiB (4.64 BPW)
Jan 28 15:36:45 moubuntu01 ollama[743]: load: printing all EOG tokens:
Jan 28 15:36:45 moubuntu01 ollama[743]: load: - 128001 ('<|end_of_text|>')
Jan 28 15:36:45 moubuntu01 ollama[743]: load: - 128009 ('<|eot_id|>')
Jan 28 15:36:45 moubuntu01 ollama[743]: load: special tokens cache size = 256
Jan 28 15:36:45 moubuntu01 ollama[743]: load: token to piece cache size = 0.8000 MB
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: arch = llama
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: vocab_only = 1
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: no_alloc = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: model type = ?B
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: model params = 8.03 B
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: general.name = Meta-Llama-3-8B-Instruct
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: vocab type = BPE
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_vocab = 128256
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_merges = 280147
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: BOS token = 128000 '<|begin_of_text|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOS token = 128009 '<|eot_id|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOT token = 128009 '<|eot_id|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: LF token = 198 'Ċ'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOG token = 128001 '<|end_of_text|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOG token = 128009 '<|eot_id|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: max token length = 256
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_load: vocab only - skipping tensors
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.582Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.583Z level=INFO source=sched.go:452 msg="system memory>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.583Z level=INFO source=sched.go:459 msg="gpu memory" i>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.583Z level=INFO source=server.go:496 msg="loading mode>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.583Z level=INFO source=device.go:240 msg="model weight>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.583Z level=INFO source=device.go:251 msg="kv cache" de>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.583Z level=INFO source=device.go:262 msg="compute grap>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.583Z level=INFO source=device.go:272 msg="total memory>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.590Z level=INFO source=runner.go:965 msg="starting go >
Jan 28 15:36:45 moubuntu01 ollama[743]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-sse42.so
Jan 28 15:36:45 moubuntu01 ollama[743]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Jan 28 15:36:45 moubuntu01 ollama[743]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 28 15:36:45 moubuntu01 ollama[743]: ggml_cuda_init: found 1 CUDA devices:
Jan 28 15:36:45 moubuntu01 ollama[743]: Device 0: GRID RTX6000-8Q, compute capability 7.5, VMM: no, ID: GPU-f254cbb0->
Jan 28 15:36:45 moubuntu01 ollama[743]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-c>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.644Z level=INFO source=ggml.go:104 msg=system CPU.0.SS>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.646Z level=INFO source=runner.go:1001 msg="Server list>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.647Z level=INFO source=runner.go:895 msg=load request=>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.647Z level=INFO source=server.go:1347 msg="waiting for>
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.647Z level=INFO source=server.go:1381 msg="waiting for>
Jan 28 15:36:45 moubuntu01 ollama[743]: ggml_backend_cuda_device_get_memory device GPU-f254cbb0-fc5b-11f0-a81b-ea3fb678>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_load_from_file_impl: using device CUDA0 (GRID RTX6000-8Q) (0000:01:>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors fr>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not app>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 0: general.architecture str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 1: general.name str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 2: llama.block_count u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 3: llama.context_length u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 4: llama.embedding_length u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 5: llama.feed_forward_length u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 6: llama.attention.head_count u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 7: llama.attention.head_count_kv u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 8: llama.rope.freq_base f32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 10: general.file_type u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 11: llama.vocab_size u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 12: llama.rope.dimension_count u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 13: tokenizer.ggml.model str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 14: tokenizer.ggml.pre str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[st>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i3>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 17: tokenizer.ggml.merges arr[st>
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 20: tokenizer.chat_template str >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 21: general.quantization_version u32 >
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type f32: 65 tensors
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type q4_0: 225 tensors
Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type q6_K: 1 tensors
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file format = GGUF V3 (latest)
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file type = Q4_0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file size = 4.33 GiB (4.64 BPW)
Jan 28 15:36:45 moubuntu01 ollama[743]: load: printing all EOG tokens:
Jan 28 15:36:45 moubuntu01 ollama[743]: load: - 128001 ('<|end_of_text|>')
Jan 28 15:36:45 moubuntu01 ollama[743]: load: - 128009 ('<|eot_id|>')
Jan 28 15:36:45 moubuntu01 ollama[743]: load: special tokens cache size = 256
Jan 28 15:36:45 moubuntu01 ollama[743]: load: token to piece cache size = 0.8000 MB
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: arch = llama
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: vocab_only = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: no_alloc = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_ctx_train = 8192
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd = 4096
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_inp = 4096
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_layer = 32
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_head = 32
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_head_kv = 8
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_rot = 128
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_swa = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: is_swa_any = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_head_k = 128
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_head_v = 128
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_gqa = 4
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_k_gqa = 1024
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_v_gqa = 1024
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_norm_eps = 0.0e+00
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_norm_rms_eps = 1.0e-05
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_clamp_kqv = 0.0e+00
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_max_alibi_bias = 0.0e+00
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_logit_scale = 0.0e+00
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_attn_scale = 0.0e+00
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_ff = 14336
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_expert = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_expert_used = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_expert_groups = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_group_used = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: causal attn = 1
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: pooling type = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope type = 0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope scaling = linear
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: freq_base_train = 500000.0
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: freq_scale_train = 1
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_ctx_orig_yarn = 8192
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope_yarn_log_mul= 0.0000
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope_finetuned = unknown
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: model type = 8B
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: model params = 8.03 B
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: general.name = Meta-Llama-3-8B-Instruct
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: vocab type = BPE
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_vocab = 128256
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_merges = 280147
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: BOS token = 128000 '<|begin_of_text|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOS token = 128009 '<|eot_id|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOT token = 128009 '<|eot_id|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: LF token = 198 'Ċ'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOG token = 128001 '<|end_of_text|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOG token = 128009 '<|eot_id|>'
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: max token length = 256
Jan 28 15:36:45 moubuntu01 ollama[743]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: offloading 32 repeating layers to GPU
Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: offloading output layer to GPU
Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: offloaded 33/33 layers to GPU
Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: CPU_Mapped model buffer size = 281.81 MiB
Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: CUDA0 model buffer size = 4155.99 MiB
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: constructing llama_context
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_seq_max = 1
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ctx = 4096
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ctx_seq = 4096
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_batch = 512
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ubatch = 512
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: causal_attn = 1
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: flash_attn = disabled
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: kv_unified = false
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: freq_base = 500000.0
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: freq_scale = 1
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ctx_seq (4096) < n_ctx_train (8192) -- the full capacity of th>
Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: CUDA_Host output buffer size = 0.50 MiB
Jan 28 15:37:16 moubuntu01 ollama[743]: llama_kv_cache: CUDA0 KV buffer size = 512.00 MiB
Jan 28 15:37:16 moubuntu01 ollama[743]: llama_kv_cache: size = 512.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (f1>
Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: CUDA0 compute buffer size = 300.01 MiB
Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: CUDA_Host compute buffer size = 20.01 MiB
Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: graph nodes = 1158
Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: graph splits = 2
Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.279Z level=INFO source=server.go:1385 msg="llama runne>
Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.279Z level=INFO source=sched.go:526 msg="loaded runner>
Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.279Z level=INFO source=server.go:1347 msg="waiting for>
Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.280Z level=INFO source=server.go:1385 msg="llama runne>
Jan 28 15:44:16 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:44:16 | 200 | 7m31s | 127.0.0.1 | POST >
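The timestamps in the load sequence above can be turned into per-step delays, which makes the stall easier to see. A minimal sketch — the timestamps are copied from the journal lines above, but the step labels and the pairing of lines are my own reading of the log:

```python
# Compute the gap between consecutive load steps, using the
# journald timestamps from the excerpt above (second resolution).
from datetime import datetime

fmt = "%b %d %H:%M:%S"
steps = [
    ("load_tensors offloaded 33/33 layers", "Jan 28 15:36:46"),
    ("llama_context constructed",           "Jan 28 15:37:15"),
    ("KV cache allocated",                  "Jan 28 15:37:16"),
    ("compute buffer allocated",            "Jan 28 15:41:37"),
    ("POST request returned 200",           "Jan 28 15:44:16"),
]

prev = None
for label, ts in steps:
    t = datetime.strptime(ts, fmt)
    if prev is not None:
        print(f"{label}: +{int((t - prev).total_seconds())}s")
    prev = t
# → llama_context constructed: +29s
# → KV cache allocated: +1s
# → compute buffer allocated: +261s
# → POST request returned 200: +159s
```

On this reading, almost all of the 7m31s request is spent between KV-cache allocation and compute-buffer allocation (over four minutes), not in token generation itself.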

<!-- gh-comment-id:3812143564 --> @mikeyo commented on GitHub (Jan 28, 2026):

Not sure if this will help.

Jan 28 15:15:24 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:15:24 | 200 | 27.536µs | 127.0.0.1 | HEAD >
Jan 28 15:15:24 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:15:24 | 200 | 2.713841ms | 127.0.0.1 | GET >
Jan 28 15:17:05 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:17:05 | 200 | 19.874µs | 127.0.0.1 | HEAD >
Jan 28 15:17:05 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:17:05 | 200 | 95.567836ms | 127.0.0.1 | POST >
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.014Z level=INFO source=server.go:429 msg="starting run>
[… llama_model_loader metadata dump and vocab-only load, identical to the output in the issue log above …]
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=sched.go:452 msg="system memory>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=sched.go:459 msg="gpu memory" i>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.323Z level=INFO source=server.go:496 msg="loading mode>
Jan 28 15:17:06 moubuntu01 ollama[743]: time=2026-01-28T15:17:06.331Z level=INFO source=runner.go:965 msg="starting go >
[… backend/CUDA initialisation and full model load, identical to the output in the issue log above …]
Jan 28 15:17:06 moubuntu01 ollama[743]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Jan 28 15:17:08 moubuntu01 ollama[743]: load_tensors: offloaded 33/33 layers to GPU
Jan 28 15:17:08 moubuntu01 ollama[743]: load_tensors: CUDA0 model buffer size = 4155.99 MiB
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_context: constructing llama_context
Jan 28 15:17:08 moubuntu01 ollama[743]: llama_kv_cache: CUDA0 KV buffer size = 512.00 MiB
Jan 28 15:17:09 moubuntu01 ollama[743]: llama_context: CUDA0 compute buffer size = 300.01 MiB
Jan 28 15:17:09 moubuntu01 ollama[743]: time=2026-01-28T15:17:09.154Z level=INFO source=server.go:1385 msg="llama runne>
Jan 28 15:17:09 moubuntu01 ollama[743]: time=2026-01-28T15:17:09.154Z level=INFO source=sched.go:526 msg="loaded runner>
Jan 28 15:17:09 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:17:09 | 200 | 3.298812716s | 127.0.0.1 | POST >
Jan 28 15:22:09 moubuntu01 ollama[743]: time=2026-01-28T15:22:09.213Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:22:09 moubuntu01 ollama[743]: time=2026-01-28T15:22:09.627Z level=INFO source=server.go:429 msg="starting run>
Jan 28 15:36:45 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:36:45 | 200 | 18.178µs | 127.0.0.1 | HEAD >
Jan 28 15:36:45 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:36:45 | 200 | 91.865729ms | 127.0.0.1 | POST >
Jan 28 15:36:45 moubuntu01 ollama[743]: time=2026-01-28T15:36:45.296Z level=INFO source=server.go:429 msg="starting run>
[… second (slow) load sequence, identical to the 15:36:45 log output in the issue above …]
28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 20: tokenizer.chat_template str > Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - kv 21: general.quantization_version u32 > Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type f32: 65 tensors Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type q4_0: 225 tensors Jan 28 15:36:45 moubuntu01 ollama[743]: llama_model_loader: - type q6_K: 1 tensors Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file format = GGUF V3 (latest) Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file type = Q4_0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: file size = 4.33 GiB (4.64 BPW) Jan 28 15:36:45 moubuntu01 ollama[743]: load: printing all EOG tokens: Jan 28 15:36:45 moubuntu01 ollama[743]: load: - 128001 ('<|end_of_text|>') Jan 28 15:36:45 moubuntu01 ollama[743]: load: - 128009 ('<|eot_id|>') Jan 28 15:36:45 moubuntu01 ollama[743]: load: special tokens cache size = 256 Jan 28 15:36:45 moubuntu01 ollama[743]: load: token to piece cache size = 0.8000 MB Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: arch = llama Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: vocab_only = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: no_alloc = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_ctx_train = 8192 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd = 4096 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_inp = 4096 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_layer = 32 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_head = 32 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_head_kv = 8 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_rot = 128 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_swa = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: is_swa_any = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_head_k = 128 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_head_v = 128 
Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_gqa = 4 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_k_gqa = 1024 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_embd_v_gqa = 1024 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_norm_eps = 0.0e+00 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_norm_rms_eps = 1.0e-05 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_clamp_kqv = 0.0e+00 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_max_alibi_bias = 0.0e+00 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_logit_scale = 0.0e+00 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: f_attn_scale = 0.0e+00 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_ff = 14336 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_expert = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_expert_used = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_expert_groups = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_group_used = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: causal attn = 1 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: pooling type = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope type = 0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope scaling = linear Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: freq_base_train = 500000.0 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: freq_scale_train = 1 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_ctx_orig_yarn = 8192 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope_yarn_log_mul= 0.0000 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: rope_finetuned = unknown Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: model type = 8B Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: model params = 8.03 B Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: general.name = Meta-Llama-3-8B-Instruct Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: vocab type = BPE Jan 28 15:36:45 moubuntu01 
ollama[743]: print_info: n_vocab = 128256 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: n_merges = 280147 Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: BOS token = 128000 '<|begin_of_text|>' Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOS token = 128009 '<|eot_id|>' Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOT token = 128009 '<|eot_id|>' Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: LF token = 198 'Ċ' Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOG token = 128001 '<|end_of_text|>' Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: EOG token = 128009 '<|eot_id|>' Jan 28 15:36:45 moubuntu01 ollama[743]: print_info: max token length = 256 Jan 28 15:36:45 moubuntu01 ollama[743]: load_tensors: loading model tensors, this can take a while... (mmap = true) Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: offloading 32 repeating layers to GPU Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: offloading output layer to GPU Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: offloaded 33/33 layers to GPU Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: CPU_Mapped model buffer size = 281.81 MiB Jan 28 15:36:46 moubuntu01 ollama[743]: load_tensors: CUDA0 model buffer size = 4155.99 MiB Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: constructing llama_context Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_seq_max = 1 Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ctx = 4096 Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ctx_seq = 4096 Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_batch = 512 Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ubatch = 512 Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: causal_attn = 1 Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: flash_attn = disabled Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: kv_unified = false Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: freq_base = 500000.0 Jan 28 15:37:15 
moubuntu01 ollama[743]: llama_context: freq_scale = 1 Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: n_ctx_seq (4096) < n_ctx_train (8192) -- the full capacity of th> Jan 28 15:37:15 moubuntu01 ollama[743]: llama_context: CUDA_Host output buffer size = 0.50 MiB Jan 28 15:37:16 moubuntu01 ollama[743]: llama_kv_cache: CUDA0 KV buffer size = 512.00 MiB Jan 28 15:37:16 moubuntu01 ollama[743]: llama_kv_cache: size = 512.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (f1> Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: CUDA0 compute buffer size = 300.01 MiB Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: CUDA_Host compute buffer size = 20.01 MiB Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: graph nodes = 1158 Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: graph splits = 2 Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.279Z level=INFO source=server.go:1385 msg="llama runne> Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.279Z level=INFO source=sched.go:526 msg="loaded runner> Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.279Z level=INFO source=server.go:1347 msg="waiting for> Jan 28 15:41:37 moubuntu01 ollama[743]: time=2026-01-28T15:41:37.280Z level=INFO source=server.go:1385 msg="llama runne> Jan 28 15:44:16 moubuntu01 ollama[743]: [GIN] 2026/01/28 - 15:44:16 | 200 | 7m31s | 127.0.0.1 | POST > lines 310-388/388 (END)
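The timestamps above already localize the stall: the KV cache is allocated at 15:37:16, but the CUDA compute buffer does not finish until 15:41:37, so over four minutes of the 7m31s request disappear inside GPU buffer allocation. A minimal sketch for pulling just the timing-relevant entries out of a saved journal dump (such as the attached jrnl.txt; the sample file and function name here are illustrative, not part of Ollama):

```shell
# Hedged sketch: extract the load-timing entries from a saved journal dump
# so the slow step stands out from the surrounding metadata noise.
slow_steps() {
  grep -E "buffer size|llama runner|\[GIN\]" "$1"
}

# Tiny inline sample mirroring the log above:
cat > /tmp/jrnl_sample.txt <<'EOF'
Jan 28 15:37:16 moubuntu01 ollama[743]: llama_kv_cache: CUDA0 KV buffer size = 512.00 MiB
Jan 28 15:41:37 moubuntu01 ollama[743]: llama_context: CUDA0 compute buffer size = 300.01 MiB
EOF
slow_steps /tmp/jrnl_sample.txt
```

Comparing the timestamps on adjacent matched lines shows where the wall-clock time actually goes.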
@rick-github commented on GitHub (Jan 28, 2026):

What's the output of

```
journalctl -u ollama --no-pager
nvidia-smi -q
```
<!-- gh-comment-id:3812785844 -->
@mikeyo commented on GitHub (Jan 28, 2026):

> What's the output of
>
> ```
> journalctl -u ollama --no-pager
> nvidia-smi -q
> ```

Here you go.

[jrnl.txt](https://github.com/user-attachments/files/24919032/jrnl.txt)
[nv.txt](https://github.com/user-attachments/files/24919031/nv.txt)

<!-- gh-comment-id:3813083685 -->
@rick-github commented on GitHub (Jan 28, 2026):

```
vGPU Software Licensed Product
    Product Name                      : NVIDIA RTX Virtual Workstation
    License Status                    : Unlicensed (Restricted)
```

Likely a [licensing issue](https://docs.nvidia.com/vgpu/latest/grid-licensing-user-guide/index.html#software-enforcement-grid-licensing:~:text=GPU%20is%20degraded%20if%20the%20VM%20fails%20to%20obtain%20a%20license%20within%2020%20minutes).

<!-- gh-comment-id:3813117093 -->
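The restricted state is easy to check for programmatically. A minimal sketch, operating on a saved `nvidia-smi -q` dump such as the attached nv.txt (the function name and sample file are illustrative):

```shell
# Hedged sketch: flag the unlicensed/restricted vGPU state in a saved
# `nvidia-smi -q` dump. An unlicensed guest runs at degraded performance
# once the licensing grace period expires, which matches the "fast at
# boot, slow after a few minutes idle" symptom in this issue.
license_state() {
  if grep -q "License Status.*Unlicensed" "$1"; then
    echo "RESTRICTED"
  else
    echo "OK"
  fi
}

# Inline sample matching the excerpt above:
cat > /tmp/nv_sample.txt <<'EOF'
    vGPU Software Licensed Product
        Product Name                      : NVIDIA RTX Virtual Workstation
        License Status                    : Unlicensed (Restricted)
EOF
license_state /tmp/nv_sample.txt
```

On a live guest the same check can run against `nvidia-smi -q` output directly.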
@mikeyo commented on GitHub (Jan 28, 2026):

Yep that's it. Thanks!

<!-- gh-comment-id:3813360443 -->
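For readers hitting the same symptom: the issue itself doesn't spell out the fix, but on Linux guests the usual licensing steps are roughly the following sketch. The token filename, the `FeatureType=2` value (RTX Virtual Workstation in current driver releases), and the paths should all be verified against NVIDIA's vGPU client licensing guide and your own license server.

```shell
# Hedged sketch of typical Linux-guest vGPU licensing steps; consult the
# NVIDIA vGPU client licensing guide for the authoritative procedure.

# 1. Install the client configuration token issued by your DLS/CLS instance.
sudo cp client_configuration_token_*.tok /etc/nvidia/ClientConfigToken/

# 2. Set the feature type in gridd.conf (2 = RTX Virtual Workstation).
sudo sed -i 's/^#\?FeatureType=.*/FeatureType=2/' /etc/nvidia/gridd.conf

# 3. Restart the licensing daemon and confirm the state changed.
sudo systemctl restart nvidia-gridd
nvidia-smi -q | grep "License Status"
```

Once the guest reports `Licensed`, performance should no longer degrade after the grace period.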

Reference: github-starred/ollama#9131