[GH-ISSUE #5532] Ollama (CPU-only) does not run in an LXC (Host Kernel 6.8.4-3) #3455

Open
opened 2026-04-12 14:08:01 -05:00 by GiteaMirror · 14 comments

Originally created by @T-Herrmann-WI on GitHub (Jul 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5532

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I use Proxmox VE for virtualization. If I install Ollama in a Linux VM it works fine. If I install Ollama in an LXC (host kernel 6.8.4-3) it does not work with CPU-only inference.

ollama run tinyllama
Error: timed out waiting for llama runner to start - progress 1.0

For an LXC with Ollama and an Nvidia GPU it works, but not with CPU only.

It makes no difference whether I install it natively (curl -fsSL https://ollama.com/install.sh | sh) or use Docker. I have no idea what the problem is; maybe it is a kernel issue.
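
A quick way to narrow this down is to compare how many CPUs the guest actually sees with what Proxmox assigns to it (a rough sketch; 100 stands in for the real container ID):

nproc
grep -c ^processor /proc/cpuinfo
pct config 100 | grep cores    # on the Proxmox host

Whether these report the assigned cores or the host's full CPU count is worth comparing between the LXC and the VM.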

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 3,14,15
Off-line CPU(s) list: 0-2,4-13,16-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7282 16-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 104%
CPU max MHz: 2800,0000
CPU min MHz: 1500,0000
BogoMIPS: 5600,11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_e

OS

Linux

GPU

No response

CPU

AMD

Ollama version

0.1.48

GiteaMirror added the linux, performance, needs more info, bug labels 2026-04-12 14:08:02 -05:00

@jmorganca commented on GitHub (Jul 7, 2024):

Sorry about this - do you have the logs handy? journalctl -fu ollama should do it with the standard installer
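
If Ollama was installed via Docker instead, the logs live with the container rather than with systemd (a sketch; ollama is the default container name from the install docs, adjust as needed):

journalctl -u ollama --no-pager | tail -n 200    # native/systemd install
docker logs ollama 2>&1 | tail -n 200            # Docker install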


@T-Herrmann-WI commented on GitHub (Jul 7, 2024):

Here are the logs:

journalctl -fu ollama
Jul 07 18:52:41 DIZ-UMMD-C6006-ETL ollama[716]: llama_new_context_with_model: graph nodes = 710
Jul 07 18:52:41 DIZ-UMMD-C6006-ETL ollama[716]: llama_new_context_with_model: graph splits = 1
Jul 07 18:52:47 DIZ-UMMD-C6006-ETL ollama[716]: time=2024-07-07T18:52:47.061+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server not responding"
Jul 07 18:52:48 DIZ-UMMD-C6006-ETL ollama[716]: time=2024-07-07T18:52:48.233+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
Jul 07 18:53:43 DIZ-UMMD-C6006-ETL ollama[716]: time=2024-07-07T18:53:43.242+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server not responding"
Jul 07 18:53:43 DIZ-UMMD-C6006-ETL ollama[716]: time=2024-07-07T18:53:43.495+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
Jul 07 18:57:01 DIZ-UMMD-C6006-ETL ollama[716]: time=2024-07-07T18:57:01.444+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server not responding"
Jul 07 18:57:01 DIZ-UMMD-C6006-ETL ollama[716]: time=2024-07-07T18:57:01.695+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
Jul 07 18:57:41 DIZ-UMMD-C6006-ETL ollama[716]: time=2024-07-07T18:57:41.952+02:00 level=ERROR source=sched.go:388 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 1.00 - "
Jul 07 18:57:41 DIZ-UMMD-C6006-ETL ollama[716]: [GIN] 2024/07/07 - 18:57:41 | 500 | 5m0s | 127.0.0.1 | POST "/api/chat"


@T-Herrmann-WI commented on GitHub (Jul 26, 2024):

Both LXC + Docker and LXC alone without Docker lead to very slow responses, even with the small test model tinyllama:latest. The same configuration in a VM, with or without Docker, is much faster on CPU. I think running Ollama inside an LXC is important for getting past GPU RAM limits with large LLMs and for providing CPUs in an over-provisioning scenario. This also happens with the current version, ollama:0.3.0.


@havardthom commented on GitHub (Oct 28, 2024):

For anyone interested, I've added an Ollama LXC script to tteck's Proxmox Helper-Scripts. The script installs intel-basekit, builds Ollama from source, and supports Intel iGPU passthrough (though it has a very long install time). It can be run like any other Proxmox helper script: bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/ollama.sh)"

A script for Open WebUI LXC with optional Ollama install is also available: https://tteck.github.io/Proxmox/#open-webui-lxc


@dhiltgen commented on GitHub (Nov 6, 2024):

Please give 0.4.0 a try and let us know if it does a better job getting the default thread count right and generates good performance with the cores mapped to the VM.
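
If the default thread count still comes out wrong inside the container, it can also be pinned per request through the API options (a minimal sketch, not a confirmed fix; 8 matches the cores assigned to the test LXC below):

curl http://localhost:11434/api/generate -d '{"model": "tinyllama", "prompt": "Why is the sky blue?", "options": {"num_thread": 8}}'

The same value can be set with PARAMETER num_thread 8 in a Modelfile.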


@T-Herrmann-WI commented on GitHub (Nov 6, 2024):

Dear @dhiltgen, today I tested 0.4.0-rc8 in an LXC with 8 cores and the tiny model. It was unusable: all 8 cores were at 100%, but it was slow and produced no output. In the VM with 8 cores it was fast. This goes well beyond some dual-socket NUMA problem.


@dhiltgen commented on GitHub (Nov 6, 2024):

Sorry to hear that. Please share server logs for the host and VM scenarios so we can see what the difference is.


@T-Herrmann-WI commented on GitHub (Nov 7, 2024):

You mean journalctl -fu ollama?


@T-Herrmann-WI commented on GitHub (Nov 9, 2024):

Dear @dhiltgen, journalctl -fu ollama produces no output in either the VM or the LXC when Ollama runs as a Docker container.


@dhiltgen commented on GitHub (Nov 12, 2024):

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues

When running in a container, the logs are going to be associated with the container. If you followed our guide, something like docker logs ollama should work.


@T-Herrmann-WI commented on GitHub (Nov 13, 2024):

Dear @dhiltgen, OK, here are the logs of Ollama 0.4.1:

Ollama docker container inside the LXC:

docker logs b8349f36daf9
2024/11/09 12:30:43 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-09T12:30:43.571Z level=INFO source=images.go:755 msg="total blobs: 16"
time=2024-11-09T12:30:43.572Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-09T12:30:43.574Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.1)"
time=2024-11-09T12:30:43.578Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-11-09T12:30:43.580Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-09T12:30:43.621Z level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-11-09T12:30:43.621Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="2267.3 GiB" available="2092.2 GiB"
[GIN] 2024/11/09 - 12:31:08 | 200 | 159.092µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 12:31:08 | 200 | 3.526201ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/09 - 12:31:13 | 200 | 696.321µs | 172.19.0.1 | GET "/api/tags"
[GIN] 2024/11/09 - 12:31:21 | 200 | 21.452µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 12:31:21 | 200 | 25.225717ms | 127.0.0.1 | POST "/api/show"
time=2024-11-09T12:31:21.276Z level=INFO source=server.go:105 msg="system memory" total="2267.3 GiB" free="2091.5 GiB" free_swap="0 B"
time=2024-11-09T12:31:21.278Z level=INFO source=memory.go:343 msg="offload to cpu" layers.requested=-1 layers.model=23 layers.offload=0 layers.split="" memory.available="[2091.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.3 GiB" memory.required.partial="0 B" memory.required.kv="176.0 MiB" memory.required.allocations="[1.3 GiB]" memory.weights.total="696.1 MiB" memory.weights.repeating="644.8 MiB" memory.weights.nonrepeating="51.3 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="546.3 MiB"
time=2024-11-09T12:31:21.279Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 --ctx-size 8192 --batch-size 512 --threads 256 --no-mmap --parallel 4 --port 35587"
time=2024-11-09T12:31:21.280Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-09T12:31:21.280Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-09T12:31:21.281Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-09T12:31:21.304Z level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-09T12:31:21.305Z level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=256
time=2024-11-09T12:31:21.306Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:35587"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = TinyLlama
llama_model_loader: - kv 2: llama.context_length u32 = 2048
llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
llama_model_loader: - kv 4: llama.block_count u32 = 22
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q4_0: 155 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 22
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 5632
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 1B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 1.10 B
llm_load_print_meta: model size = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name = TinyLlama
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 2 '
'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 2 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 ''
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.09 MiB
llm_load_tensors: CPU buffer size = 606.53 MiB
time=2024-11-09T12:31:21.533Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 176.00 MiB
llama_new_context_with_model: KV self size = 176.00 MiB, K (f16): 88.00 MiB, V (f16): 88.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.52 MiB
llama_new_context_with_model: CPU compute buffer size = 544.01 MiB
llama_new_context_with_model: graph nodes = 710
llama_new_context_with_model: graph splits = 1
time=2024-11-09T12:31:23.036Z level=INFO source=server.go:601 msg="llama runner started in 1.76 seconds"
[GIN] 2024/11/09 - 12:31:23 | 200 | 1.77915024s | 127.0.0.1 | POST "/api/generate"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = TinyLlama
llama_model_loader: - kv 2: llama.context_length u32 = 2048
llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
llama_model_loader: - kv 4: llama.block_count u32 = 22
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q4_0: 155 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 1.10 B
llm_load_print_meta: model size = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name = TinyLlama
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 2 '
'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 2 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 ''
llm_load_print_meta: max token length = 48
llama_model_load: vocab only - skipping tensors
[GIN] 2024/11/09 - 12:31:48 | 200 | 23.110422176s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/11/09 - 12:34:26 | 200 | 23.566µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 12:34:26 | 200 | 681.658µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/09 - 12:34:52 | 200 | 22.744µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 12:34:52 | 200 | 1.528348ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/09 - 12:37:05 | 200 | 19.59µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 12:37:05 | 200 | 10.423508ms | 127.0.0.1 | POST "/api/show"
time=2024-11-09T12:37:05.505Z level=INFO source=server.go:105 msg="system memory" total="2267.3 GiB" free="2091.4 GiB" free_swap="0 B"
time=2024-11-09T12:37:05.506Z level=INFO source=memory.go:343 msg="offload to cpu" layers.requested=-1 layers.model=23 layers.offload=0 layers.split="" memory.available="[2091.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.3 GiB" memory.required.partial="0 B" memory.required.kv="176.0 MiB" memory.required.allocations="[1.3 GiB]" memory.weights.total="696.1 MiB" memory.weights.repeating="644.8 MiB" memory.weights.nonrepeating="51.3 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="546.3 MiB"
time=2024-11-09T12:37:05.506Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 --ctx-size 8192 --batch-size 512 --threads 256 --no-mmap --parallel 4 --port 46761"
time=2024-11-09T12:37:05.507Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-09T12:37:05.507Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-09T12:37:05.507Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-09T12:37:05.511Z level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-09T12:37:05.511Z level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=256
time=2024-11-09T12:37:05.511Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:46761"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = TinyLlama
llama_model_loader: - kv 2: llama.context_length u32 = 2048
llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
llama_model_loader: - kv 4: llama.block_count u32 = 22
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q4_0: 155 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 22
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 5632
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 1B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 1.10 B
llm_load_print_meta: model size = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name = TinyLlama
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 2 '
'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 2 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 ''
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.09 MiB
llm_load_tensors: CPU buffer size = 606.53 MiB
time=2024-11-09T12:37:05.758Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 176.00 MiB
llama_new_context_with_model: KV self size = 176.00 MiB, K (f16): 88.00 MiB, V (f16): 88.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.52 MiB
llama_new_context_with_model: CPU compute buffer size = 544.01 MiB
llama_new_context_with_model: graph nodes = 710
llama_new_context_with_model: graph splits = 1
time=2024-11-09T12:37:06.260Z level=INFO source=server.go:601 msg="llama runner started in 0.75 seconds"
[GIN] 2024/11/09 - 12:37:06 | 200 | 773.946189ms | 127.0.0.1 | POST "/api/generate"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = TinyLlama
llama_model_loader: - kv 2: llama.context_length u32 = 2048
llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
llama_model_loader: - kv 4: llama.block_count u32 = 22
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q4_0: 155 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 1.10 B
llm_load_print_meta: model size = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name = TinyLlama
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 2 '
'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 2 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 ''
llm_load_print_meta: max token length = 48
llama_model_load: vocab only - skipping tensors
[GIN] 2024/11/09 - 12:37:25 | 200 | 4.415673937s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/11/09 - 16:52:37 | 200 | 20.411µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/09 - 16:52:37 | 200 | 2.986319ms | 127.0.0.1 | GET "/api/tags"

Ollama docker container inside the VM:

docker logs 3e17ea4c52cc
2024/11/13 07:47:08 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-13T07:47:08.878Z level=INFO source=images.go:755 msg="total blobs: 13"
time=2024-11-13T07:47:08.878Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-13T07:47:08.878Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.1)"
time=2024-11-13T07:47:08.879Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"
time=2024-11-13T07:47:08.879Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-13T07:47:08.882Z level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-11-13T07:47:08.882Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="349.6 GiB" available="335.7 GiB"
[GIN] 2024/11/13 - 07:47:39 | 200 | 619.245µs | 172.19.0.1 | GET "/api/tags"
[GIN] 2024/11/13 - 07:48:05 | 200 | 32.81µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/13 - 07:48:05 | 200 | 1.473676ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/13 - 07:48:19 | 200 | 39.842µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/13 - 07:48:19 | 200 | 6.040854ms | 127.0.0.1 | POST "/api/show"
time=2024-11-13T07:48:19.471Z level=INFO source=server.go:105 msg="system memory" total="349.6 GiB" free="335.1 GiB" free_swap="4.0 GiB"
time=2024-11-13T07:48:19.471Z level=INFO source=memory.go:343 msg="offload to cpu" layers.requested=-1 layers.model=23 layers.offload=0 layers.split="" memory.available="[335.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.3 GiB" memory.required.partial="0 B" memory.required.kv="176.0 MiB" memory.required.allocations="[1.3 GiB]" memory.weights.total="696.1 MiB" memory.weights.repeating="644.8 MiB" memory.weights.nonrepeating="51.3 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="546.3 MiB"
time=2024-11-13T07:48:19.472Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 --ctx-size 8192 --batch-size 512 --threads 16 --no-mmap --parallel 4 --port 35667"
time=2024-11-13T07:48:19.472Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-13T07:48:19.472Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-13T07:48:19.473Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-13T07:48:19.477Z level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-13T07:48:19.477Z level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=16
time=2024-11-13T07:48:19.478Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:35667"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = TinyLlama
llama_model_loader: - kv 2: llama.context_length u32 = 2048
llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
llama_model_loader: - kv 4: llama.block_count u32 = 22
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q4_0: 155 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 22
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 5632
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 1B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 1.10 B
llm_load_print_meta: model size = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name = TinyLlama
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 2 '
'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 2 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 ''
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.09 MiB
llm_load_tensors: CPU buffer size = 606.53 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 176.00 MiB
llama_new_context_with_model: KV self size = 176.00 MiB, K (f16): 88.00 MiB, V (f16): 88.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.52 MiB
llama_new_context_with_model: CPU compute buffer size = 544.01 MiB
llama_new_context_with_model: graph nodes = 710
llama_new_context_with_model: graph splits = 1
time=2024-11-13T07:48:19.724Z level=INFO source=server.go:601 msg="llama runner started in 0.25 seconds"
[GIN] 2024/11/13 - 07:48:19 | 200 | 271.311319ms | 127.0.0.1 | POST "/api/generate"
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = TinyLlama
llama_model_loader: - kv 2: llama.context_length u32 = 2048
llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
llama_model_loader: - kv 4: llama.block_count u32 = 22
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q4_0: 155 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 1.10 B
llm_load_print_meta: model size = 606.53 MiB (4.63 BPW)
llm_load_print_meta: general.name = TinyLlama
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 2 '
'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 2 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOG token = 2 ''
llm_load_print_meta: max token length = 48
llama_model_load: vocab only - skipping tensors
[GIN] 2024/11/13 - 07:48:23 | 200 | 1.225339054s | 127.0.0.1 | POST "/api/chat"
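
The relevant difference between the two dumps is in the runner command line: inside the LXC the runner is started with --threads 256, inside the VM with --threads 16. A quick way to pull that out of both containers (a sketch, using the container IDs shown above):

docker logs b8349f36daf9 2>&1 | grep -oE 'threads[ =][0-9]+' | sort -u    # LXC
docker logs 3e17ea4c52cc 2>&1 | grep -oE 'threads[ =][0-9]+' | sort -u    # VM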

source=.:0 msg="Server listening on 127.0.0.1:46761" llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = TinyLlama llama_model_loader: - kv 2: llama.context_length u32 = 2048 llama_model_loader: - kv 3: llama.embedding_length u32 = 2048 llama_model_loader: - kv 4: llama.block_count u32 = 22 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 11: general.file_type u32 = 2 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2 llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m... 
llama_model_loader: - kv 22: general.quantization_version u32 = 2 llama_model_loader: - type f32: 45 tensors llama_model_loader: - type q4_0: 155 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 3 llm_load_vocab: token to piece cache size = 0.1684 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 2048 llm_load_print_meta: n_embd = 2048 llm_load_print_meta: n_layer = 22 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 4 llm_load_print_meta: n_rot = 64 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 64 llm_load_print_meta: n_embd_head_v = 64 llm_load_print_meta: n_gqa = 8 llm_load_print_meta: n_embd_k_gqa = 256 llm_load_print_meta: n_embd_v_gqa = 256 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 5632 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 2048 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 1B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 1.10 B llm_load_print_meta: model size = 606.53 MiB (4.63 BPW) llm_load_print_meta: general.name = TinyLlama llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: PAD token = 2 '</s>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_print_meta: EOG token = 2 '</s>' llm_load_print_meta: max token length = 48 llm_load_tensors: ggml ctx size = 0.09 MiB llm_load_tensors: CPU buffer size = 606.53 MiB time=2024-11-09T12:37:05.758Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model" llama_new_context_with_model: n_ctx = 8192 llama_new_context_with_model: n_batch = 2048 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CPU KV buffer size = 176.00 MiB llama_new_context_with_model: KV self size = 176.00 MiB, K (f16): 88.00 MiB, V (f16): 88.00 MiB llama_new_context_with_model: CPU output buffer size = 0.52 MiB llama_new_context_with_model: CPU compute buffer size = 544.01 MiB llama_new_context_with_model: graph nodes = 710 llama_new_context_with_model: graph splits = 1 time=2024-11-09T12:37:06.260Z level=INFO source=server.go:601 msg="llama runner started in 0.75 seconds" [GIN] 2024/11/09 - 12:37:06 | 200 | 773.946189ms | 127.0.0.1 | POST "/api/generate" llama_model_loader: loaded meta data with 23 key-value pairs and 
201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = TinyLlama llama_model_loader: - kv 2: llama.context_length u32 = 2048 llama_model_loader: - kv 3: llama.embedding_length u32 = 2048 llama_model_loader: - kv 4: llama.block_count u32 = 22 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 11: general.file_type u32 = 2 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2 llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m... 
llama_model_loader: - kv 22: general.quantization_version u32 = 2 llama_model_loader: - type f32: 45 tensors llama_model_loader: - type q4_0: 155 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 3 llm_load_vocab: token to piece cache size = 0.1684 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: vocab_only = 1 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = all F32 llm_load_print_meta: model params = 1.10 B llm_load_print_meta: model size = 606.53 MiB (4.63 BPW) llm_load_print_meta: general.name = TinyLlama llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: PAD token = 2 '</s>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_print_meta: EOG token = 2 '</s>' llm_load_print_meta: max token length = 48 llama_model_load: vocab only - skipping tensors [GIN] 2024/11/09 - 12:37:25 | 200 | 4.415673937s | 127.0.0.1 | POST "/api/chat" [GIN] 2024/11/09 - 16:52:37 | 200 | 20.411µs | 127.0.0.1 | HEAD "/" [GIN] 2024/11/09 - 16:52:37 | 200 | 2.986319ms | 127.0.0.1 | GET "/api/tags" ## Ollama docker container inside the VM: docker logs 3e17ea4c52cc 2024/11/13 07:47:08 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2024-11-13T07:47:08.878Z level=INFO source=images.go:755 msg="total blobs: 13" time=2024-11-13T07:47:08.878Z level=INFO source=images.go:762 msg="total unused blobs removed: 0" time=2024-11-13T07:47:08.878Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.1)" time=2024-11-13T07:47:08.879Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]" time=2024-11-13T07:47:08.879Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs" time=2024-11-13T07:47:08.882Z level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered" time=2024-11-13T07:47:08.882Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="349.6 GiB" available="335.7 GiB" [GIN] 2024/11/13 - 07:47:39 | 200 | 619.245µs | 172.19.0.1 | GET "/api/tags" [GIN] 2024/11/13 - 07:48:05 | 200 | 32.81µs | 127.0.0.1 | HEAD "/" [GIN] 2024/11/13 - 07:48:05 | 200 | 1.473676ms | 127.0.0.1 | GET "/api/tags" [GIN] 2024/11/13 - 07:48:19 | 200 | 39.842µs | 127.0.0.1 | HEAD "/" [GIN] 2024/11/13 - 07:48:19 | 200 
| 6.040854ms | 127.0.0.1 | POST "/api/show" time=2024-11-13T07:48:19.471Z level=INFO source=server.go:105 msg="system memory" total="349.6 GiB" free="335.1 GiB" free_swap="4.0 GiB" time=2024-11-13T07:48:19.471Z level=INFO source=memory.go:343 msg="offload to cpu" layers.requested=-1 layers.model=23 layers.offload=0 layers.split="" memory.available="[335.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.3 GiB" memory.required.partial="0 B" memory.required.kv="176.0 MiB" memory.required.allocations="[1.3 GiB]" memory.weights.total="696.1 MiB" memory.weights.repeating="644.8 MiB" memory.weights.nonrepeating="51.3 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="546.3 MiB" time=2024-11-13T07:48:19.472Z level=INFO source=server.go:383 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 --ctx-size 8192 --batch-size 512 --threads 16 --no-mmap --parallel 4 --port 35667" time=2024-11-13T07:48:19.472Z level=INFO source=sched.go:449 msg="loaded runners" count=1 time=2024-11-13T07:48:19.472Z level=INFO source=server.go:562 msg="waiting for llama runner to start responding" time=2024-11-13T07:48:19.473Z level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error" time=2024-11-13T07:48:19.477Z level=INFO source=runner.go:863 msg="starting go runner" time=2024-11-13T07:48:19.477Z level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=16 time=2024-11-13T07:48:19.478Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:35667" llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = TinyLlama llama_model_loader: - kv 2: llama.context_length u32 = 2048 llama_model_loader: - kv 3: llama.embedding_length u32 = 2048 llama_model_loader: - kv 4: llama.block_count u32 = 22 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 11: general.file_type u32 = 2 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... 
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2 llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m... llama_model_loader: - kv 22: general.quantization_version u32 = 2 llama_model_loader: - type f32: 45 tensors llama_model_loader: - type q4_0: 155 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 3 llm_load_vocab: token to piece cache size = 0.1684 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 2048 llm_load_print_meta: n_embd = 2048 llm_load_print_meta: n_layer = 22 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 4 llm_load_print_meta: n_rot = 64 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 64 llm_load_print_meta: n_embd_head_v = 64 llm_load_print_meta: n_gqa = 8 llm_load_print_meta: n_embd_k_gqa = 256 llm_load_print_meta: n_embd_v_gqa = 256 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 5632 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 2048 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 1B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 1.10 B llm_load_print_meta: model size = 606.53 MiB (4.63 BPW) llm_load_print_meta: general.name = TinyLlama llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: PAD token = 2 '</s>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_print_meta: EOG token = 2 '</s>' llm_load_print_meta: max token length = 48 llm_load_tensors: ggml ctx size = 0.09 MiB llm_load_tensors: CPU buffer size = 606.53 MiB llama_new_context_with_model: n_ctx = 8192 llama_new_context_with_model: n_batch = 2048 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CPU KV buffer size = 176.00 MiB llama_new_context_with_model: KV self size = 176.00 MiB, K (f16): 88.00 MiB, V (f16): 88.00 MiB llama_new_context_with_model: CPU output buffer size = 0.52 MiB llama_new_context_with_model: CPU compute buffer size = 544.01 MiB llama_new_context_with_model: graph nodes = 710 llama_new_context_with_model: graph splits = 1 
time=2024-11-13T07:48:19.724Z level=INFO source=server.go:601 msg="llama runner started in 0.25 seconds" [GIN] 2024/11/13 - 07:48:19 | 200 | 271.311319ms | 127.0.0.1 | POST "/api/generate" llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /root/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = TinyLlama llama_model_loader: - kv 2: llama.context_length u32 = 2048 llama_model_loader: - kv 3: llama.embedding_length u32 = 2048 llama_model_loader: - kv 4: llama.block_count u32 = 22 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 11: general.file_type u32 = 2 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2 llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m... llama_model_loader: - kv 22: general.quantization_version u32 = 2 llama_model_loader: - type f32: 45 tensors llama_model_loader: - type q4_0: 155 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 3 llm_load_vocab: token to piece cache size = 0.1684 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: vocab_only = 1 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = all F32 llm_load_print_meta: model params = 1.10 B llm_load_print_meta: model size = 606.53 MiB (4.63 BPW) llm_load_print_meta: general.name = TinyLlama llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: PAD token = 2 '</s>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_print_meta: EOG token = 2 '</s>' llm_load_print_meta: max token length = 48 llama_model_load: vocab only - skipping tensors [GIN] 2024/11/13 - 07:48:23 | 200 | 1.225339054s | 127.0.0.1 | POST "/api/chat"

@dhiltgen commented on GitHub (Nov 19, 2024):

Looking at the logs for the `Ollama docker container inside the LXC`, we seem to be getting the default thread count wrong: `--threads 256` is going to lead to thrashing when there are only 32 physical cores total across the two sockets.

The `Ollama docker container inside the VM` with `--threads 16` is closer to optimal, given our lack of NUMA support at present.

I'd like to understand why we're getting the core count wrong inside the LXC so we can fix the default thread setting. Maybe the following will help shed some light:

```
cat /proc/cpuinfo | grep "^physical id" | sort | uniq -c
cat /proc/cpuinfo | grep "^core id" | sort | uniq -c
```

@T-Herrmann-WI commented on GitHub (Nov 19, 2024):

Dear @dhiltgen, here are the logs:

LXC Logs:

cat /proc/cpuinfo |grep "^physical id" | sort | uniq -c
2 physical id : 0
6 physical id : 1
cat /proc/cpuinfo |grep "^core id" | sort | uniq -c
1 core id : 100
1 core id : 114
1 core id : 124
1 core id : 13
1 core id : 17
1 core id : 44
1 core id : 76
1 core id : 88

VM Logs:

cat /proc/cpuinfo |grep "^physical id" | sort | uniq -c
8 physical id : 0
8 physical id : 1
cat /proc/cpuinfo |grep "^core id" | sort | uniq -c
2 core id : 0
2 core id : 1
2 core id : 2
2 core id : 3
2 core id : 4
2 core id : 5
2 core id : 6
2 core id : 7

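For reference, a few additional standard commands (not part of the request above; the cgroup path assumes the guest uses cgroup v2) show how many CPUs the container actually exposes, which is the number any default thread heuristic has to work from:

```
nproc                                      # logical CPUs usable by the process
getconf _NPROCESSORS_ONLN                  # same figure via glibc
cat /sys/devices/system/cpu/online         # the kernel's online CPU list
cat /sys/fs/cgroup/cpuset.cpus.effective   # cpuset limit (cgroup v2 path; assumption)
```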

@T-Herrmann-WI commented on GitHub (Apr 20, 2025):

Dear @dhiltgen, any news on a fix for this bug?
I tested it last week with Proxmox 8.4 and Ollama 6.5. I ran tinyllama as a test: it loaded quickly but never returned a result, even though CPU load sat at 100%. It's frustrating.

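One quick way to check whether the oversized thread default is still in play on a newer build is to look for the value the scheduler passes to the runner. A sketch assuming the native systemd install (for the Docker setup, the same line appears in `docker logs`):

```
# Show the most recent thread count handed to ollama_llama_server.
journalctl -u ollama --no-pager | grep -oE -e '--threads [0-9]+' | tail -n 1
```

If the value still far exceeds the physical core count visible in the container, the thrashing described above remains the likely cause of 100% CPU with no output.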
Reference: github-starred/ollama#3455