[GH-ISSUE #1495] ollama on Proxmox?? #26568

Closed
opened 2026-04-22 02:55:10 -05:00 by GiteaMirror · 19 comments

Originally created by @Paulie420 on GitHub (Dec 13, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1495

So I know this is user error, but... I can install and use Ollama on my Framework laptop (without a GPU) easily - install with the curl command and get going right away - but on a Proxmox VM with MORE RAM than my Framework, I get an error: Ollama fails at the `run` command.

Am I missing something simple that I can 'fix'? I feel like my server has more CPU than my laptop - and I'm wondering whether others are running Ollama on Proxmox without a GPU.
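For reference, "install w/ curl command" refers to Ollama's standard install script, and on a headless Debian VM the actual failure from `ollama run` can usually be read from the service log. A generic sketch (the exact error text isn't captured above):

```
# Standard install script (the "curl command" mentioned above)
curl -fsSL https://ollama.com/install.sh | sh

# Inspect why `ollama run` failed, assuming the default systemd service install
journalctl -u ollama -e
```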


@rgaidot commented on GitHub (Dec 13, 2023):

Have you deployed Ollama in a CT or a VM? On which OS? How many resources have you allocated?

FYI, I've deployed in a CT (Debian 11) with 16GB RAM and without GPU passthrough enabled.

My Proxmox runs on:

  • Ryzen 7 5800H
  • 64GB RAM DDR4
  • 500GB M.2 NVMe SSD
  • 1TB Crucial MX500 SSD

@Cybervet commented on GitHub (Dec 13, 2023):

I managed to run Ollama fine in Proxmox on an old workstation without an AVX-capable CPU. It runs slowly in a CT (without a GPU), but on a VM with GPU passthrough it runs fine. Just to mention that on the VM I compiled the code rather than just downloading it.


@Paulie420 commented on GitHub (Dec 17, 2023):

I ran it on a VM with 4 cores and 16GB of RAM - but it IS a Debian server... I was thinking maybe I need graphics packages installed? LOL - it works just fine on my GUI/Plasma laptop.

Am I forgetting something simple?


@FlPie commented on GitHub (Dec 20, 2023):

I encountered a similar issue with a Proxmox VM. I believe the cause is related to AVX, SSE4, or other advanced instruction sets.

You may be able to resolve it by changing the type of processors assigned to the VM in Proxmox.

For me, setting the CPU type to "host" in Proxmox has worked as a solution.

Additional information can be found at the Proxmox forum:
https://forum.proxmox.com/threads/avx2-and-avx-flags-on-vm.87808/
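For anyone who prefers the CLI to the web UI, the same change can be made with `qm` on the Proxmox host; a minimal sketch, where `100` is a placeholder VM ID:

```
# On the Proxmox host: set the VM's CPU type to "host" so the physical CPU's
# AVX/AVX2/SSE4 flags are exposed to the guest (VM ID 100 is just an example).
qm set 100 --cpu host

# After a full stop/start of the VM, verify the flags from inside the guest:
grep -o -w -E 'avx|avx2|sse4_1|sse4_2' /proc/cpuinfo | sort -u
```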


@abayomi185 commented on GitHub (Dec 29, 2023):

I recently set up GPU passthrough (Nvidia) to an LXC container running Ollama. If it helps anyone, here's how I did it:
https://yomis.blog/nvidia-gpu-in-proxmox-lxc/
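The post walks through the details; the rough shape of it (an illustrative sketch, not copied from the post, since device major numbers and driver setup differ per host) is bind-mounting the NVIDIA device nodes into the container config and installing the matching user-space driver inside the LXC:

```
# /etc/pve/lxc/<CTID>.conf (illustrative only; cgroup major numbers vary per host)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 508:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```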


@dhiltgen commented on GitHub (Jan 27, 2024):

We've recently added CPU variants so that Ollama can run on CPUs without AVX support. This should cover Proxmox, although you should expect a pretty massive performance hit. I would recommend enabling the host CPU type in the advanced settings, but regardless, it will work without AVX now.

https://forum.proxmox.com/threads/avx2-and-avx-flags-on-vm.87808/
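One way to confirm which CPU build the server actually picked (with or without the host CPU type enabled) is the runner list it logs at startup; a sketch assuming the default systemd install:

```
# Look for the "Dynamic LLM libraries" line, e.g. "[cpu cpu_avx cpu_avx2]";
# the server falls back to the plain "cpu" variant when the VM exposes no AVX.
journalctl -u ollama | grep -i "dynamic llm libraries"
```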


@DocMAX commented on GitHub (Feb 24, 2024):

I tried with an AMD iGPU (5800U) and an RX 5700. Can't get it to work.
I'm passing /dev/dri and /dev/kfd to an LXC container. Ollama detects ROCm but gets stuck.
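For reference, "passing /dev/dri and /dev/kfd" in a Proxmox LXC usually looks like the container config quoted later in this thread; a minimal sketch (the gid values are host-specific and must match the host's video/render groups):

```
# /etc/pve/lxc/<CTID>.conf (illustrative; adjust gids to the host's video/render groups)
dev0: /dev/kfd,gid=993
dev1: /dev/dri/renderD128,gid=104
dev2: /dev/dri/card0,gid=44
```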


@tristan-k commented on GitHub (Sep 20, 2024):

I recently tried to run [IPEX-LLM](https://github.com/intel-analytics/ipex-llm) on a Proxmox 8 host inside an Ubuntu 24.04 LXC, but for some unknown reason Ollama just gives empty answers.

./ollama serve
2024/09/20 14:34:34 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-20T14:34:34.932+02:00 level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-09-20T14:34:34.932+02:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-09-20T14:34:34.933+02:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6-ipexllm-20240920)"
time=2024-09-20T14:34:34.933+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama499157638/runners
time=2024-09-20T14:34:35.027+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2]"
[GIN] 2024/09/20 - 14:35:01 | 200 |     342.519µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/09/20 - 14:35:01 | 200 |    32.81047ms |       127.0.0.1 | POST     "/api/show"
time=2024-09-20T14:35:02.011+02:00 level=INFO source=gpu.go:168 msg="looking for compatible GPUs"
time=2024-09-20T14:35:02.012+02:00 level=WARN source=gpu.go:560 msg="unable to locate gpu dependency libraries"
time=2024-09-20T14:35:02.012+02:00 level=WARN source=gpu.go:560 msg="unable to locate gpu dependency libraries"
time=2024-09-20T14:35:02.013+02:00 level=WARN source=gpu.go:560 msg="unable to locate gpu dependency libraries"
time=2024-09-20T14:35:02.014+02:00 level=INFO source=gpu.go:280 msg="no compatible GPUs were discovered"
time=2024-09-20T14:35:02.048+02:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=43 layers.offload=0 layers.split="" memory.available="[31.6 GiB]" memory.required.full="8.4 GiB" memory.required.partial="0 B" memory.required.kv="2.6 GiB" memory.required.allocations="[8.4 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="6.3 GiB" memory.weights.nonrepeating="717.8 MiB" memory.graph.full="507.0 MiB" memory.graph.partial="1.2 GiB"
time=2024-09-20T14:35:02.053+02:00 level=INFO source=server.go:395 msg="starting llama server" cmd="/tmp/ollama499157638/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 999 --no-mmap --parallel 4 --port 36611"
time=2024-09-20T14:35:02.053+02:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-20T14:35:02.053+02:00 level=INFO source=server.go:595 msg="waiting for llama runner to start responding"
time=2024-09-20T14:35:02.053+02:00 level=INFO source=server.go:629 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1810c22" tid="131835344998976" timestamp=1726835702
INFO [main] system info | n_threads=14 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="131835344998976" timestamp=1726835702 total_threads=12
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="36611" tid="131835344998976" timestamp=1726835702
llama_model_loader: loaded meta data with 29 key-value pairs and 464 tensors from /root/.ollama/models/blobs/sha256-ff1d1fc78170d787ee1201778e2dd65ea211654ca5fb7d69b5a2e7b123a50373 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = gemma-2-9b-it
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   4:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  10:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  13:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  14:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  18:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  20:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  22:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  23:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  25:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q4_0:  294 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 108
time=2024-09-20T14:35:02.305+02:00 level=INFO source=server.go:629 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 42
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 9B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 9.24 B
llm_load_print_meta: model size       = 5.06 GiB (4.71 BPW)
llm_load_print_meta: general.name     = gemma-2-9b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_sycl_init: GGML_SYCL_FORCE_MMQ:   no
ggml_sycl_init: SYCL_USE_XMX: yes
ggml_sycl_init: found 1 SYCL devices:
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llm_load_tensors: ggml ctx size =    0.41 MiB
llm_load_tensors: offloading 42 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 43/43 layers to GPU
llm_load_tensors:      SYCL0 buffer size =  5185.21 MiB
llm_load_tensors:  SYCL_Host buffer size =   717.77 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
[SYCL] call ggml_check_sycl
ggml_check_sycl: GGML_SYCL_DEBUG: 0
ggml_check_sycl: GGML_SYCL_F16: no
found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                     Intel Arc Graphics|    1.3|    112|    1024|   32| 62228M|            1.3.29735|
llama_kv_cache_init:      SYCL0 KV buffer size =  2688.00 MiB
llama_new_context_with_model: KV self size  = 2688.00 MiB, K (f16): 1344.00 MiB, V (f16): 1344.00 MiB
llama_new_context_with_model:  SYCL_Host  output buffer size =     3.96 MiB
[1726835715] warming up the model with an empty run
llama_new_context_with_model:      SYCL0 compute buffer size =   507.00 MiB
llama_new_context_with_model:  SYCL_Host compute buffer size =    39.01 MiB
llama_new_context_with_model: graph nodes  = 1732
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="131835344998976" timestamp=1726835716
time=2024-09-20T14:35:17.119+02:00 level=INFO source=server.go:634 msg="llama runner started in 15.07 seconds"
[GIN] 2024/09/20 - 14:35:17 | 200 | 15.126045979s |       127.0.0.1 | POST     "/api/chat"
INFO [print_timings] prompt eval time     =    1379.57 ms /    15 tokens (   91.97 ms per token,    10.87 tokens per second) | n_prompt_tokens_processed=15 n_tokens_second=10.872984692287284 slot_id=0 t_prompt_processing=1379.566 t_token=91.97106666666667 task_id=3 tid="131835344998976" timestamp=1726835724
INFO [print_timings] generation eval time =       0.01 ms /     1 runs   (    0.01 ms per token, 83333.33 tokens per second) | n_decoded=1 n_tokens_second=83333.33333333333 slot_id=0 t_token=0.012 t_token_generation=0.012 task_id=3 tid="131835344998976" timestamp=1726835724
INFO [print_timings]           total time =    1379.58 ms | slot_id=0 t_prompt_processing=1379.566 t_token_generation=0.012 t_total=1379.578 task_id=3 tid="131835344998976" timestamp=1726835724
[GIN] 2024/09/20 - 14:35:24 | 200 |  1.438643598s |       127.0.0.1 | POST     "/api/chat"
INFO [print_timings] prompt eval time     =    1395.62 ms /    15 tokens (   93.04 ms per token,    10.75 tokens per second) | n_prompt_tokens_processed=15 n_tokens_second=10.747895920242014 slot_id=0 t_prompt_processing=1395.622 t_token=93.04146666666666 task_id=11 tid="131835344998976" timestamp=1726835736
INFO [print_timings] generation eval time =       0.01 ms /     1 runs   (    0.01 ms per token, 111111.11 tokens per second) | n_decoded=1 n_tokens_second=111111.1111111111 slot_id=0 t_token=0.009000000000000001 t_token_generation=0.009000000000000001 task_id=11 tid="131835344998976" timestamp=1726835736
INFO [print_timings]           total time =    1395.63 ms | slot_id=0 t_prompt_processing=1395.622 t_token_generation=0.009000000000000001 t_total=1395.631 task_id=11 tid="131835344998976" timestamp=1726835736
[GIN] 2024/09/20 - 14:35:36 | 200 |  1.540893033s |       127.0.0.1 | POST     "/api/chat"
./ollama run gemma2:9b
>>> Why is the sky blue?


>>> Why is the sky blue?

@dhiltgen commented on GitHub (Sep 24, 2024):

@tristan-k Intel GPU support isn't officially part of Ollama yet - we're tracking that via #1590

Something seems to be going wrong with the Intel GPU support, so you should probably follow up with the maintainers of [IPEX-LLM](https://github.com/intel-analytics/ipex-llm).


@havardthom commented on GitHub (Oct 28, 2024):

For anyone interested, I've added an Ollama LXC script to tteck's Proxmox Helper-Scripts. The script installs intel-basekit, builds Ollama from source, and supports Intel iGPU passthrough (though it has a very long install time). It can be run like any other Proxmox helper script: `bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/ollama.sh)"`

A script for Open WebUI LXC with optional Ollama install is also available: https://tteck.github.io/Proxmox/#open-webui-lxc
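A quick way to check whether the iGPU actually made it into the container after the script finishes (a generic sketch, not part of the script itself):

```
# Inside the Ollama LXC: the DRI render node should be present and accessible
ls -l /dev/dri

# With intel-basekit's environment loaded, the Level Zero/SYCL runtime should list the iGPU
source /opt/intel/oneapi/setvars.sh
sycl-ls
```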


@tristan-k commented on GitHub (Oct 29, 2024):

@havardthom I just installed your Ollama LXC on my Meteor Lake Intel Core Ultra 5 125H, but despite the script installing the oneAPI runtime, Ollama still uses the CPU (AVX2).

Note: The systemd service also runs on the CPU. For the debug output below I ran Ollama manually from the install directory in the LXC at /opt/ollama/ and exported the environment variables.

$ export OLLAMA_INTEL_GPU=true 
$ export OLLAMA_HOST=0.0.0.0 
$ export OLLAMA_NUM_GPU=999 
$ export SYCL_CACHE_PERSISTENT=1
$ export ZES_ENABLE_SYSMAN=1

$ cd /opt/ollama/
$ ./ollama serve
2024/10/29 21:48:08 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:true OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-29T21:48:08.257+01:00 level=INFO source=images.go:754 msg="total blobs: 5"
time=2024-10-29T21:48:08.257+01:00 level=INFO source=images.go:761 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-10-29T21:48:08.258+01:00 level=INFO source=routes.go:1236 msg="Listening on [::]:11434 (version 0.0.0)"
time=2024-10-29T21:48:08.258+01:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama3364633028/runners
time=2024-10-29T21:48:08.311+01:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 oneapi]"
time=2024-10-29T21:48:08.311+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-10-29T21:48:08.369+01:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=oneapi variant="" compute="" driver=0.0 name="Intel(R) Arc(TM) Graphics" total="0 B" available="0 B"
[GIN] 2024/10/29 - 21:48:12 | 200 |      32.205µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/10/29 - 21:48:12 | 200 |   27.761505ms |       127.0.0.1 | POST     "/api/show"
time=2024-10-29T21:48:13.101+01:00 level=INFO source=server.go:105 msg="system memory" total="4.0 GiB" free="3.9 GiB" free_swap="512.0 MiB"
time=2024-10-29T21:48:13.101+01:00 level=INFO source=memory.go:346 msg="offload to oneapi" layers.requested=-1 layers.model=27 layers.offload=0 layers.split="" memory.available="[0 B]" memory.gpu_overhead="0 B" memory.required.full="1.7 GiB" memory.required.partial="0 B" memory.required.kv="208.0 MiB" memory.required.allocations="[0 B]" memory.weights.total="1.3 GiB" memory.weights.repeating="833.4 MiB" memory.weights.nonrepeating="461.4 MiB" memory.graph.full="504.5 MiB" memory.graph.partial="965.9 MiB"
time=2024-10-29T21:48:13.102+01:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama3364633028/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b --ctx-size 2048 --batch-size 512 --embedding --threads 4 --no-mmap --parallel 1 --port 39171"
time=2024-10-29T21:48:13.102+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-10-29T21:48:13.102+01:00 level=INFO source=server.go:567 msg="waiting for llama runner to start responding"
time=2024-10-29T21:48:13.103+01:00 level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server error"
INFO [main] starting c++ runner | tid="135233673087936" timestamp=1730234893
INFO [main] build info | build=3871 commit="5382a715" tid="135233673087936" timestamp=1730234893
INFO [main] system info | n_threads=4 n_threads_batch=4 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="135233673087936" timestamp=1730234893 total_threads=4
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="3" port="39171" tid="135233673087936" timestamp=1730234893
llama_model_loader: loaded meta data with 34 key-value pairs and 288 tensors from /root/.ollama/models/blobs/sha256-7462734796d67c40ecec2ca98eddf970e171dbb6b370e43fd633ee75b69abe1b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Gemma 2.0 2b It Transformers
llama_model_loader: - kv   3:                           general.finetune str              = it-transformers
llama_model_loader: - kv   4:                           general.basename str              = gemma-2.0
llama_model_loader: - kv   5:                         general.size_label str              = 2B
llama_model_loader: - kv   6:                            general.license str              = gemma
llama_model_loader: - kv   7:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   8:                    gemma2.embedding_length u32              = 2304
llama_model_loader: - kv   9:                         gemma2.block_count u32              = 26
llama_model_loader: - kv  10:                 gemma2.feed_forward_length u32              = 9216
llama_model_loader: - kv  11:                gemma2.attention.head_count u32              = 8
llama_model_loader: - kv  12:             gemma2.attention.head_count_kv u32              = 4
llama_model_loader: - kv  13:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv  15:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  16:                          general.file_type u32              = 2
llama_model_loader: - kv  17:              gemma2.attn_logit_softcapping f32              = 50.000000
llama_model_loader: - kv  18:             gemma2.final_logit_softcapping f32              = 30.000000
llama_model_loader: - kv  19:            gemma2.attention.sliding_window u32              = 4096
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  23:                      tokenizer.ggml.scores arr[f32,256000]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  27:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  28:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  29:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  30:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  31:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
llama_model_loader: - kv  32:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  105 tensors
llama_model_loader: - type q4_0:  182 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: control-looking token: '<end_of_turn>' was not control-type; this is probably a bug in the model. its type will be overridden
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 249
time=2024-10-29T21:48:13.354+01:00 level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: token to piece cache size = 1.6014 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma2
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 2304
llm_load_print_meta: n_layer          = 26
llm_load_print_meta: n_head           = 8
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 4096
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 9216
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 2B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 2.61 B
llm_load_print_meta: model size       = 1.51 GiB (4.97 BPW)
llm_load_print_meta: general.name     = Gemma 2.0 2b It Transformers
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: EOG token        = 1 '<eos>'
llm_load_print_meta: EOG token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size =    0.13 MiB
llm_load_tensors:        CPU buffer size =  2009.68 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   208.00 MiB
llama_new_context_with_model: KV self size  =  208.00 MiB, K (f16):  104.00 MiB, V (f16):  104.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.99 MiB
llama_new_context_with_model:        CPU compute buffer size =   509.00 MiB
llama_new_context_with_model: graph nodes  = 1050
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="135233673087936" timestamp=1730234894
time=2024-10-29T21:48:14.610+01:00 level=INFO source=server.go:606 msg="llama runner started in 1.51 seconds"
[GIN] 2024/10/29 - 21:48:14 | 200 |  1.676665178s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2024/10/29 - 21:48:47 | 200 |  24.67426603s |       127.0.0.1 | POST     "/api/chat"
$ cat  /etc/pve/lxc/104.conf
# Ollama LXC
arch: amd64
cores: 4
dev0: /dev/dri/card1,gid=44
dev1: /dev/dri/renderD128,gid=104
features: keyctl=1,nesting=1
hostname: ollama
memory: 4096
net0: name=eth0,bridge=vmbr0,hwaddr=REDACTED,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-104-disk-1,size=24G
swap: 512
tags: proxmox-helper-scripts
unprivileged: 1
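
With dev0/dev1 passed through as above, a minimal sketch (assuming the gid values 44 and 104 map to the video and render groups inside the container) for confirming the render node is actually visible from the LXC:

ls -l /dev/dri                 # card1 and renderD128 should show up with the expected group owners
getent group 44 104            # check which group names those gids resolve to
id                             # the user running ollama should be a member of those groups (or run as root)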
Author
Owner

@havardthom commented on GitHub (Oct 29, 2024):

What is the output if you run this in the LXC console?

source /opt/intel/oneapi/setvars.sh
sycl-ls

Intel built-in Arc GPU should also be supported according to llama.cpp docs: https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#verified-devices

Though I don't see it on the Intel Level Zero supported platforms: https://github.com/intel/compute-runtime?tab=readme-ov-file#supported-platforms

Author
Owner

@tristan-k commented on GitHub (Oct 29, 2024):

$ source /opt/intel/oneapi/setvars.sh

:: initializing oneAPI environment ...
   bash: BASH_VERSION = 5.1.16(1)-release
   args: Using "$@" for setvars.sh arguments:
:: advisor -- latest
:: ccl -- latest
:: compiler -- latest
:: dal -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: dnnl -- latest
:: dpcpp-ct -- latest
:: dpl -- latest
:: ipp -- latest
:: ippcp -- latest
:: mkl -- latest
:: mpi -- latest
:: tbb -- latest
:: umf -- latest
:: vtune -- latest
:: oneAPI environment initialized ::
$ sycl-ls
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2  [2024.17.5.0.08_160000.xmain-hotfix]
[opencl:cpu:1] Intel(R) OpenCL, Intel(R) Core(TM) Ultra 5 125H OpenCL 3.0 (Build 0) [2024.17.5.0.08_160000.xmain-hotfix]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) Graphics OpenCL 3.0 NEO  [24.35.30872]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) Graphics 1.3 [1.3.29735]

I already checked whether the issue was related to unprivileged: 1, but changing it to unprivileged: 0 doesn't make any difference.
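
Since sycl-ls lists the Arc iGPU under both OpenCL and Level Zero, one more thing worth trying (an assumption, not a confirmed fix) is pinning the Level Zero GPU explicitly before starting the server, so the oneAPI runner cannot pick a CPU device:

export ONEAPI_DEVICE_SELECTOR=level_zero:0
export OLLAMA_INTEL_GPU=true
./ollama serve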

Author
Owner

@havardthom commented on GitHub (Oct 29, 2024):

Looking again at the first logs you posted, it looks like the GPU is being used:

time=2024-10-29T21:48:08.311+01:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 oneapi]"
time=2024-10-29T21:48:08.311+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-10-29T21:48:08.369+01:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=oneapi variant="" compute="" driver=0.0 name="Intel(R) Arc(TM) Graphics" total="0 B" available="0 B"
Author
Owner

@tristan-k commented on GitHub (Oct 29, 2024):

I thought so as well, but intel_gpu_top on the Proxmox host doesn't show any load, and htop clearly shows load on the CPU cores from /tmp/ollama264177643/runners/cpu_avx2/ollama_llama_server.

For what it's worth, I think the issue is related to Intel and its lack of support for Meteor Lake on Ubuntu 22.04. See my issue here: https://github.com/intel-analytics/ipex-llm/issues/11605. At the same time, @celesrenata got it working on NixOS with SR-IOV: https://github.com/strongtz/i915-sriov-dkms/issues/195#issuecomment-2407898215
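
To reproduce that check, roughly (assuming intel-gpu-tools is installed on the Proxmox host and the container's IP is reachable; adjust the model name to whatever is actually pulled):

# on the Proxmox host
apt install intel-gpu-tools
intel_gpu_top
# in another shell, trigger an inference against the container
curl http://<lxc-ip>:11434/api/generate -d '{"model": "gemma2:2b", "prompt": "Why is the sky blue?"}'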

Author
Owner

@havardthom commented on GitHub (Oct 29, 2024):

Same situation for me (N100 Alder Lake). I guess the iGPU just isn't supported in Ollama yet then :/ Tracking https://github.com/ollama/ollama/issues/3113
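
Independent of intel_gpu_top, a quick way to see where a loaded model ended up is ollama ps, whose PROCESSOR column reports something like 100% CPU or 100% GPU per model (exact output depends on the Ollama version):

ollama ps    # check the PROCESSOR column for the loaded model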

Author
Owner

@tristan-k commented on GitHub (Oct 29, 2024):

Maybe have a look at this: https://github.com/celesrenata/nixos-k3s-configs/blob/main/kubevirt/ipex-1x/bootstrap-ipex-fleet.sh. @celesrenata seems to be using Ubuntu 24.04.

Author
Owner

@havardthom commented on GitHub (Oct 29, 2024):

The GPU is used if I build and run the llama.cpp backend directly (https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#iii-run-the-inference), so the limitation seems to be with Ollama.

edit: related https://github.com/ollama/ollama/issues/1590#issuecomment-2406086817

edit2: watch this PR for fix https://github.com/ollama/ollama/pull/5593

[screenshot: https://github.com/user-attachments/assets/703398eb-aeb0-4963-b983-33a6675e6c4e]
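
For reference, the direct llama.cpp SYCL build mentioned above looks roughly like this (a sketch following the linked SYCL docs; cmake options and binary names have shifted between llama.cpp releases, so treat it as approximate):

source /opt/intel/oneapi/setvars.sh
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j
ZES_ENABLE_SYSMAN=1 ./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Why is the sky blue?"

With -ngl 99 all layers are offloaded, which is what makes the iGPU load visible in intel_gpu_top.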

Author
Owner

@tristan-k commented on GitHub (Feb 15, 2025):

@havardthom This Docker container seems to run really well on Intel GPUs: https://github.com/mattcurf/ollama-intel-gpu. Maybe it's possible to add it to the ollama community script (https://github.com/community-scripts/ProxmoxVE/blob/main/ct/ollama.sh)?
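
The general pattern such images rely on is passing the render node into the container and letting a SYCL/IPEX build of Ollama pick it up; a hypothetical invocation (placeholder image name, not the exact setup from that repo) would look like:

docker run -d --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 <ollama-intel-gpu-image>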
