[GH-ISSUE #12088] Ollama vs TEI embeddings performance gap with qwen3-Embedding #8031

Open
opened 2026-04-12 20:16:21 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @mbretter on GitHub (Aug 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12088

What is the issue?

Hey,

I recently tested TEI (Text Embeddings Inference) against Ollama for generating embeddings, using the same model (Qwen3-Embedding-0.6B) on the same hardware (RTX 4060).

I found a fairly large performance gap between Ollama and TEI.

TEI:

time curl 192.168.201.2:11001/embed  -X POST -d '{"inputs": "What is Deep Learning?"}'  -H 'Content-Type: application/json'
real	0m0,020s
user	0m0,001s
sys	0m0,009s

around 20 ms

Ollama:

time curl 192.168.201.2:11434/api/embed -X POST  -d '{"input": "What is Deep Learning?","model":"hf.co/Qwen/Qwen3-Embedding-0.6B-GGUF:Q8_0"}'     -H 'Content-Type: application/json'
real	0m0,099s
user	0m0,006s
sys	0m0,009s

around 99 ms; Ollama is almost five times slower than TEI!
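
A single time curl also includes connection setup and is noisy at this scale, so here is a minimal sketch (untested as written) that averages repeated warm requests against the same endpoints and payloads as above:

# Average 50 warm requests per endpoint; send one request beforehand so
# the Ollama model is already loaded, since the first request pays load time.
for i in $(seq 50); do
  curl -s -o /dev/null -w '%{time_total}\n' 192.168.201.2:11001/embed \
    -X POST -H 'Content-Type: application/json' \
    -d '{"inputs": "What is Deep Learning?"}'
done | awk '{sum+=$1} END {print "TEI avg:", sum/NR*1000, "ms"}'

for i in $(seq 50); do
  curl -s -o /dev/null -w '%{time_total}\n' 192.168.201.2:11434/api/embed \
    -X POST -H 'Content-Type: application/json' \
    -d '{"input": "What is Deep Learning?","model":"hf.co/Qwen/Qwen3-Embedding-0.6B-GGUF:Q8_0"}'
done | awk '{sum+=$1} END {print "Ollama avg:", sum/NR*1000, "ms"}'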

Is there any setting that might improve Ollama's embedding performance, or am I missing something?
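
For reference, the knobs I'm aware of but have not benchmarked: /api/embed accepts per-request options (e.g. a smaller num_ctx, so less KV cache is allocated) and keep_alive, and the server reads env vars such as OLLAMA_FLASH_ATTENTION=1 (whether that applies to embedding models I don't know; the logs below show flash_attn = 0 either way). A sketch:

# Untested: shrink the context window for short embedding inputs and
# keep the model resident between requests.
curl 192.168.201.2:11434/api/embed -X POST -H 'Content-Type: application/json' \
  -d '{
        "input": "What is Deep Learning?",
        "model": "hf.co/Qwen/Qwen3-Embedding-0.6B-GGUF:Q8_0",
        "keep_alive": "1h",
        "options": {"num_ctx": 512}
      }'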

Relevant log output

ollama ps:

hf.co/Qwen/Qwen3-Embedding-0.6B-GGUF:Q8_0    9d61ad4da73a    1.8 GB    100% GPU     4096       4 minutes from now    


tei /info:

{
  "model_id": "Qwen/Qwen3-Embedding-0.6B",
  "model_sha": null,
  "model_dtype": "float16",
  "model_type": {
    "embedding": {
      "pooling": "last_token"
    }
  },
  "max_concurrent_requests": 512,
  "max_input_length": 32768,
  "max_batch_tokens": 16384,
  "max_batch_requests": null,
  "max_client_batch_size": 32,
  "auto_truncate": false,
  "tokenization_workers": 4,
  "version": "1.8.0",
  "sha": "2bff275313a7b93e9a5d4dc1dbfdce8e72c7d820",
  "docker_label": "sha-2bff275"
}



root@zeus:~# nvidia-smi 
Tue Aug 26 16:51:59 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.65.06              Driver Version: 580.65.06      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060        On  |   00000000:01:00.0 Off |                  N/A |
|  0%   39C    P8            N/A  /  115W |    3510MiB /   8188MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A           63709      C   text-embeddings-router                 1296MiB |
|    0   N/A  N/A           63710      C   text-embeddings-router                  720MiB |
|    0   N/A  N/A           91350      C   /usr/local/bin/ollama                  1474MiB |
+-----------------------------------------------------------------------------------------+


ollama server logs

Aug 26 16:55:24 zeus ollama[92770]: time=2025-08-26T16:55:24.569+02:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY>
Aug 26 16:55:24 zeus ollama[92770]: time=2025-08-26T16:55:24.573+02:00 level=INFO source=images.go:477 msg="total blobs: 76"
Aug 26 16:55:24 zeus ollama[92770]: time=2025-08-26T16:55:24.574+02:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
Aug 26 16:55:24 zeus ollama[92770]: time=2025-08-26T16:55:24.575+02:00 level=INFO source=routes.go:1384 msg="Listening on [::]:11434 (version 0.11.7)"
Aug 26 16:55:24 zeus ollama[92770]: time=2025-08-26T16:55:24.575+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Aug 26 16:55:24 zeus ollama[92770]: time=2025-08-26T16:55:24.723+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-08cc5ae1-5153-db23-2efe-e59cf47b61c8 library=cuda variant=v12 compute=8.9 driver=13.0 name="NVIDIA G>
Aug 26 16:55:24 zeus ollama[92770]: time=2025-08-26T16:55:24.723+02:00 level=INFO source=routes.go:1425 msg="entering low vram mode" "total vram"="7.6 GiB" threshold="20.0 GiB"



Aug 26 16:48:28 zeus ollama[57289]: llama_model_load: vocab only - skipping tensors
Aug 26 16:48:28 zeus ollama[57289]: time=2025-08-26T16:48:28.906+02:00 level=INFO source=server.go:383 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-06507c7b42688469c4e7298b0a1e16deff06caf291cf0a5b278c308249c3e439 --port 39449"
Aug 26 16:48:28 zeus ollama[57289]: time=2025-08-26T16:48:28.916+02:00 level=INFO source=runner.go:864 msg="starting go runner"
Aug 26 16:48:28 zeus ollama[57289]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Aug 26 16:48:28 zeus ollama[57289]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Aug 26 16:48:28 zeus ollama[57289]: ggml_cuda_init: found 1 CUDA devices:
Aug 26 16:48:28 zeus ollama[57289]:   Device 0: NVIDIA GeForce RTX 4060, compute capability 8.9, VMM: yes, ID: GPU-08cc5ae1-5153-db23-2efe-e59cf47b61c8
Aug 26 16:48:28 zeus ollama[57289]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
Aug 26 16:48:28 zeus ollama[57289]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Aug 26 16:48:28 zeus ollama[57289]: time=2025-08-26T16:48:28.982+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Aug 26 16:48:28 zeus ollama[57289]: time=2025-08-26T16:48:28.983+02:00 level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:39449"
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.038+02:00 level=INFO source=server.go:488 msg="system memory" total="62.7 GiB" free="52.8 GiB" free_swap="0 B"
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.038+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-06507c7b42688469c4e7298b0a1e16deff06caf291cf0a5b278c308249c3e439 library=cuda parallel=1 required="1.7 GiB" gpus=1
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.038+02:00 level=INFO source=server.go:528 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=[29] memory.available="[5.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.7 GiB" memory.required.partial="1.7 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[1.7 GiB]" memory.weights.total="603.9 MiB" memory.weights.repeating="446.5 MiB" memory.weights.nonrepeating="157.4 MiB" memory.graph.full="149.3 MiB" memory.graph.partial="149.3 MiB"
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.039+02:00 level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:4 GPULayers:29[ID:GPU-08cc5ae1-5153-db23-2efe-e59cf47b61c8 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
Aug 26 16:48:29 zeus ollama[57289]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4060) - 5675 MiB free
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.121+02:00 level=INFO source=server.go:1231 msg="waiting for llama runner to start responding"
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.121+02:00 level=INFO source=server.go:1265 msg="waiting for server to become available" status="llm server loading model"
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: loaded meta data with 36 key-value pairs and 310 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-06507c7b42688469c4e7298b0a1e16deff06caf291cf0a5b278c308249c3e439 (version GGUF V3 (latest))
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   0:                       general.architecture str              = qwen3
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   1:                               general.type str              = model
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   2:                               general.name str              = Qwen3 Embedding 0.6b
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   3:                           general.basename str              = qwen3-embedding
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   4:                         general.size_label str              = 0.6B
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   5:                            general.license str              = apache-2.0
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   6:                   general.base_model.count u32              = 1
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   7:                  general.base_model.0.name str              = Qwen3 0.6B Base
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   8:          general.base_model.0.organization str              = Qwen
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv   9:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen3-0.6...
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  10:                               general.tags arr[str,5]       = ["transformers", "sentence-transforme...
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  11:                          qwen3.block_count u32              = 28
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  12:                       qwen3.context_length u32              = 32768
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  13:                     qwen3.embedding_length u32              = 1024
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  14:                  qwen3.feed_forward_length u32              = 3072
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  15:                 qwen3.attention.head_count u32              = 16
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  16:              qwen3.attention.head_count_kv u32              = 8
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  17:                       qwen3.rope.freq_base f32              = 1000000.000000
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  18:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  19:                 qwen3.attention.key_length u32              = 128
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  20:               qwen3.attention.value_length u32              = 128
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  21:                         qwen3.pooling_type u32              = 3
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  22:                       tokenizer.ggml.model str              = gpt2
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  23:                         tokenizer.ggml.pre str              = qwen2
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  24:                      tokenizer.ggml.tokens arr[str,151669]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,151669]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  26:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 151643
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  28:            tokenizer.ggml.padding_token_id u32              = 151643
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  29:                tokenizer.ggml.eot_token_id u32              = 151645
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  31:               tokenizer.ggml.add_eos_token bool             = true
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = false
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  34:               general.quantization_version u32              = 2
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - kv  35:                          general.file_type u32              = 7
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - type  f32:  113 tensors
Aug 26 16:48:29 zeus ollama[57289]: llama_model_loader: - type q8_0:  197 tensors
Aug 26 16:48:29 zeus ollama[57289]: print_info: file format = GGUF V3 (latest)
Aug 26 16:48:29 zeus ollama[57289]: print_info: file type   = Q8_0
Aug 26 16:48:29 zeus ollama[57289]: print_info: file size   = 603.87 MiB (8.50 BPW)
Aug 26 16:48:29 zeus ollama[57289]: load: printing all EOG tokens:
Aug 26 16:48:29 zeus ollama[57289]: load:   - 151643 ('<|endoftext|>')
Aug 26 16:48:29 zeus ollama[57289]: load:   - 151645 ('<|im_end|>')
Aug 26 16:48:29 zeus ollama[57289]: load:   - 151662 ('<|fim_pad|>')
Aug 26 16:48:29 zeus ollama[57289]: load:   - 151663 ('<|repo_name|>')
Aug 26 16:48:29 zeus ollama[57289]: load:   - 151664 ('<|file_sep|>')
Aug 26 16:48:29 zeus ollama[57289]: load: special tokens cache size = 26
Aug 26 16:48:29 zeus ollama[57289]: load: token to piece cache size = 0.9311 MB
Aug 26 16:48:29 zeus ollama[57289]: print_info: arch             = qwen3
Aug 26 16:48:29 zeus ollama[57289]: print_info: vocab_only       = 0
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_ctx_train      = 32768
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_embd           = 1024
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_layer          = 28
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_head           = 16
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_head_kv        = 8
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_rot            = 128
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_swa            = 0
Aug 26 16:48:29 zeus ollama[57289]: print_info: is_swa_any       = 0
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_embd_head_k    = 128
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_embd_head_v    = 128
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_gqa            = 2
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_embd_k_gqa     = 1024
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_embd_v_gqa     = 1024
Aug 26 16:48:29 zeus ollama[57289]: print_info: f_norm_eps       = 0.0e+00
Aug 26 16:48:29 zeus ollama[57289]: print_info: f_norm_rms_eps   = 1.0e-06
Aug 26 16:48:29 zeus ollama[57289]: print_info: f_clamp_kqv      = 0.0e+00
Aug 26 16:48:29 zeus ollama[57289]: print_info: f_max_alibi_bias = 0.0e+00
Aug 26 16:48:29 zeus ollama[57289]: print_info: f_logit_scale    = 0.0e+00
Aug 26 16:48:29 zeus ollama[57289]: print_info: f_attn_scale     = 0.0e+00
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_ff             = 3072
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_expert         = 0
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_expert_used    = 0
Aug 26 16:48:29 zeus ollama[57289]: print_info: causal attn      = 1
Aug 26 16:48:29 zeus ollama[57289]: print_info: pooling type     = 3
Aug 26 16:48:29 zeus ollama[57289]: print_info: rope type        = 2
Aug 26 16:48:29 zeus ollama[57289]: print_info: rope scaling     = linear
Aug 26 16:48:29 zeus ollama[57289]: print_info: freq_base_train  = 1000000.0
Aug 26 16:48:29 zeus ollama[57289]: print_info: freq_scale_train = 1
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_ctx_orig_yarn  = 32768
Aug 26 16:48:29 zeus ollama[57289]: print_info: rope_finetuned   = unknown
Aug 26 16:48:29 zeus ollama[57289]: print_info: model type       = 0.6B
Aug 26 16:48:29 zeus ollama[57289]: print_info: model params     = 595.78 M
Aug 26 16:48:29 zeus ollama[57289]: print_info: general.name     = Qwen3 Embedding 0.6b
Aug 26 16:48:29 zeus ollama[57289]: print_info: vocab type       = BPE
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_vocab          = 151669
Aug 26 16:48:29 zeus ollama[57289]: print_info: n_merges         = 151387
Aug 26 16:48:29 zeus ollama[57289]: print_info: BOS token        = 151643 '<|endoftext|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: EOS token        = 151643 '<|endoftext|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: EOT token        = 151645 '<|im_end|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: PAD token        = 151643 '<|endoftext|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: LF token         = 198 'Ċ'
Aug 26 16:48:29 zeus ollama[57289]: print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: FIM MID token    = 151660 '<|fim_middle|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: FIM PAD token    = 151662 '<|fim_pad|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: FIM REP token    = 151663 '<|repo_name|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: FIM SEP token    = 151664 '<|file_sep|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: EOG token        = 151643 '<|endoftext|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: EOG token        = 151645 '<|im_end|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: EOG token        = 151662 '<|fim_pad|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: EOG token        = 151663 '<|repo_name|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: EOG token        = 151664 '<|file_sep|>'
Aug 26 16:48:29 zeus ollama[57289]: print_info: max token length = 256
Aug 26 16:48:29 zeus ollama[57289]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Aug 26 16:48:29 zeus ollama[57289]: load_tensors: offloading 28 repeating layers to GPU
Aug 26 16:48:29 zeus ollama[57289]: load_tensors: offloading output layer to GPU
Aug 26 16:48:29 zeus ollama[57289]: load_tensors: offloaded 29/29 layers to GPU
Aug 26 16:48:29 zeus ollama[57289]: load_tensors:        CUDA0 model buffer size =   603.87 MiB
Aug 26 16:48:29 zeus ollama[57289]: load_tensors:   CPU_Mapped model buffer size =   157.37 MiB
Aug 26 16:48:29 zeus ollama[57289]: llama_context: constructing llama_context
Aug 26 16:48:29 zeus ollama[57289]: llama_context: n_seq_max     = 1
Aug 26 16:48:29 zeus ollama[57289]: llama_context: n_ctx         = 4096
Aug 26 16:48:29 zeus ollama[57289]: llama_context: n_ctx_per_seq = 4096
Aug 26 16:48:29 zeus ollama[57289]: llama_context: n_batch       = 512
Aug 26 16:48:29 zeus ollama[57289]: llama_context: n_ubatch      = 512
Aug 26 16:48:29 zeus ollama[57289]: llama_context: causal_attn   = 1
Aug 26 16:48:29 zeus ollama[57289]: llama_context: flash_attn    = 0
Aug 26 16:48:29 zeus ollama[57289]: llama_context: kv_unified    = false
Aug 26 16:48:29 zeus ollama[57289]: llama_context: freq_base     = 1000000.0
Aug 26 16:48:29 zeus ollama[57289]: llama_context: freq_scale    = 1
Aug 26 16:48:29 zeus ollama[57289]: llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
Aug 26 16:48:29 zeus ollama[57289]: llama_context:  CUDA_Host  output buffer size =     0.58 MiB
Aug 26 16:48:29 zeus ollama[57289]: llama_kv_cache_unified:      CUDA0 KV buffer size =   448.00 MiB
Aug 26 16:48:29 zeus ollama[57289]: llama_kv_cache_unified: size =  448.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):  224.00 MiB, V (f16):  224.00 MiB
Aug 26 16:48:29 zeus ollama[57289]: llama_context:      CUDA0 compute buffer size =   310.24 MiB
Aug 26 16:48:29 zeus ollama[57289]: llama_context:  CUDA_Host compute buffer size =    14.01 MiB
Aug 26 16:48:29 zeus ollama[57289]: llama_context: graph nodes  = 1099
Aug 26 16:48:29 zeus ollama[57289]: llama_context: graph splits = 2
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.622+02:00 level=INFO source=server.go:1269 msg="llama runner started in 0.72 seconds"
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.622+02:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.622+02:00 level=INFO source=server.go:1231 msg="waiting for llama runner to start responding"
Aug 26 16:48:29 zeus ollama[57289]: time=2025-08-26T16:48:29.623+02:00 level=INFO source=server.go:1269 msg="llama runner started in 0.72 seconds"
Aug 26 16:48:29 zeus ollama[57289]: [GIN] 2025/08/26 - 16:48:29 | 200 |  1.357618062s | 192.168.201.136 | POST     "/api/embed"
Aug 26 16:48:31 zeus ollama[57289]: [GIN] 2025/08/26 - 16:48:31 | 200 |    78.18403ms | 192.168.201.136 | POST     "/api/embed"
Aug 26 16:48:35 zeus ollama[57289]: [GIN] 2025/08/26 - 16:48:35 | 200 |      16.799µs |       127.0.0.1 | HEAD     "/"
Aug 26 16:48:35 zeus ollama[57289]: [GIN] 2025/08/26 - 16:48:35 | 200 |      19.518µs |       127.0.0.1 | GET      "/api/ps"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.7

GiteaMirror added the bug label 2026-04-12 20:16:21 -05:00