[GH-ISSUE #3727] Unable to load default model context length num_ctx for embedding #28052

Closed
opened 2026-04-22 05:47:41 -05:00 by GiteaMirror · 7 comments

Originally created by @Kanishk-Kumar on GitHub (Apr 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3727

What is the issue?

This is the code I tried:

from ollama import Client
def generate_embedding(prompt: str):
    r"""
    Add this to utils later.
    """
    client = Client(host="http://localhost:11434")
    response = client.embeddings(
        model="nomic-embed-text:latest",
        prompt=prompt,
        options={"temperature": 0, "num_ctx": 8192}
    )
    return response["embedding"]

generate_embedding("Why is the sky blue?")

Error I'm getting:
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=WARN source=server.go:51 msg="requested context length is greater than model max context length" requested=8192 model=2048

But the model card clearly states that I should be able to use the full 8192 tokens for embeddings:
https://ollama.com/library/nomic-embed-text
https://huggingface.co/nomic-ai/nomic-embed-text-v1

Full log:

Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.673+05:30 level=INFO source=images.go:817 msg="total blobs: 17"
Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.673+05:30 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.673+05:30 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
Apr 18 11:20:50 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:50.674+05:30 level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama2506861456/runners
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.016+05:30 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.016+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.016+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.017+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2506861456/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.4.99 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.021+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.021+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:20:52 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:20:52.061+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=WARN source=server.go:51 msg="requested context length is greater than model max context length" requested=8192 model=2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.071+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.072+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2506861456/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.4.99 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.072+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.072+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.114+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.125+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.125+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.126+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2506861456/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.4.99 /usr/lib/x86_64-linux-gnu/libcudart.so.11.5.117]"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.127+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.127+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.148+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.9"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=server.go:127 msg="offload to gpu" reallayers=13 layers=13 required="691.1 MiB" used="691.1 MiB" available="11364.1 MiB" kv="6.0 MiB" fulloffload="12.0 MiB" partialoffload="12.0 MiB"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama2506861456/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --port 45855"
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: time=2024-04-18T11:21:22.159+05:30 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"132941561552896","timestamp":1713419482,"total_threads":32}
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest))
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   0:                       general.architecture str              = nomic-bert
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   1:                               general.name str              = nomic-embed-text-v1.5
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   2:                     nomic-bert.block_count u32              = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   3:                  nomic-bert.context_length u32              = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   4:                nomic-bert.embedding_length u32              = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   5:             nomic-bert.feed_forward_length u32              = 3072
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   6:            nomic-bert.attention.head_count u32              = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   7:    nomic-bert.attention.layer_norm_epsilon f32              = 0.000000
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   8:                          general.file_type u32              = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv   9:                nomic-bert.attention.causal bool             = false
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  10:                    nomic-bert.pooling_type u32              = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  11:                  nomic-bert.rope.freq_base f32              = 1000.000000
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  12:            tokenizer.ggml.token_type_count u32              = 2
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  13:                tokenizer.ggml.bos_token_id u32              = 101
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  14:                tokenizer.ggml.eos_token_id u32              = 102
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = bert
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,30522]   = [-1000.000000, -1000.000000, -1000.00...
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 100
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  20:          tokenizer.ggml.seperator_token_id u32              = 102
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  22:                tokenizer.ggml.cls_token_id u32              = 101
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - kv  23:               tokenizer.ggml.mask_token_id u32              = 103
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - type  f32:   51 tensors
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_model_loader: - type  f16:   61 tensors
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: format           = GGUF V3 (latest)
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: arch             = nomic-bert
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: vocab type       = WPM
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_vocab          = 30522
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_merges         = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_ctx_train      = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd           = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_head           = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_head_kv        = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_layer          = 12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_rot            = 64
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_head_k    = 64
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_head_v    = 64
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_gqa            = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_k_gqa     = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_embd_v_gqa     = 768
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_norm_eps       = 1.0e-12
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_ff             = 3072
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_expert         = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_expert_used    = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: causal attn      = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: pooling type     = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: rope type        = 2
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: rope scaling     = linear
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: freq_base_train  = 1000.0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: freq_scale_train = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: n_yarn_orig_ctx  = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: rope_finetuned   = unknown
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_d_conv       = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_d_inner      = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_d_state      = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: ssm_dt_rank      = 0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model type       = 137M
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model ftype      = F16
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model params     = 136.73 M
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: model size       = 260.86 MiB (16.00 BPW)
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: general.name     = nomic-embed-text-v1.5
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: BOS token        = 101 '[CLS]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: EOS token        = 102 '[SEP]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: UNK token        = 100 '[UNK]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: SEP token        = 102 '[SEP]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: PAD token        = 0 '[PAD]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: CLS token        = 101 '[CLS]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: MASK token       = 103 '[MASK]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_print_meta: LF token         = 0 '[PAD]'
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: ggml_cuda_init: found 1 CUDA devices:
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]:   Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: ggml ctx size =    0.09 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: offloading 12 repeating layers to GPU
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: offloading non-repeating layers to GPU
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors: offloaded 13/13 layers to GPU
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors:        CPU buffer size =    44.72 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llm_load_tensors:      CUDA0 buffer size =   216.15 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: .......................................................
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: n_ctx      = 2048
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: n_batch    = 512
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: n_ubatch   = 512
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: freq_base  = 1000.0
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: freq_scale = 1
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_kv_cache_init:      CUDA0 KV buffer size =    72.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: KV self size  =   72.00 MiB, K (f16):   36.00 MiB, V (f16):   36.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model:        CPU  output buffer size =     0.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model:      CUDA0 compute buffer size =    23.00 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model:  CUDA_Host compute buffer size =     3.50 MiB
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: graph nodes  = 453
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: llama_new_context_with_model: graph splits = 2
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"31","port":"45855","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":48102,"status":200,"tid":"132940209668096","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":48112,"status":200,"tid":"132940187918336","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":48112,"status":200,"tid":"132940187918336","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":3,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":3,"tid":"132941561552896","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":6,"n_ctx":2048,"n_past":6,"n_system_tokens":0,"slot_id":0,"task_id":3,"tid":"132941561552896","timestamp":1713419482,"truncated":false}
Apr 18 11:21:22 xyz-MS-7D91 ollama[39090]: {"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":48112,"status":200,"tid":"132940187918336","timestamp":1713419482}
Apr 18 11:21:22 xyz-MS-7D91 ollama[38865]: [GIN] 2024/04/18 - 11:21:22 | 200 |   705.26423ms |       127.0.0.1 | POST     "/api/embeddings"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.32

GiteaMirror added the bug label 2026-04-22 05:47:41 -05:00

@remy415 commented on GitHub (Apr 18, 2024):

The error msg="requested context length is greater than model max context length" requested=8192 model=2048 only occurs when the ctx you request (8192) is greater than the context length set in the modelfile (2048 in this case).

Please double-check the model information with ollama show nomic-embed-text:latest --modelfile:

tegra@ok3d-1:~/ok3d/ollama-container/dev/ollama$ ollama show nomic-embed-text:v1.5 --modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM nomic-embed-text:v1.5

FROM /usr/share/ollama/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6
TEMPLATE """{{ .Prompt }}"""
PARAMETER num_ctx 8192

I ran a test on my system with 0.1.32 using the default settings for nomic-embed-text:v1.5 and the 8192 ctx worked just fine:

tegra@ok3d-1:~/ok3d/ollama-container/dev/ollama$ curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text:v1.5",
  "prompt": "Here is an article about llamas...", 
  "num_ctx": 8192
}'
{"embedding":[-0.2280130684375763,1.1470876932144165,-2.3157222270965576,-0.24247822165489197,-0.520927906036377,0.5437105298042297,-0.19731146097183228,...]}

@Kanishk-Kumar commented on GitHub (Apr 19, 2024):

Hi @remy415, I had checked that previously; yes, it shows the same when I run ollama show nomic-embed-text:latest --modelfile:

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM nomic-embed-text:latest

FROM /usr/share/ollama/.ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6
TEMPLATE """{{ .Prompt }}"""
PARAMETER num_ctx 8192

But can you please check the Ollama log using sudo journalctl -xeu ollama.service -f, as that is where the issue is visible (as shown above)?
Also, I've reverted from Ollama v0.1.32 to v0.1.30, as creating embeddings became 10x slower and inaccurate (I was using them for clustering); inference worked as usual.


@jimscard commented on GitHub (Apr 22, 2024):

It appears to me that ollama is mapping the model’s max_trained_positions config value to n_ctx_train, which blocks the num_ctx override, when it should be using max_positions instead.

Either that, or whoever did the quantization of the model weights did that mapping.
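
For anyone who wants to verify which values the upstream config actually carries, both keys can be read straight from the Hugging Face repo. A quick sketch using only the standard library; the key names are assumptions based on the comment above (max_trained_positions) and the published nomic-bert config (which uses n_positions for the scaled maximum), so treat them as such:

import json
from urllib.request import urlopen

# Sketch: compare the training-time position limit with the scaled maximum
# in the upstream config. Key names are assumptions (see lead-in above).
url = "https://huggingface.co/nomic-ai/nomic-embed-text-v1.5/raw/main/config.json"
cfg = json.load(urlopen(url))
print("max_trained_positions:", cfg.get("max_trained_positions"))  # expected 2048
print("n_positions:", cfg.get("n_positions"))                      # expected 8192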


@jimscard commented on GitHub (Apr 22, 2024):

@remy415 @Kanishk-Kumar @jmorganca Looks like it's something to do with llama.cpp - check out the comment in the README.md for the GGUF versions of the model in the nomic repository on Hugging Face: https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF/blob/main/README.md


@Kanishk-Kumar commented on GitHub (Apr 23, 2024):

Okay, so it's a GGUF model. As pointed out by @jimscard, the description here explains the issue:
https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF

Any idea when Ollama/llama.cpp will support Dynamic NTK-Aware RoPE scaling?

And is there a way to run it with a different context extension method in Ollama?
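
For reference, the usual NTK-aware trick stretches the RoPE frequency base by s^(d/(d-2)) for scale factor s and head dimension d; the "dynamic" variant recomputes s from the live sequence length instead of fixing it up front. A small illustrative sketch of that formula (not Ollama's or llama.cpp's actual implementation):

def ntk_scaled_rope_base(base: float, head_dim: int, seq_len: int, train_ctx: int) -> float:
    # Dynamic NTK-aware RoPE: recompute the scale from the current sequence
    # length, so inputs within the trained context keep the original base.
    scale = max(1.0, seq_len / train_ctx)
    return base * scale ** (head_dim / (head_dim - 2))

# Values from the log above: freq_base = 1000.0, n_embd_head_k = 64,
# n_ctx_train = 2048; an 8192-token request would get a stretched base.
print(ntk_scaled_rope_base(1000.0, 64, 8192, 2048))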


@mxyng commented on GitHub (May 16, 2024):

This is fixed by https://github.com/ollama/ollama/pull/3988 which was released in 0.1.35


@jjmlovesgit commented on GitHub (May 17, 2024):

Tested and validated: fixed.

Old Log:

ollama-2  | llama_new_context_with_model: n_ctx      = 2048
ollama-2  | llama_new_context_with_model: freq_base  = 1000.0
ollama-2  | llama_new_context_with_model: freq_scale = 1
ollama-2  | llama_kv_cache_init:  CUDA_Host KV buffer size =    66.00 MiB
ollama-2  | llama_kv_cache_init:      CUDA0 KV buffer size =     6.00 MiB
ollama-2  | llama_new_context_with_model: KV self size  =   72.00 MiB, K (f16):   36.00 MiB, V (f16):   36.00 MiB
ollama-2  | llama_new_context_with_model:  CUDA_Host input buffer size   =     6.52 MiB
ollama-2  | llama_new_context_with_model:      CUDA0 compute buffer size =    23.00 MiB
ollama-2  | llama_new_context_with_model:  CUDA_Host compute buffer size =    22.00 MiB

New Log:

2024-05-17 12:02:29 llama_new_context_with_model: n_ctx = 8192
2024-05-17 12:02:29 llama_new_context_with_model: n_batch = 512
2024-05-17 12:02:29 llama_new_context_with_model: n_ubatch = 512
2024-05-17 12:02:29 llama_new_context_with_model: freq_base = 1000.0
2024-05-17 12:02:29 llama_new_context_with_model: freq_scale = 1
2024-05-17 12:02:29 llama_kv_cache_init: CUDA0 KV buffer size = 288.00 MiB
2024-05-17 12:02:29 llama_new_context_with_model: KV self size = 288.00 MiB, K (f16): 144.00 MiB, V (f16): 144.00 MiB
2024-05-17 12:02:29 llama_new_context_with_model: CPU output buffer size = 0.00 MiB
2024-05-17 12:02:29 llama_new_context_with_model: CUDA0 compute buffer size = 23.00 MiB
2024-05-17 12:02:29 llama_new_context_with_model: CUDA_Host compute buffer size = 3.50 MiB
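
For anyone re-checking: a minimal end-to-end re-run of the original snippet against 0.1.35 or later (assumes a local server on the default port and the model tag from the original report); on a fixed build the runner log should show n_ctx = 8192 instead of the clamped 2048:

from ollama import Client

# Same call as the original report; only the printout differs.
client = Client(host="http://localhost:11434")
response = client.embeddings(
    model="nomic-embed-text:latest",
    prompt="Why is the sky blue?",
    options={"num_ctx": 8192},
)
print(len(response["embedding"]))  # 768 dimensions per the model metadata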
