[GH-ISSUE #3281] PrivateGPT Embedding with Ollama: "concurrent llm servers not yet supported" #48534

Closed
opened 2026-04-28 08:47:28 -05:00 by GiteaMirror · 3 comments

Originally created by @Malozorus on GitHub (Mar 21, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3281

What is the issue?

I'm running on WSL; Ollama is installed and properly running the Mistral 7B model. I'm also using PrivateGPT in Ollama mode. The problem comes when I try to use the embedding model. It seems Ollama can't handle the LLM and the embedding model at the same time, but it looks like I'm the only one having this issue, so is there a configuration setting I've missed?

settings-ollama.yaml for PrivateGPT:

```yaml
server:
  env_name: ${APP_ENV:ollama}

llm:
  mode: ollama
  max_new_tokens: 512
  context_window: 3900
  temperature: 0.1 # The temperature of the model. Increasing the temperature will make the model answer more creatively. A value of 0.1 would be more factual. (Default: 0.1)

embedding:
  mode: ollama

ollama:
  llm_model: mistral
  embedding_model: nomic-embed-text
  api_base: http://localhost:11434
  tfs_z: 1.0 # Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting.
  top_k: 40 # Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)
  top_p: 0.9 # Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)
  repeat_last_n: 64 # Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)
  repeat_penalty: 1.2 # Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)

vectorstore:
  database: qdrant

qdrant:
  path: local_data/private_gpt/qdrant
```

Logs of Ollama when trying to query already embedded files:

```
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = nomic-bert
llama_model_loader: - kv   1: general.name str = nomic-embed-text-v1.5
llama_model_loader: - kv   2: nomic-bert.block_count u32 = 12
llama_model_loader: - kv   3: nomic-bert.context_length u32 = 2048
llama_model_loader: - kv   4: nomic-bert.embedding_length u32 = 768
llama_model_loader: - kv   5: nomic-bert.feed_forward_length u32 = 3072
llama_model_loader: - kv   6: nomic-bert.attention.head_count u32 = 12
llama_model_loader: - kv   7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv   8: general.file_type u32 = 1
llama_model_loader: - kv   9: nomic-bert.attention.causal bool = false
llama_model_loader: - kv  10: nomic-bert.pooling_type u32 = 1
llama_model_loader: - kv  11: nomic-bert.rope.freq_base f32 = 1000.000000
llama_model_loader: - kv  12: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv  13: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv  14: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv  15: tokenizer.ggml.model str = bert
llama_model_loader: - kv  16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv  20: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv  21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv  22: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv  23: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - type f32: 51 tensors
llama_model_loader: - type f16: 61 tensors
error loading model: unknown model architecture: 'nomic-bert'
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '/root/.ollama/models/blobs/sha256:970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6'
{"timestamp":1711013580,"level":"ERROR","function":"load_model","line":581,"message":"unable to load model","model":"/root/.ollama/models/blobs/sha256:970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6"}
2024/03/21 10:33:00 llm.go:129: Failed to load dynamic library cuda - falling back to CPU mode
error loading model /root/.ollama/models/blobs/sha256:970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6
2024/03/21 10:33:00 ext_server_common.go:85: concurrent llm servers not yet supported, waiting for prior server to complete
```
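
For context, the same contention can be reproduced outside PrivateGPT by hitting the LLM and embedding endpoints at the same time. A minimal sketch, assuming the default Ollama API at http://localhost:11434, that both models have been pulled, and that the `requests` package is installed:

```python
import threading
import requests

BASE = "http://localhost:11434"  # assumed default Ollama endpoint

def generate():
    # A long-running completion keeps the mistral llama.cpp server busy.
    r = requests.post(f"{BASE}/api/generate",
                      json={"model": "mistral",
                            "prompt": "Write a short story.",
                            "stream": False},
                      timeout=300)
    print("generate:", r.status_code)

def embed():
    # Embedding request issued while the LLM server is still running;
    # on 0.1.20 this waits for the prior server to complete.
    r = requests.post(f"{BASE}/api/embeddings",
                      json={"model": "nomic-embed-text",
                            "prompt": "hello world"},
                      timeout=300)
    print("embeddings:", r.status_code)

t = threading.Thread(target=generate)
t.start()
embed()
t.join()
```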

What did you expect to see?

Expecting to see Ollama load the nomic embedding model.

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Windows

Architecture

x86

Platform

WSL2

Ollama version

ollama version is 0.1.20

GPU

Nvidia

GPU info

NVIDIA GeForce RTX 3070 Laptop GPU, compute capability 8.6

CPU

AMD

Other software

No response

GiteaMirror added the bug label 2026-04-28 08:47:28 -05:00

@dhiltgen commented on GitHub (Mar 21, 2024):

Multiple concurrent model support is tracked in #2109 and is something we're working towards supporting.

@alfi4000 commented on GitHub (Mar 23, 2024):

> Multiple concurrent model support is tracked in #2109 and is something we're working towards supporting.

In some way, multi-model already works; I can only speak for Linux users:

https://github.com/ollama/ollama/issues/2109#issuecomment-2016416883
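
The linked comment amounts to running a second Ollama server instance so the LLM and the embedding model don't contend for the same one. A rough sketch of that idea (the port 11435 and the `requests` usage are illustrative assumptions, not what the comment literally shows; `OLLAMA_HOST` controls the address `ollama serve` binds to):

```python
import os
import subprocess
import time
import requests

# Start a second Ollama instance bound to a different port.
env = dict(os.environ, OLLAMA_HOST="127.0.0.1:11435")
server = subprocess.Popen(["ollama", "serve"], env=env)
time.sleep(5)  # crude wait for the server to come up

# Route embedding traffic to the second instance while the default
# one (:11434) keeps serving the LLM.
r = requests.post("http://127.0.0.1:11435/api/embeddings",
                  json={"model": "nomic-embed-text", "prompt": "hello"},
                  timeout=120)
print(r.json().get("embedding", [])[:4])

server.terminate()
```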

@kungfu-eric commented on GitHub (Apr 1, 2024):

Edit: works with an Ollama update. It would be nice to throw a more descriptive error (e.g. "update your Ollama" if the llama.cpp binding isn't updated for the newer models).
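
A quick way to confirm which server version is actually answering before retrying the embedding model, assuming the `/api/version` endpoint is available in the release you're running:

```python
import requests

# Hedged sketch: this thread suggests newer Ollama releases load
# nomic-bert, so check the running server's version first.
resp = requests.get("http://localhost:11434/api/version", timeout=10)
print("ollama server version:", resp.json()["version"])
```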

Reference: github-starred/ollama#48534