[GH-ISSUE #4102] Ollama running in docker with concurrent requests doesn't work #64587

Closed
opened 2026-05-03 18:16:51 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @BBjie on GitHub (May 2, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4102

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I have tried running Ollama in Docker and tested its concurrent request handling. I added OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS as environment variables. The values are passed through successfully, but concurrent requests still don't work.
Can anyone kindly help me out?

services:
  ollama:
    image: ollama/ollama:0.1.33-rc6
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    environment:
      OLLAMA_NUM_PARALLEL: "4"
      OLLAMA_MAX_LOADED_MODELS: "4"
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    command: serve
volumes:
  ollama:
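One way to exercise the parallel slots is to fire several generate requests at once and see whether they overlap or queue. A minimal sketch, assuming the container above is up on localhost:11434 and the model (llama2 here) is already pulled; `make_payload` is just a hypothetical helper for building the request body:

```shell
# Hypothetical helper: build a non-streaming /api/generate request body.
make_payload() {
  printf '{"model": "%s", "prompt": "%s", "stream": false}' "$1" "$2"
}

# Against a live server (not run here), fire 4 requests concurrently:
#   for i in 1 2 3 4; do
#     curl -s http://localhost:11434/api/generate -d "$(make_payload llama2 "Say hello")" &
#   done
#   wait
make_payload llama2 "Say hello"
```

If OLLAMA_NUM_PARALLEL is in effect, the four requests should be processed in overlapping slots rather than strictly one after another.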

OS

Docker

GPU

Other

CPU

Other

Ollama version

No response

GiteaMirror added the docker, bug labels 2026-05-03 18:16:51 -05:00
Author
Owner

@dhiltgen commented on GitHub (May 2, 2024):

Can you share the server log from the container? It may be helpful to set OLLAMA_DEBUG: "1" as well to get extra logging.

Author
Owner

@BBjie commented on GitHub (May 3, 2024):

> Can you share the server log from the container? It may be helpful to set OLLAMA_DEBUG: "1" as well to get extra logging.

Yes sir. Below is the log with these env settings, after running the llama2 model:

environment:
  OLLAMA_NUM_PARALLEL: 4
  OLLAMA_MAX_LOADED_MODELS: 4
  OLLAMA_DEBUG: 1
ollama  | time=2024-05-03T03:01:36.660Z level=INFO source=images.go:828 msg="total blobs: 10"
ollama  | time=2024-05-03T03:01:36.661Z level=INFO source=images.go:835 msg="total unused blobs removed: 0"
ollama  | time=2024-05-03T03:01:36.661Z level=INFO source=routes.go:1071 msg="Listening on [::]:11434 (version 0.1.33)"
ollama  | time=2024-05-03T03:01:36.663Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2318660654/runners
ollama  | time=2024-05-03T03:01:36.664Z level=DEBUG source=payload.go:180 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/deps.txt.gz
ollama  | time=2024-05-03T03:01:36.665Z level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/ollama_llama_server.gz
ollama  | time=2024-05-03T03:01:41.415Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu
ollama  | time=2024-05-03T03:01:41.415Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu_avx
ollama  | time=2024-05-03T03:01:41.415Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu_avx2
ollama  | time=2024-05-03T03:01:41.415Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cuda_v11
ollama  | time=2024-05-03T03:01:41.415Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/rocm_v60002
ollama  | time=2024-05-03T03:01:41.415Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
ollama  | time=2024-05-03T03:01:41.415Z level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"    
ollama  | time=2024-05-03T03:01:41.415Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
ollama  | time=2024-05-03T03:01:41.416Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
ollama  | time=2024-05-03T03:01:41.416Z level=DEBUG source=gpu.go:203 msg="Searching for GPU library" name=libcudart.so*
ollama  | time=2024-05-03T03:01:41.416Z level=DEBUG source=gpu.go:221 msg="gpu library search" globs="[/tmp/ollama2318660654/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]"
ollama  | time=2024-05-03T03:01:41.420Z level=DEBUG source=gpu.go:249 msg="discovered GPU libraries" paths=[/tmp/ollama2318660654/runners/cuda_v11/libcudart.so.11.0]
ollama  | CUDA driver version: 12-3
ollama  | time=2024-05-03T03:01:41.603Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama2318660654/runners/cuda_v11/libcudart.so.11.0 count=1
ollama  | time=2024-05-03T03:01:41.603Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama  | [GPU-797bde1a-f350-3a7e-cf25-09fa106f1039] CUDA totalMem 8589410304
ollama  | [GPU-797bde1a-f350-3a7e-cf25-09fa106f1039] CUDA freeMem 7459569664
ollama  | [GPU-797bde1a-f350-3a7e-cf25-09fa106f1039] Compute Capability 8.6
ollama  | time=2024-05-03T03:01:42.002Z level=DEBUG source=amd_linux.go:297 msg="amdgpu driver not detected /sys/module/amdgpu"
ollama  | releasing cudart library
ollama  | [GIN] 2024/05/03 - 03:02:18 | 200 |     517.322µs |       127.0.0.1 | HEAD     "/"
ollama  | [GIN] 2024/05/03 - 03:02:18 | 200 |     1.97122ms |       127.0.0.1 | POST     "/api/show"
ollama  | time=2024-05-03T03:02:18.540Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
ollama  | time=2024-05-03T03:02:18.540Z level=DEBUG source=gpu.go:203 msg="Searching for GPU library" name=libcudart.so*
ollama  | time=2024-05-03T03:02:18.540Z level=DEBUG source=gpu.go:221 msg="gpu library search" globs="[/tmp/ollama2318660654/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /usr/local/nvidia/lib/libcudart.so** /usr/local/nvidia/lib64/libcudart.so**]"
ollama  | time=2024-05-03T03:02:18.543Z level=DEBUG source=gpu.go:249 msg="discovered GPU libraries" paths=[/tmp/ollama2318660654/runners/cuda_v11/libcudart.so.11.0]
ollama  | CUDA driver version: 12-3
ollama  | time=2024-05-03T03:02:18.544Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama2318660654/runners/cuda_v11/libcudart.so.11.0 count=1
ollama  | time=2024-05-03T03:02:18.544Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama  | [GPU-797bde1a-f350-3a7e-cf25-09fa106f1039] CUDA totalMem 8589410304
ollama  | [GPU-797bde1a-f350-3a7e-cf25-09fa106f1039] CUDA freeMem 7459569664
ollama  | [GPU-797bde1a-f350-3a7e-cf25-09fa106f1039] Compute Capability 8.6
ollama  | time=2024-05-03T03:02:18.679Z level=DEBUG source=amd_linux.go:297 msg="amdgpu driver not detected /sys/module/amdgpu"
ollama  | releasing cudart library
ollama  | time=2024-05-03T03:02:18.711Z level=DEBUG source=gguf.go:57 msg="model = &llm.gguf{containerGGUF:(*llm.containerGGUF)(0xc000113840), kv:llm.KV{}, tensors:[]*llm.Tensor(nil), parameters:0x0}"
ollama  | time=2024-05-03T03:02:18.863Z level=DEBUG source=sched.go:162 msg="loading first model" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
ollama  | time=2024-05-03T03:02:18.863Z level=DEBUG source=memory.go:64 msg=evaluating library=cuda gpu_count=1 available="7114.0 MiB"   
ollama  | time=2024-05-03T03:02:18.864Z level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="7114.0 MiB" memory.required.full="5222.6 MiB" memory.required.partial="5222.6 MiB" memory.required.kv="1024.0 MiB" memory.weights.total="3577.6 MiB" memory.weights.repeating="3475.0 MiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
ollama  | time=2024-05-03T03:02:18.864Z level=DEBUG source=sched.go:508 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 gpu=GPU-797bde1a-f350-3a7e-cf25-09fa106f1039 available=7459569664 required="5222.6 MiB"
ollama  | time=2024-05-03T03:02:18.864Z level=DEBUG source=memory.go:64 msg=evaluating library=cuda gpu_count=1 available="7114.0 MiB"   
ollama  | time=2024-05-03T03:02:18.864Z level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="7114.0 MiB" memory.required.full="5222.6 MiB" memory.required.partial="5222.6 MiB" memory.required.kv="1024.0 MiB" memory.weights.total="3577.6 MiB" memory.weights.repeating="3475.0 MiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="193.0 MiB"
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu_avx
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu_avx2
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cuda_v11
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/rocm_v60002
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu_avx
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu_avx2
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cuda_v11
ollama  | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/rocm_v60002
ollama  | time=2024-05-03T03:02:18.865Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama  | time=2024-05-03T03:02:18.867Z level=INFO source=server.go:289 msg="starting llama server" cmd="/tmp/ollama2318660654/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --embedding --log-format json --n-gpu-layers 33 --verbose --parallel 4 --port 37117"
ollama  | time=2024-05-03T03:02:18.867Z level=DEBUG source=server.go:291 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=9a2cbc148bdd OLLAMA_MAX_LOADED_MODELS=4 OLLAMA_DEBUG=1 OLLAMA_NUM_PARALLEL=4 OLLAMA_HOST=0.0.0.0 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_DRIVER_CAPABILITIES=compute,utility NVIDIA_VISIBLE_DEVICES=all HOME=/root LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/tmp/ollama2318660654/runners/cuda_v11 CUDA_VISIBLE_DEVICES=GPU-797bde1a-f350-3a7e-cf25-09fa106f1039]"
ollama  | time=2024-05-03T03:02:18.868Z level=INFO source=sched.go:340 msg="loaded runners" count=1
ollama  | time=2024-05-03T03:02:18.868Z level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
ollama  | {"function":"server_params_parse","level":"WARN","line":2497,"msg":"server.cpp is not built with verbose logging.","tid":"139698153259008","timestamp":1714705338}
ollama  | {"build":1,"commit":"952d03d","function":"main","level":"INFO","line":2822,"msg":"build info","tid":"139698153259008","timestamp":1714705338}
ollama  | {"function":"main","level":"INFO","line":2825,"msg":"system info","n_threads":10,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"139698153259008","timestamp":1714705338,"total_threads":20}
ollama  | llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
ollama  | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama  | llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama  | llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
ollama  | llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
ollama  | llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
ollama  | llama_model_loader: - kv   4:                          llama.block_count u32              = 32
ollama  | llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
ollama  | llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
ollama  | llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
ollama  | llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
ollama  | llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama  | llama_model_loader: - kv  10:                          general.file_type u32              = 2
ollama  | llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
ollama  | llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
ollama  | llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
ollama  | llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
ollama  | llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
ollama  | llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
ollama  | llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
ollama  | llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
ollama  | llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
ollama  | llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
ollama  | llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
ollama  | llama_model_loader: - kv  22:               general.quantization_version u32              = 2
ollama  | llama_model_loader: - type  f32:   65 tensors
ollama  | llama_model_loader: - type q4_0:  225 tensors
ollama  | llama_model_loader: - type q6_K:    1 tensors
ollama  | llm_load_vocab: special tokens definition check successful ( 259/32000 ).
ollama  | llm_load_print_meta: format           = GGUF V3 (latest)
ollama  | llm_load_print_meta: arch             = llama
ollama  | llm_load_print_meta: vocab type       = SPM
ollama  | llm_load_print_meta: n_vocab          = 32000
ollama  | llm_load_print_meta: n_merges         = 0
ollama  | llm_load_print_meta: n_ctx_train      = 4096
ollama  | llm_load_print_meta: n_embd           = 4096
ollama  | llm_load_print_meta: n_head           = 32
ollama  | llm_load_print_meta: n_head_kv        = 32
ollama  | llm_load_print_meta: n_layer          = 32
ollama  | llm_load_print_meta: n_rot            = 128
ollama  | llm_load_print_meta: n_embd_head_k    = 128
ollama  | llm_load_print_meta: n_embd_head_v    = 128
ollama  | llm_load_print_meta: n_gqa            = 1
ollama  | llm_load_print_meta: n_embd_k_gqa     = 4096
ollama  | llm_load_print_meta: n_embd_v_gqa     = 4096
ollama  | llm_load_print_meta: f_norm_eps       = 0.0e+00
ollama  | llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
ollama  | llm_load_print_meta: f_clamp_kqv      = 0.0e+00
ollama  | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama  | llm_load_print_meta: f_logit_scale    = 0.0e+00
ollama  | llm_load_print_meta: n_ff             = 11008
ollama  | llm_load_print_meta: n_expert         = 0
ollama  | llm_load_print_meta: n_expert_used    = 0
ollama  | llm_load_print_meta: causal attn      = 1
ollama  | llm_load_print_meta: pooling type     = 0
ollama  | llm_load_print_meta: rope type        = 0
ollama  | llm_load_print_meta: rope scaling     = linear
ollama  | llm_load_print_meta: freq_base_train  = 10000.0
ollama  | llm_load_print_meta: freq_scale_train = 1
ollama  | llm_load_print_meta: n_yarn_orig_ctx  = 4096
ollama  | llm_load_print_meta: rope_finetuned   = unknown
ollama  | llm_load_print_meta: ssm_d_conv       = 0
ollama  | llm_load_print_meta: ssm_d_inner      = 0
ollama  | llm_load_print_meta: ssm_d_state      = 0
ollama  | llm_load_print_meta: ssm_dt_rank      = 0
ollama  | llm_load_print_meta: model type       = 7B
ollama  | llm_load_print_meta: model ftype      = Q4_0
ollama  | llm_load_print_meta: model params     = 6.74 B
ollama  | llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
ollama  | llm_load_print_meta: general.name     = LLaMA v2
ollama  | llm_load_print_meta: BOS token        = 1 '<s>'
ollama  | llm_load_print_meta: EOS token        = 2 '</s>'
ollama  | llm_load_print_meta: UNK token        = 0 '<unk>'
ollama  | llm_load_print_meta: LF token         = 13 '<0x0A>'
ollama  | ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ollama  | ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ollama  | ggml_cuda_init: found 1 CUDA devices:
ollama  |   Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
ollama  | llm_load_tensors: ggml ctx size =    0.30 MiB
ollama  | time=2024-05-03T03:02:19.119Z level=DEBUG source=server.go:466 msg="server not yet available" error="server not responding"    
ollama  | llm_load_tensors: offloading 32 repeating layers to GPU
ollama  | llm_load_tensors: offloading non-repeating layers to GPU
ollama  | llm_load_tensors: offloaded 33/33 layers to GPU
ollama  | llm_load_tensors:        CPU buffer size =    70.31 MiB
ollama  | llm_load_tensors:      CUDA0 buffer size =  3577.56 MiB
ollama  | ............................................................................................time=2024-05-03T03:02:33.166Z level=DEBUG source=server.go:466 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:37117/health\": dial tcp 127.0.0.1:37117: i/o timeout"
ollama  | .time=2024-05-03T03:02:33.367Z level=DEBUG source=server.go:466 msg="server not yet available" error="server not responding"   
ollama  | .....
ollama  | llama_new_context_with_model: n_ctx      = 2048
ollama  | llama_new_context_with_model: n_batch    = 512
ollama  | llama_new_context_with_model: n_ubatch   = 512
ollama  | llama_new_context_with_model: freq_base  = 10000.0
ollama  | llama_new_context_with_model: freq_scale = 1
ollama  | llama_kv_cache_init:      CUDA0 KV buffer size =  1024.00 MiB
ollama  | llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
ollama  | llama_new_context_with_model:  CUDA_Host  output buffer size =     0.55 MiB
ollama  | llama_new_context_with_model:      CUDA0 compute buffer size =   164.00 MiB
ollama  | llama_new_context_with_model:  CUDA_Host compute buffer size =    12.01 MiB
ollama  | llama_new_context_with_model: graph nodes  = 1030
ollama  | llama_new_context_with_model: graph splits = 2
ollama  | {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":4,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":0,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":1,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":2,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":3,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"main","level":"INFO","line":3067,"msg":"model loaded","tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"19","port":"37117","tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":0,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":1,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":3,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":2,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49552,"status":200,"tid":"139697310412800","timestamp":1714705355}
ollama  | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49558,"status":200,"tid":"139696954535936","timestamp":1714705355}
ollama  | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49556,"status":200,"tid":"139696962928640","timestamp":1714705355}
ollama  | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":4,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49540,"status":200,"tid":"139696747270144","timestamp":1714705355}
ollama  | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49568,"status":200,"tid":"139696946143232","timestamp":1714705355}
ollama  | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":5,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49574,"status":200,"tid":"139696937750528","timestamp":1714705355}
ollama  | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":6,"tid":"139698153259008","timestamp":1714705355}
ollama  | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":51238,"status":200,"tid":"139696920965120","timestamp":1714705355}
ollama  | time=2024-05-03T03:02:35.575Z level=DEBUG source=server.go:477 msg="llama runner started in 16.706584 seconds"
ollama  | time=2024-05-03T03:02:35.575Z level=DEBUG source=sched.go:353 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
ollama  | [GIN] 2024/05/03 - 03:02:35 | 200 | 17.035618065s |       127.0.0.1 | POST     "/api/generate"
ollama  | time=2024-05-03T03:02:35.575Z level=DEBUG source=sched.go:357 msg="context for request finished"
ollama  | time=2024-05-03T03:02:35.576Z level=DEBUG source=sched.go:246 msg="runner with non-zero duration has gone idle, adding timer" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 duration=5m0s
ollama  | time=2024-05-03T03:02:35.576Z level=DEBUG source=sched.go:262 msg="after processing request finished event" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 refCount=0
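For what it's worth, the log above does show the settings taking effect: the runner is started with `--parallel 4` and the llama server initializes with `"n_slots":4`. A quick way to spot both markers in a captured log (a sketch; the sample lines below stand in for `docker logs ollama` output):

```shell
# Sample lines standing in for captured `docker logs ollama` output.
log='level=INFO source=server.go:289 msg="starting llama server" cmd="... --parallel 4 --port 37117"
{"function":"initialize","msg":"initializing slots","n_slots":4}'

# Extract the two markers that confirm OLLAMA_NUM_PARALLEL was applied.
printf '%s\n' "$log" | grep -Eo -- '--parallel [0-9]+|"n_slots":[0-9]+'
```

If both markers show the expected count, the env variables reached the server and the remaining question is why the requests themselves are not being served concurrently.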
file=/tmp/ollama2318660654/runners/cpu_avx ollama | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cpu_avx2 ollama | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/cuda_v11 ollama | time=2024-05-03T03:02:18.865Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama2318660654/runners/rocm_v60002 ollama | time=2024-05-03T03:02:18.865Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" ollama | time=2024-05-03T03:02:18.867Z level=INFO source=server.go:289 msg="starting llama server" cmd="/tmp/ollama2318660654/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 2048 --batch-size 512 --embedding --log-format json --n-gpu-layers 33 --verbose --parallel 4 --port 37117" ollama | time=2024-05-03T03:02:18.867Z level=DEBUG source=server.go:291 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=9a2cbc148bdd OLLAMA_MAX_LOADED_MODELS=4 OLLAMA_DEBUG=1 OLLAMA_NUM_PARALLEL=4 OLLAMA_HOST=0.0.0.0 LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_DRIVER_CAPABILITIES=compute,utility NVIDIA_VISIBLE_DEVICES=all HOME=/root LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/tmp/ollama2318660654/runners/cuda_v11 CUDA_VISIBLE_DEVICES=GPU-797bde1a-f350-3a7e-cf25-09fa106f1039]" ollama | time=2024-05-03T03:02:18.868Z level=INFO source=sched.go:340 msg="loaded runners" count=1 ollama | time=2024-05-03T03:02:18.868Z level=INFO source=server.go:432 msg="waiting for llama runner to start responding" ollama | {"function":"server_params_parse","level":"WARN","line":2497,"msg":"server.cpp is not built with verbose logging.","tid":"139698153259008","timestamp":1714705338} ollama | 
{"build":1,"commit":"952d03d","function":"main","level":"INFO","line":2822,"msg":"build info","tid":"139698153259008","timestamp":1714705338} ollama | {"function":"main","level":"INFO","line":2825,"msg":"system info","n_threads":10,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"139698153259008","timestamp":1714705338,"total_threads":20} ollama | llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest)) ollama | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. ollama | llama_model_loader: - kv 0: general.architecture str = llama ollama | llama_model_loader: - kv 1: general.name str = LLaMA v2 ollama | llama_model_loader: - kv 2: llama.context_length u32 = 4096 ollama | llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 ollama | llama_model_loader: - kv 4: llama.block_count u32 = 32 ollama | llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008 ollama | llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 ollama | llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 ollama | llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32 ollama | llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 ollama | llama_model_loader: - kv 10: general.file_type u32 = 2 ollama | llama_model_loader: - kv 11: tokenizer.ggml.model str = llama ollama | llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<... ollama | llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... 
ollama | llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... ollama | llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n... ollama | llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 ollama | llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 ollama | llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 ollama | llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true ollama | llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false ollama | llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'... ollama | llama_model_loader: - kv 22: general.quantization_version u32 = 2 ollama | llama_model_loader: - type f32: 65 tensors ollama | llama_model_loader: - type q4_0: 225 tensors ollama | llama_model_loader: - type q6_K: 1 tensors ollama | llm_load_vocab: special tokens definition check successful ( 259/32000 ). 
ollama | llm_load_print_meta: format = GGUF V3 (latest) ollama | llm_load_print_meta: arch = llama ollama | llm_load_print_meta: vocab type = SPM ollama | llm_load_print_meta: n_vocab = 32000 ollama | llm_load_print_meta: n_merges = 0 ollama | llm_load_print_meta: n_ctx_train = 4096 ollama | llm_load_print_meta: n_embd = 4096 ollama | llm_load_print_meta: n_head = 32 ollama | llm_load_print_meta: n_head_kv = 32 ollama | llm_load_print_meta: n_layer = 32 ollama | llm_load_print_meta: n_rot = 128 ollama | llm_load_print_meta: n_embd_head_k = 128 ollama | llm_load_print_meta: n_embd_head_v = 128 ollama | llm_load_print_meta: n_gqa = 1 ollama | llm_load_print_meta: n_embd_k_gqa = 4096 ollama | llm_load_print_meta: n_embd_v_gqa = 4096 ollama | llm_load_print_meta: f_norm_eps = 0.0e+00 ollama | llm_load_print_meta: f_norm_rms_eps = 1.0e-05 ollama | llm_load_print_meta: f_clamp_kqv = 0.0e+00 ollama | llm_load_print_meta: f_max_alibi_bias = 0.0e+00 ollama | llm_load_print_meta: f_logit_scale = 0.0e+00 ollama | llm_load_print_meta: n_ff = 11008 ollama | llm_load_print_meta: n_expert = 0 ollama | llm_load_print_meta: n_expert_used = 0 ollama | llm_load_print_meta: causal attn = 1 ollama | llm_load_print_meta: pooling type = 0 ollama | llm_load_print_meta: rope type = 0 ollama | llm_load_print_meta: rope scaling = linear ollama | llm_load_print_meta: freq_base_train = 10000.0 ollama | llm_load_print_meta: freq_scale_train = 1 ollama | llm_load_print_meta: n_yarn_orig_ctx = 4096 ollama | llm_load_print_meta: rope_finetuned = unknown ollama | llm_load_print_meta: ssm_d_conv = 0 ollama | llm_load_print_meta: ssm_d_inner = 0 ollama | llm_load_print_meta: ssm_d_state = 0 ollama | llm_load_print_meta: ssm_dt_rank = 0 ollama | llm_load_print_meta: model type = 7B ollama | llm_load_print_meta: model ftype = Q4_0 ollama | llm_load_print_meta: model params = 6.74 B ollama | llm_load_print_meta: model size = 3.56 GiB (4.54 BPW) ollama | llm_load_print_meta: general.name = LLaMA v2 
ollama | llm_load_print_meta: BOS token = 1 '<s>' ollama | llm_load_print_meta: EOS token = 2 '</s>' ollama | llm_load_print_meta: UNK token = 0 '<unk>' ollama | llm_load_print_meta: LF token = 13 '<0x0A>' ollama | ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes ollama | ggml_cuda_init: CUDA_USE_TENSOR_CORES: no ollama | ggml_cuda_init: found 1 CUDA devices: ollama | Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes ollama | llm_load_tensors: ggml ctx size = 0.30 MiB ollama | time=2024-05-03T03:02:19.119Z level=DEBUG source=server.go:466 msg="server not yet available" error="server not responding" ollama | llm_load_tensors: offloading 32 repeating layers to GPU ollama | llm_load_tensors: offloading non-repeating layers to GPU ollama | llm_load_tensors: offloaded 33/33 layers to GPU ollama | llm_load_tensors: CPU buffer size = 70.31 MiB ollama | llm_load_tensors: CUDA0 buffer size = 3577.56 MiB ollama | ............................................................................................time=2024-05-03T03:02:33.166Z level=DEBUG source=server.go:466 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:37117/health\": dial tcp 127.0.0.1:37117: i/o timeout" ollama | .time=2024-05-03T03:02:33.367Z level=DEBUG source=server.go:466 msg="server not yet available" error="server not responding" ollama | ..... 
ollama | llama_new_context_with_model: n_ctx = 2048 ollama | llama_new_context_with_model: n_batch = 512 ollama | llama_new_context_with_model: n_ubatch = 512 ollama | llama_new_context_with_model: freq_base = 10000.0 ollama | llama_new_context_with_model: freq_scale = 1 ollama | llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB ollama | llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB ollama | llama_new_context_with_model: CUDA_Host output buffer size = 0.55 MiB ollama | llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB ollama | llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB ollama | llama_new_context_with_model: graph nodes = 1030 ollama | llama_new_context_with_model: graph splits = 2 ollama | {"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":4,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":0,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":1,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":2,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":512,"slot_id":3,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"main","level":"INFO","line":3067,"msg":"model loaded","tid":"139698153259008","timestamp":1714705355} ollama | {"function":"main","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"19","port":"37117","tid":"139698153259008","timestamp":1714705355} ollama | {"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear 
the KV cache","tid":"139698153259008","timestamp":1714705355} ollama | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":0,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":1,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":3,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":2,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49552,"status":200,"tid":"139697310412800","timestamp":1714705355} ollama | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49558,"status":200,"tid":"139696954535936","timestamp":1714705355} ollama | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49556,"status":200,"tid":"139696962928640","timestamp":1714705355} ollama | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":4,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49540,"status":200,"tid":"139696747270144","timestamp":1714705355} ollama | 
{"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49568,"status":200,"tid":"139696946143232","timestamp":1714705355} ollama | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":5,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":49574,"status":200,"tid":"139696937750528","timestamp":1714705355} ollama | {"function":"process_single_task","level":"INFO","line":1509,"msg":"slot data","n_idle_slots":4,"n_processing_slots":0,"task_id":6,"tid":"139698153259008","timestamp":1714705355} ollama | {"function":"log_server_request","level":"INFO","line":2737,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":51238,"status":200,"tid":"139696920965120","timestamp":1714705355} ollama | time=2024-05-03T03:02:35.575Z level=DEBUG source=server.go:477 msg="llama runner started in 16.706584 seconds" ollama | time=2024-05-03T03:02:35.575Z level=DEBUG source=sched.go:353 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 ollama | [GIN] 2024/05/03 - 03:02:35 | 200 | 17.035618065s | 127.0.0.1 | POST "/api/generate" ollama | time=2024-05-03T03:02:35.575Z level=DEBUG source=sched.go:357 msg="context for request finished" ollama | time=2024-05-03T03:02:35.576Z level=DEBUG source=sched.go:246 msg="runner with non-zero duration has gone idle, adding timer" model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 duration=5m0s ollama | time=2024-05-03T03:02:35.576Z level=DEBUG source=sched.go:262 msg="after processing request finished event" 
model=/root/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 refCount=0
```

@dhiltgen commented on GitHub (May 3, 2024):

The log only seems to show a single request. In debug mode, there should be additional `source=sched.go` messages when concurrent requests come in, explaining the decisions the scheduler makes about each request.

<!-- gh-comment-id:2093531220 -->

@terrabys commented on GitHub (May 4, 2024):

Same here. Furthermore, `OLLAMA_NUM_PARALLEL` does seem to work: if I load the same model twice, I get simultaneous responses from both instances. But if I queue model A, then model B, then model A again, the requests run sequentially.

Docker compose:
```yaml
name: ollama
services:
  ollama:
    environment:
      OLLAMA_MAX_LOADED_MODELS: "3"
      OLLAMA_NUM_PARALLEL: "3"
      OLLAMA_DEBUG: "1"
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
    volumes:
      - ollama:/root/.ollama
    ports:
      - 11434:11434
    container_name: ollama
    image: ollama/ollama:latest

volumes:
  ollama:
    external: true
    name: ollama
```

Ollama Debug:
[ollamaDebug.log](https://github.com/ollama/ollama/files/15209192/ollamaDebug.log)

<!-- gh-comment-id:2094094206 -->

@dhiltgen commented on GitHub (May 4, 2024):

@terrabys the current algorithm for multiple models requires the models to fully fit into available VRAM. We haven't exposed a generalized ability to allow partial offloading, since that can have such a large performance impact. The second model you're attempting to load doesn't fit in the available VRAM without first unloading the already loaded model.

A few log lines excerpted:

```
ollama  | time=2024-05-04T09:17:51.234Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama1166710289/runners/cuda_v11/libcudart.so.11.0 count=1
...
ollama  | time=2024-05-04T09:17:51.450Z level=INFO source=sched.go:407 msg="updated VRAM" gpu=GPU-67e7bd91-f714-8591-03f5-1129faba2fb1 library=cuda total="8191.5 MiB" available="4652.6 MiB"
...
ollama  | time=2024-05-04T09:17:51.450Z level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=32 memory.available="4652.6 MiB" memory.required.full="4724.6 MiB" memory.required.partial="4639.0 MiB" memory.required.kv="256.0 MiB" memory.weights.total="3847.6 MiB" memory.weights.repeating="3745.0 MiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="181.0 MiB"
```

So the GPU has 4652.6 MiB space remaining with the first model loaded, but the new model will require 4724.6 MiB, thus we unload the first before proceeding.
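The decision can be sketched in a few lines (this is an illustration of the rule described above, not Ollama's actual scheduler code; the MiB values come straight from the excerpted log):

```python
def fits_in_vram(available_mib: float, required_full_mib: float) -> bool:
    """A second model is only loaded alongside the first if it fully fits
    in the remaining VRAM; otherwise the first model is unloaded."""
    return required_full_mib <= available_mib

# With the first model loaded, 4652.6 MiB remain free, but the
# second model's memory.required.full is 4724.6 MiB:
print(fits_in_vram(4652.6, 4724.6))  # False -> unload the first model
```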

Note: the scheduler should take the num_gpu setting into consideration, so if you want to experiment with secondary partially loaded models you can force a reduced num_gpu.

<!-- gh-comment-id:2094370342 -->

@terrabys commented on GitHub (May 5, 2024):

@dhiltgen Thanks for the explanation.

<!-- gh-comment-id:2094667653 -->

@xcjs commented on GitHub (May 19, 2024):

@dhiltgen Are there plans to allow parallel models to span VRAM and system memory?

<!-- gh-comment-id:2119114583 -->

@dhiltgen commented on GitHub (May 20, 2024):

@xcjs we don't currently have plans to support a generalized approach for this, but you should be able to more or less force this behavior by setting `num_gpu` when you load the later model to a value below 100% of the layers, at a value that will fit in the available VRAM. You'll need to experiment a bit to try it out (I'd start with a small number and increment).
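For example, `num_gpu` (the number of layers to offload to the GPU) can be passed per request in the `options` object of an API call. A sketch of a request body for `POST /api/generate` — the model name and layer count here are placeholders you'd tune for your hardware:

```json
{
  "model": "llama2",
  "prompt": "Hello",
  "options": {
    "num_gpu": 20
  }
}
```

Starting small and incrementing, as suggested above, lets you find the largest `num_gpu` that still leaves the first model resident in VRAM.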

<!-- gh-comment-id:2121220263 -->

@flytzen commented on GitHub (May 21, 2024):

Just in case anyone else ends up here: it turns out `OLLAMA_NUM_PARALLEL` also increases memory consumption. If you are struggling to fit multiple models into memory even though you think they should fit, try changing `OLLAMA_NUM_PARALLEL` and use `ollama ps` to see how it affects each model's memory usage.
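The extra memory is largely the KV cache, which is sized for the full context that gets divided across the parallel slots (in the log above, `--ctx-size 2048` with `--parallel 4` yields four 512-token slots). A back-of-envelope sketch, using the `llm_load_print_meta` values from the log (n_layer=32, n_embd_k_gqa=4096, f16 cache) rather than Ollama's actual code:

```python
def kv_cache_mib(n_layer: int, n_ctx: int, n_embd_gqa: int,
                 bytes_per_elt: int = 2) -> float:
    """Approximate K + V cache size for an f16 cache, in MiB."""
    per_tensor = n_layer * n_ctx * n_embd_gqa * bytes_per_elt  # K or V alone
    return 2 * per_tensor / (1024 ** 2)                        # K + V

# ctx 2048 (4 parallel slots x 512 tokens each):
print(kv_cache_mib(32, 2048, 4096))  # 1024.0, matching "KV self size = 1024.00 MiB"
```

So quadrupling the parallel slots (at a fixed per-slot context) quadruples the KV cache, which is why `ollama ps` shows each model growing.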

<!-- gh-comment-id:2122889842 -->

@dhiltgen commented on GitHub (Jun 21, 2024):

I don't believe there are any outstanding problems now. @BBjie if you're still having trouble getting concurrency to work, please share an updated log with your settings so I can take another look.

<!-- gh-comment-id:2183574653 -->

Reference: github-starred/ollama#64587