[GH-ISSUE #6398] When running ollama via docker, it won't respond to any request by API-call or python-client-library #29780

Closed
opened 2026-04-22 09:00:07 -05:00 by GiteaMirror · 26 comments
Owner

Originally created by @itinance on GitHub (Aug 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6398

What is the issue?

I set up the NVIDIA Docker toolkit successfully on my Ubuntu 22 machine with an RTX-4000, and I start Ollama as a Docker container with port 11434 exposed:

docker run -d --gpus=all --env OLLAMA_NUM_PARALLEL=1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

After that, "docker ps" shows:

CONTAINER ID   IMAGE           COMMAND               CREATED          STATUS          PORTS                      NAMES
bb799064d233   ollama/ollama   "/bin/ollama serve"   38 minutes ago   Up 38 minutes   0.0.0.0:11434->11434/tcp   ollama

Starting a conversation in the CLI works perfectly:

docker exec -it ollama ollama run llama3

>>> hello
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?

But when I make a curl request (or use the Python client library for Ollama), it hangs forever:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt":"Why is the sky blue?"
}'
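
A quick way to check whether the server is reachable at all (a minimal sketch, assuming the default port mapping shown above) is to hit a lightweight endpoint with verbose output:

# Verbose output shows whether the TCP connection is even established or the request stalls
curl -v http://localhost:11434/api/version

# /api/tags lists the locally available models and is a cheap connectivity check
curl http://localhost:11434/api/tags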

Open Ports:

root@Ubuntu-2204-jammy-amd64-base ~ # sudo netstat -tulpn | grep LISTEN
tcp        0      0 0.0.0.0:11434           0.0.0.0:*               LISTEN      320493/docker-proxy
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      300450/nginx: maste
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      299159/sshd: /usr/s
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      299170/systemd-reso
tcp6       0      0 :::80                   :::*                    LISTEN      300450/nginx: maste
tcp6       0      0 :::22                   :::*                    LISTEN      299159/sshd: /usr/s

This works again when I start the Ollama service directly on the machine, installed via:

curl -fsSL https://ollama.com/install.sh | sh

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.6

GiteaMirror added the bug label 2026-04-22 09:00:07 -05:00
Author
Owner

@rick-github commented on GitHub (Aug 17, 2024):

What do logs show: docker logs ollama

Author
Owner

@itinance commented on GitHub (Aug 17, 2024):

At this time, not a single line is logged. Probably the request never reaches the Docker container?

Author
Owner

@rick-github commented on GitHub (Aug 17, 2024):

Logs will show the environment that the ollama instance is running in.

Author
Owner

@itinance commented on GitHub (Aug 17, 2024):

Ah, okay, so here is the full log output, @rick-github:

ollama-1  | Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
ollama-1  | Your new public key is:
ollama-1  |
ollama-1  | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMXlosdDSzQfYwTPTrXmAm0valBoqxkbW9YFnhYOthHE
ollama-1  |
ollama-1  | 2024/08/17 14:11:10 routes.go:1123: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
ollama-1  | time=2024-08-17T14:11:10.852Z level=INFO source=images.go:782 msg="total blobs: 0"
ollama-1  | time=2024-08-17T14:11:10.852Z level=INFO source=images.go:790 msg="total unused blobs removed: 0"
ollama-1  | time=2024-08-17T14:11:10.852Z level=INFO source=routes.go:1170 msg="Listening on [::]:11434 (version 0.3.5)"
ollama-1  | time=2024-08-17T14:11:10.853Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2916392341/runners
ollama-1  | time=2024-08-17T14:11:13.620Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 rocm_v60102 cpu cpu_avx]"
ollama-1  | time=2024-08-17T14:11:13.620Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
ollama-1  | time=2024-08-17T14:11:13.620Z level=INFO source=gpu.go:350 msg="no compatible GPUs were discovered"
ollama-1  | time=2024-08-17T14:11:13.621Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="62.6 GiB" available="61.1 GiB"
ollama-1  | [GIN] 2024/08/17 - 14:11:40 | 200 |      33.017µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:12:10 | 200 |      17.141µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:12:27 | 200 |      14.109µs |       127.0.0.1 | HEAD     "/"
ollama-1  | [GIN] 2024/08/17 - 14:12:27 | 404 |      123.13µs |       127.0.0.1 | POST     "/api/show"
ollama-1  | time=2024-08-17T14:12:28.737Z level=INFO source=download.go:175 msg="downloading 6a0746a1ec1a in 47 100 MB part(s)"
ollama-1  | [GIN] 2024/08/17 - 14:12:40 | 200 |      33.994µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | time=2024-08-17T14:13:10.759Z level=INFO source=download.go:175 msg="downloading 4fa551d4f938 in 1 12 KB part(s)"
ollama-1  | [GIN] 2024/08/17 - 14:13:11 | 200 |      19.436µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | time=2024-08-17T14:13:12.715Z level=INFO source=download.go:175 msg="downloading 8ab4849b038c in 1 254 B part(s)"
ollama-1  | time=2024-08-17T14:13:14.800Z level=INFO source=download.go:175 msg="downloading 577073ffcc6c in 1 110 B part(s)"
ollama-1  | time=2024-08-17T14:13:16.811Z level=INFO source=download.go:175 msg="downloading 3f8eb4da87fa in 1 485 B part(s)"
ollama-1  | [GIN] 2024/08/17 - 14:13:20 | 200 | 53.040699134s |       127.0.0.1 | POST     "/api/pull"
ollama-1  | [GIN] 2024/08/17 - 14:13:20 | 200 |   10.223286ms |       127.0.0.1 | POST     "/api/show"
ollama-1  | time=2024-08-17T14:13:20.284Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[61.1 GiB]" memory.required.full="4.6 GiB" memory.required.partial="0 B" memory.required.kv="256.0 MiB" memory.required.allocations="[4.6 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
ollama-1  | time=2024-08-17T14:13:20.285Z level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama2916392341/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 2048 --batch-size 512 --embedding --log-disable --no-mmap --parallel 1 --port 37147"
ollama-1  | time=2024-08-17T14:13:20.286Z level=INFO source=sched.go:445 msg="loaded runners" count=1
ollama-1  | time=2024-08-17T14:13:20.286Z level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
ollama-1  | time=2024-08-17T14:13:20.286Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
ollama-1  | INFO [main] build info | build=1 commit="1e6f655" tid="140647703275392" timestamp=1723904000
ollama-1  | INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140647703275392" timestamp=1723904000 total_threads=20
ollama-1  | INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="37147" tid="140647703275392" timestamp=1723904000
ollama-1  | llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
ollama-1  | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama-1  | llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama-1  | llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
ollama-1  | llama_model_loader: - kv   2:                          llama.block_count u32              = 32
ollama-1  | llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
ollama-1  | llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
ollama-1  | llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
ollama-1  | llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
ollama-1  | llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
ollama-1  | llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
ollama-1  | llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama-1  | llama_model_loader: - kv  10:                          general.file_type u32              = 2
ollama-1  | llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
ollama-1  | llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
ollama-1  | llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
ollama-1  | llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
ollama-1  | llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama-1  | llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama-1  | llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
ollama-1  | llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
ollama-1  | llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
ollama-1  | llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
ollama-1  | llama_model_loader: - kv  21:               general.quantization_version u32              = 2
ollama-1  | llama_model_loader: - type  f32:   65 tensors
ollama-1  | llama_model_loader: - type q4_0:  225 tensors
ollama-1  | llama_model_loader: - type q6_K:    1 tensors
ollama-1  | time=2024-08-17T14:13:20.537Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
ollama-1  | llm_load_vocab: special tokens cache size = 256
ollama-1  | llm_load_vocab: token to piece cache size = 0.8000 MB
ollama-1  | llm_load_print_meta: format           = GGUF V3 (latest)
ollama-1  | llm_load_print_meta: arch             = llama
ollama-1  | llm_load_print_meta: vocab type       = BPE
ollama-1  | llm_load_print_meta: n_vocab          = 128256
ollama-1  | llm_load_print_meta: n_merges         = 280147
ollama-1  | llm_load_print_meta: vocab_only       = 0
ollama-1  | llm_load_print_meta: n_ctx_train      = 8192
ollama-1  | llm_load_print_meta: n_embd           = 4096
ollama-1  | llm_load_print_meta: n_layer          = 32
ollama-1  | llm_load_print_meta: n_head           = 32
ollama-1  | llm_load_print_meta: n_head_kv        = 8
ollama-1  | llm_load_print_meta: n_rot            = 128
ollama-1  | llm_load_print_meta: n_swa            = 0
ollama-1  | llm_load_print_meta: n_embd_head_k    = 128
ollama-1  | llm_load_print_meta: n_embd_head_v    = 128
ollama-1  | llm_load_print_meta: n_gqa            = 4
ollama-1  | llm_load_print_meta: n_embd_k_gqa     = 1024
ollama-1  | llm_load_print_meta: n_embd_v_gqa     = 1024
ollama-1  | llm_load_print_meta: f_norm_eps       = 0.0e+00
ollama-1  | llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
ollama-1  | llm_load_print_meta: f_clamp_kqv      = 0.0e+00
ollama-1  | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama-1  | llm_load_print_meta: f_logit_scale    = 0.0e+00
ollama-1  | llm_load_print_meta: n_ff             = 14336
ollama-1  | llm_load_print_meta: n_expert         = 0
ollama-1  | llm_load_print_meta: n_expert_used    = 0
ollama-1  | llm_load_print_meta: causal attn      = 1
ollama-1  | llm_load_print_meta: pooling type     = 0
ollama-1  | llm_load_print_meta: rope type        = 0
ollama-1  | llm_load_print_meta: rope scaling     = linear
ollama-1  | llm_load_print_meta: freq_base_train  = 500000.0
ollama-1  | llm_load_print_meta: freq_scale_train = 1
ollama-1  | llm_load_print_meta: n_ctx_orig_yarn  = 8192
ollama-1  | llm_load_print_meta: rope_finetuned   = unknown
ollama-1  | llm_load_print_meta: ssm_d_conv       = 0
ollama-1  | llm_load_print_meta: ssm_d_inner      = 0
ollama-1  | llm_load_print_meta: ssm_d_state      = 0
ollama-1  | llm_load_print_meta: ssm_dt_rank      = 0
ollama-1  | llm_load_print_meta: model type       = 8B
ollama-1  | llm_load_print_meta: model ftype      = Q4_0
ollama-1  | llm_load_print_meta: model params     = 8.03 B
ollama-1  | llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
ollama-1  | llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
ollama-1  | llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
ollama-1  | llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
ollama-1  | llm_load_print_meta: LF token         = 128 'Ä'
ollama-1  | llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
ollama-1  | llm_load_print_meta: max token length = 256
ollama-1  | llm_load_tensors: ggml ctx size =    0.14 MiB
ollama-1  | llm_load_tensors:        CPU buffer size =  4437.80 MiB
ollama-1  | llama_new_context_with_model: n_ctx      = 2048
ollama-1  | llama_new_context_with_model: n_batch    = 512
ollama-1  | llama_new_context_with_model: n_ubatch   = 512
ollama-1  | llama_new_context_with_model: flash_attn = 0
ollama-1  | llama_new_context_with_model: freq_base  = 500000.0
ollama-1  | llama_new_context_with_model: freq_scale = 1
ollama-1  | llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
ollama-1  | llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
ollama-1  | llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
ollama-1  | llama_new_context_with_model:        CPU compute buffer size =   258.50 MiB
ollama-1  | llama_new_context_with_model: graph nodes  = 1030
ollama-1  | llama_new_context_with_model: graph splits = 1
ollama-1  | INFO [main] model loaded | tid="140647703275392" timestamp=1723904002
ollama-1  | time=2024-08-17T14:13:22.547Z level=INFO source=server.go:632 msg="llama runner started in 2.26 seconds"
ollama-1  | [GIN] 2024/08/17 - 14:13:22 | 200 |  2.287724508s |       127.0.0.1 | POST     "/api/chat"
ollama-1  | [GIN] 2024/08/17 - 14:13:28 | 200 |  3.168350244s |       127.0.0.1 | POST     "/api/chat"
ollama-1  | [GIN] 2024/08/17 - 14:13:38 | 200 |   3.79428092s |       127.0.0.1 | POST     "/api/chat"
ollama-1  | [GIN] 2024/08/17 - 14:13:41 | 200 |      24.033µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:13:44 | 200 |      12.613µs |       127.0.0.1 | HEAD     "/"
ollama-1  | [GIN] 2024/08/17 - 14:13:44 | 404 |      49.385µs |       127.0.0.1 | POST     "/api/show"
ollama-1  | time=2024-08-17T14:13:52.282Z level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 47 100 MB part(s)"
ollama-1  | time=2024-08-17T14:14:08.403Z level=INFO source=download.go:370 msg="8eeb52dfb3bb part 15 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
ollama-1  | [GIN] 2024/08/17 - 14:14:11 | 200 |      25.932µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | time=2024-08-17T14:14:34.286Z level=INFO source=download.go:175 msg="downloading 11ce4ee3e170 in 1 1.7 KB part(s)"
ollama-1  | time=2024-08-17T14:14:36.294Z level=INFO source=download.go:175 msg="downloading 0ba8f0e314b4 in 1 12 KB part(s)"
ollama-1  | time=2024-08-17T14:14:38.347Z level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)"
ollama-1  | time=2024-08-17T14:14:40.417Z level=INFO source=download.go:175 msg="downloading 1a4c3c319823 in 1 485 B part(s)"
ollama-1  | [GIN] 2024/08/17 - 14:14:41 | 200 |       25.61µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:14:43 | 200 | 59.027329224s |       127.0.0.1 | POST     "/api/pull"
ollama-1  | [GIN] 2024/08/17 - 14:14:44 | 200 |  153.920658ms |       127.0.0.1 | POST     "/api/show"
ollama-1  | time=2024-08-17T14:14:44.242Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[61.1 GiB]" memory.required.full="4.6 GiB" memory.required.partial="0 B" memory.required.kv="256.0 MiB" memory.required.allocations="[4.6 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
ollama-1  | time=2024-08-17T14:14:44.243Z level=INFO source=server.go:393 msg="starting llama server" cmd="/tmp/ollama2916392341/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 2048 --batch-size 512 --embedding --log-disable --no-mmap --parallel 1 --port 45319"
ollama-1  | time=2024-08-17T14:14:44.243Z level=INFO source=sched.go:445 msg="loaded runners" count=1
ollama-1  | time=2024-08-17T14:14:44.243Z level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
ollama-1  | time=2024-08-17T14:14:44.243Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
ollama-1  | INFO [main] build info | build=1 commit="1e6f655" tid="139997298423680" timestamp=1723904084
ollama-1  | INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139997298423680" timestamp=1723904084 total_threads=20
ollama-1  | INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="45319" tid="139997298423680" timestamp=1723904084
ollama-1  | llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
ollama-1  | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama-1  | llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama-1  | llama_model_loader: - kv   1:                               general.type str              = model
ollama-1  | llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
ollama-1  | llama_model_loader: - kv   3:                           general.finetune str              = Instruct
ollama-1  | llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
ollama-1  | llama_model_loader: - kv   5:                         general.size_label str              = 8B
ollama-1  | llama_model_loader: - kv   6:                            general.license str              = llama3.1
ollama-1  | llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
ollama-1  | llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
ollama-1  | llama_model_loader: - kv   9:                          llama.block_count u32              = 32
ollama-1  | llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
ollama-1  | llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
ollama-1  | llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
ollama-1  | llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
ollama-1  | llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
ollama-1  | llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
ollama-1  | llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama-1  | llama_model_loader: - kv  17:                          general.file_type u32              = 2
ollama-1  | llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
ollama-1  | llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
ollama-1  | llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
ollama-1  | llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
ollama-1  | llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama-1  | llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama-1  | llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
ollama-1  | llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
ollama-1  | llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
ollama-1  | llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
ollama-1  | llama_model_loader: - kv  28:               general.quantization_version u32              = 2
ollama-1  | llama_model_loader: - type  f32:   66 tensors
ollama-1  | llama_model_loader: - type q4_0:  225 tensors
ollama-1  | llama_model_loader: - type q6_K:    1 tensors
ollama-1  | time=2024-08-17T14:14:44.494Z level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
ollama-1  | llm_load_vocab: special tokens cache size = 256
ollama-1  | llm_load_vocab: token to piece cache size = 0.7999 MB
ollama-1  | llm_load_print_meta: format           = GGUF V3 (latest)
ollama-1  | llm_load_print_meta: arch             = llama
ollama-1  | llm_load_print_meta: vocab type       = BPE
ollama-1  | llm_load_print_meta: n_vocab          = 128256
ollama-1  | llm_load_print_meta: n_merges         = 280147
ollama-1  | llm_load_print_meta: vocab_only       = 0
ollama-1  | llm_load_print_meta: n_ctx_train      = 131072
ollama-1  | llm_load_print_meta: n_embd           = 4096
ollama-1  | llm_load_print_meta: n_layer          = 32
ollama-1  | llm_load_print_meta: n_head           = 32
ollama-1  | llm_load_print_meta: n_head_kv        = 8
ollama-1  | llm_load_print_meta: n_rot            = 128
ollama-1  | llm_load_print_meta: n_swa            = 0
ollama-1  | llm_load_print_meta: n_embd_head_k    = 128
ollama-1  | llm_load_print_meta: n_embd_head_v    = 128
ollama-1  | llm_load_print_meta: n_gqa            = 4
ollama-1  | llm_load_print_meta: n_embd_k_gqa     = 1024
ollama-1  | llm_load_print_meta: n_embd_v_gqa     = 1024
ollama-1  | llm_load_print_meta: f_norm_eps       = 0.0e+00
ollama-1  | llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
ollama-1  | llm_load_print_meta: f_clamp_kqv      = 0.0e+00
ollama-1  | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama-1  | llm_load_print_meta: f_logit_scale    = 0.0e+00
ollama-1  | llm_load_print_meta: n_ff             = 14336
ollama-1  | llm_load_print_meta: n_expert         = 0
ollama-1  | llm_load_print_meta: n_expert_used    = 0
ollama-1  | llm_load_print_meta: causal attn      = 1
ollama-1  | llm_load_print_meta: pooling type     = 0
ollama-1  | llm_load_print_meta: rope type        = 0
ollama-1  | llm_load_print_meta: rope scaling     = linear
ollama-1  | llm_load_print_meta: freq_base_train  = 500000.0
ollama-1  | llm_load_print_meta: freq_scale_train = 1
ollama-1  | llm_load_print_meta: n_ctx_orig_yarn  = 131072
ollama-1  | llm_load_print_meta: rope_finetuned   = unknown
ollama-1  | llm_load_print_meta: ssm_d_conv       = 0
ollama-1  | llm_load_print_meta: ssm_d_inner      = 0
ollama-1  | llm_load_print_meta: ssm_d_state      = 0
ollama-1  | llm_load_print_meta: ssm_dt_rank      = 0
ollama-1  | llm_load_print_meta: model type       = 8B
ollama-1  | llm_load_print_meta: model ftype      = Q4_0
ollama-1  | llm_load_print_meta: model params     = 8.03 B
ollama-1  | llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
ollama-1  | llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
ollama-1  | llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
ollama-1  | llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
ollama-1  | llm_load_print_meta: LF token         = 128 'Ä'
ollama-1  | llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
ollama-1  | llm_load_print_meta: max token length = 256
ollama-1  | llm_load_tensors: ggml ctx size =    0.14 MiB
ollama-1  | llm_load_tensors:        CPU buffer size =  4437.81 MiB
ollama-1  | llama_new_context_with_model: n_ctx      = 2048
ollama-1  | llama_new_context_with_model: n_batch    = 512
ollama-1  | llama_new_context_with_model: n_ubatch   = 512
ollama-1  | llama_new_context_with_model: flash_attn = 0
ollama-1  | llama_new_context_with_model: freq_base  = 500000.0
ollama-1  | llama_new_context_with_model: freq_scale = 1
ollama-1  | llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
ollama-1  | llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
ollama-1  | llama_new_context_with_model:        CPU  output buffer size =     0.50 MiB
ollama-1  | llama_new_context_with_model:        CPU compute buffer size =   258.50 MiB
ollama-1  | llama_new_context_with_model: graph nodes  = 1030
ollama-1  | llama_new_context_with_model: graph splits = 1
ollama-1  | INFO [main] model loaded | tid="139997298423680" timestamp=1723904086
ollama-1  | time=2024-08-17T14:14:46.253Z level=INFO source=server.go:632 msg="llama runner started in 2.01 seconds"
ollama-1  | [GIN] 2024/08/17 - 14:14:46 | 200 |  2.211858694s |       127.0.0.1 | POST     "/api/chat"
ollama-1  | [GIN] 2024/08/17 - 14:14:51 | 200 |  2.832885978s |       127.0.0.1 | POST     "/api/chat"
ollama-1  | [GIN] 2024/08/17 - 14:15:11 | 200 |       21.33µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:15:41 | 200 |      27.205µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:16:11 | 200 |      22.658µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:16:41 | 200 |      20.162µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:17:11 | 200 |       25.65µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:17:41 | 200 |      19.182µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:18:11 | 200 |      34.505µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:18:41 | 200 |      21.676µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:19:11 | 200 |      20.415µs |       127.0.0.1 | GET      "/api/version"
ollama-1  | [GIN] 2024/08/17 - 14:19:41 | 200 |      23.279µs |       127.0.0.1 | GET      "/api/version"
Author
Owner

@rick-github commented on GitHub (Aug 17, 2024):

ollama is bound to [::]:11434. Try adding --env OLLAMA_HOST=0.0.0.0:11434 to the docker command.

Author
Owner

@itinance commented on GitHub (Aug 17, 2024):

docker run -d --gpus=all  --env OLLAMA_HOST=0.0.0.0:11434 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

The curl request is still hanging.

Since not a single line is logged after the curl request is started, it seems to me like GIN is not listening on this port at all, or Docker is not exposing the port (although it does with other services that I use on other machines).

Author
Owner

@rick-github commented on GitHub (Aug 17, 2024):

The docker-proxy process that forwards TCP connections from the host to the container is listening on IPv4:

tcp        0      0 0.0.0.0:11434           0.0.0.0:*               LISTEN      320493/docker-proxy

ollama inside the container is listening on IPv6:

ollama-1  | time=2024-08-17T14:11:10.852Z level=INFO source=routes.go:1170 msg="Listening on [::]:11434 (version 0.3.5)"

These need to be aligned for ollama to receive the request from the client. If setting OLLAMA_HOST didn't work (I wasn't sure it would), you need to figure out why the docker proxy is not listening on IPv6. On my system, with ollama in docker:

$ netstat -pant | grep 11434
tcp        0      0 0.0.0.0:11434           0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::11434                :::*                    LISTEN      -                
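
One way to separate the two cases (a rough sketch, assuming the container is named ollama as in the original command) is to exercise the API from inside the container and then from the host:

# Inside the container: the ollama CLI talks to the API over loopback, bypassing docker-proxy
docker exec -it ollama ollama list

# From the host: this goes through the published port and the docker-proxy process
curl http://localhost:11434/api/version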
Author
Owner

@itinance commented on GitHub (Aug 17, 2024):

This is weird. I ran docker so that it publishes the IPv6 port:

```
docker run -d --gpus=all --env OLLAMA_HOST=0.0.0.0:11434 -v ollama:/root/.ollama -p [::]:11434:11434 --name ollama ollama/ollama
```

```
netstat -pant | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN      325678/docker-proxy
```

Still hanging.

I wonder why docker is such a problem on my machine while it seems to work for others...

@itinance commented on GitHub (Aug 17, 2024):

I tested both protocols:

```
docker run -d --gpus=all --env OLLAMA_HOST=0.0.0.0:11434 -v ollama:/root/.ollama -p [::]:11434:11434 -p 11434:11434 --name ollama ollama/ollama

netstat -pant | grep 11434
tcp        0      0 0.0.0.0:11434           0.0.0.0:*               LISTEN      325981/docker-proxy
tcp6       0      0 :::11434                :::*                    LISTEN      325988/docker-proxy
```

The curl request is still hanging; GIN is either not listening on the port or never receiving any network packets...

@itinance commented on GitHub (Aug 17, 2024):

Whoa... this was caused by the ufw firewall. Docker and ufw are not good friends. Here is a blog post that describes it in more detail: https://blog.jarrousse.org/2023/03/18/how-to-use-ufw-firewall-with-docker-containers/

However, my issue had nothing to do with ollama. Thanks everybody for helping!
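For anyone landing here with the same symptom, a commonly suggested workaround (a sketch, not verified on this particular machine) is to add a ufw route rule so that forwarded traffic to the published port is not dropped by ufw's default forward policy:

```
# hedged example: allow routed (forwarded) traffic to the published ollama port
sudo ufw route allow proto tcp from any to any port 11434
sudo ufw reload
```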

@ntelo007 commented on GitHub (Aug 27, 2024):

Hey guys, I am still facing a similar issue. My containerized API gets an error when trying to access the Ollama Docker container. The API is implemented with FastAPI and LangChain. Can someone please help me resolve it? I can upload more context if you want.

@rick-github commented on GitHub (Aug 27, 2024):

Plain docker or docker compose? If the former, what commands are you using to start the containers? If the latter, can you post your docker-compose.yaml? What connection string does your LangChain app use to connect to ollama? What error messages does it throw? Can you connect to ollama from outside of the LangChain app container?

@ntelo007 commented on GitHub (Aug 29, 2024):

Here is my docker-compose.yaml file:

```yaml
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ./data/ollama:/root/.ollama
    networks:
      - langchain-network

  langchain_app:
    image: langchain_app
    container_name: langchain_app
    ports:
      - "8501:8501"
    depends_on:
      - ollama
    networks:
      - langchain-network

volumes:
  ollama: {}

networks:
  langchain-network:
    driver: bridge
    ipam:
      config:
        - subnet: "10.0.0.0/19"
          gateway: "10.0.0.1"
```

and this is the connection string:

```
llm = ChatOllama(
    model='llama3-groq-tool-use',  # or 'llama3.1'
    base_url='http://ollama:11434',
    temperature=0,
    verbose=True
)
```

I can reach the Ollama container from the langchain container:

```
# curl http://ollama:11434
Ollama is running#
```

I can receive an API response from my langchain app if I don't containerize it.

The error I get when my app is containerized is the following:

```
{
  "error": "[Errno 111] Connection refused"
}
```

@rick-github commented on GitHub (Aug 29, 2024):

It looks like it should work. If you run `sudo tcpdump -i any port 11434` in a terminal and then trigger a text generation in your langchain app, where does your app try to connect?

@ntelo007 commented on GitHub (Aug 29, 2024):

I am using Windows 10.

@rick-github commented on GitHub (Aug 29, 2024):

If you replace `ollama` in ChatOllama with the IP address of the ollama container, does it work? (`docker network inspect langchain-network` to find the address.)

@ntelo007 commented on GitHub (Aug 29, 2024):

Do you mean the IPv4 address? Doesn't this change every time we initialize a multi-container application?

@rick-github commented on GitHub (Aug 29, 2024):

If you restart all containers, yes. If you change just the app and restart it (`docker compose up -d langchain_app`), the langchain container will get a new IP address but the ollama one will keep the address it was originally assigned.

My theory here is that name resolution in the langchain app is not returning the right address; replacing the container name with a hard-coded address will test that theory.
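A quick way to check that theory directly (a sketch; `getent` may be missing from slim images) is to look up the service name from inside the app container and compare it with the address docker assigned to the ollama container:

```
# what does the service name resolve to inside the app container?
docker exec -it langchain_app getent hosts ollama
# compare with the IPv4Address shown for the ollama container on the shared network
docker network inspect langchain-network
```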

@ntelo007 commented on GitHub (Aug 30, 2024):

It didn't work. I don't know if it's LangChain's bug or yours :(

@rick-github commented on GitHub (Aug 30, 2024):

Add this code to your app:

```
import logging

logging.basicConfig(
    format="%(levelname)s [%(asctime)s] %(name)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.DEBUG
)
```

Then search for `httpx` in the log; it will show where the app is trying to connect.

@ntelo007 commented on GitHub (Aug 30, 2024):

It continues to send the request to the wrong address:

```
langchain_app  | INFO:     Started server process [1]
langchain_app  | INFO:     Waiting for application startup.
langchain_app  | INFO [2024-08-30 09:05:39] main - Starting up FastAPI application
langchain_app  | INFO:     Application startup complete.
langchain_app  | INFO:     Uvicorn running on http://0.0.0.0:8501 (Press CTRL+C to quit)
langchain_app  | DEBUG [2024-08-30 09:05:39] urllib3.connectionpool - Starting new HTTPS connection (1): us-api.i.posthog.com:443
langchain_app  | DEBUG [2024-08-30 09:05:40] urllib3.connectionpool - https://us-api.i.posthog.com:443 "POST /batch/ HTTP/11" 200 15
langchain_app  | INFO:     172.18.0.1:60324 - "GET /docs HTTP/1.1" 200 OK
langchain_app  | INFO:     172.18.0.1:60324 - "GET /openapi.json HTTP/1.1" 200 OK
langchain_app  | INFO [2024-08-30 09:06:12] main - Received request: hey
langchain_app  | DEBUG [2024-08-30 09:06:12] main - Using base_url: http://ollama:11434
langchain_app  | DEBUG [2024-08-30 09:06:12] httpcore.connection - connect_tcp.started host='127.0.0.1' port=11434 local_address=None timeout=None socket_options=None
langchain_app  | DEBUG [2024-08-30 09:06:12] httpcore.connection - connect_tcp.failed exception=ConnectError(ConnectionRefusedError(111, 'Connection refused'))
langchain_app  | INFO:     172.18.0.1:57716 - "POST /pure_llm?request=hey HTTP/1.1" 200 OK
```

@rick-github commented on GitHub (Aug 30, 2024):

If you replace `ollama` in the base_url with the IP address of the ollama container, do the `httpcore.connection` log lines change? It's as if your app is ignoring base_url and using the built-in default of 127.0.0.1:11434. What version of langchain_ollama are you using (`pip show langchain_ollama langchain-core langchain` on Linux, I guess the same on Windows)?

@ntelo007 commented on GitHub (Aug 30, 2024):

It does indeed ignore the base_url. I modified it to use the IP address and it still used localhost (127.0.0.1:11434).

```
root@161d10d26a94:/app# pip show langchain_ollama
Name: langchain-ollama
Version: 0.1.0
Summary: An integration package connecting Ollama and LangChain
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /usr/local/lib/python3.12/site-packages
Requires: langchain-core, ollama
Required-by:
root@161d10d26a94:/app# pip show langchain
Name: langchain
Version: 0.2.11
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /usr/local/lib/python3.12/site-packages
Requires: aiohttp, langchain-core, langchain-text-splitters, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-community
root@161d10d26a94:/app# pip show langchain_core
Name: langchain-core
Version: 0.2.25
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /usr/local/lib/python3.12/site-packages
Requires: jsonpatch, langsmith, packaging, pydantic, PyYAML, tenacity
Required-by: langchain, langchain-community, langchain-ollama, langchain-text-splitters, langgraph
```

@rick-github commented on GitHub (Aug 30, 2024):

What happens if you skip base_url altogether and set `OLLAMA_HOST` in the environment:

```yaml
services:
  langchain_app:
    image: langchain_app
    container_name: langchain_app
    environment:
      - OLLAMA_HOST=http://ollama:11434
    ports:
      - "8501:8501"
    depends_on:
      - ollama
    networks:
      - langchain-network
```

@ntelo007 commented on GitHub (Aug 30, 2024):

This worked!!!! Thank you so much!

@jeqele commented on GitHub (Nov 19, 2025):

Same problem on Windows; solved by running in PowerShell as administrator.

Reference: github-starred/ollama#29780