[GH-ISSUE #8346] Unable to run llama on IPv6 Single Stack env #5351

Closed
opened 2026-04-12 16:33:22 -05:00 by GiteaMirror · 6 comments

Originally created by @chaturvedi-kna on GitHub (Jan 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8346

What is the issue?

Hi Guys,

I am using Ollama on OpenShift (v4.16) with Open Data Hub. I followed the guide below and used the same image mentioned there:
https://github.com/rh-aiservices-bu/llm-on-openshift/tree/main/serving-runtimes/ollama_runtime

In ollama-runtime.yaml (https://github.com/rh-aiservices-bu/llm-on-openshift/blob/main/serving-runtimes/ollama_runtime/ollama-runtime.yaml) I changed the OLLAMA_HOST value to '::'.
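For reference, a hypothetical sketch of applying and checking that change from the CLI (resource names are assumptions based on the pod's env dump later in this thread; with KServe the env is normally edited in the ServingRuntime YAML rather than patched live):

$ oc -n chatur set env deployment/ollma-doc-predictor-00003-deployment OLLAMA_HOST='::'
# then confirm the bind from inside the pod (-g lets curl accept the
# bracketed IPv6 literal):
$ curl -g http://[::1]:11434/api/version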

Then I used /api/pull for the model llama3.2-vision:11b, which succeeded. After that, I tried to check whether it runs fine with /api/generate and got a core-dump error. Looking into the logs, I found that the hostname is shown as 127.0.0.1, and I suspect this might be the reason llama is not starting properly.

Kindly guide me to resolve this.

API calls and outputs:
{"status":"verifying sha256 digest"}
{"status":"writing manifest"}
{"status":"removing any unused layers"}
{"status":"success"}
(app-root) sh-5.1$ curl https://ollma-doc.chatur.svc.cluster.local/api/tags -k
{"models":[{"name":"llama3.2-vision:11b","model":"llama3.2-vision:11b","modified_at":"2025-01-08T15:00:32.627854408Z","size":7901829417,"digest":"085a1fdae525a3804ac95416b38498099c241defd0f1efc71dcca7f63190ba3d","details":{"parent_model":"","format":"gguf","family":"mllama","families":["mllama","mllama"],"parameter_size":"9.8B","quantization_level":"Q4_K_M"}}]}
(app-root) sh-5.1$ curl https://ollma-doc.chatur.svc.cluster.local/api/generate -k -H "Content-Type: application/json" -d '{"model": "llama3.2-vision:11b", "prompt": "why is the sky blue?"}'
curl: (6) Could not resolve host: ollma-doc.chatur.svc.cluster.locagenerate
(app-root) sh-5.1$ curl https://ollma-doc.chatur.svc.cluster.local/api/generate -k -H "Content-Type: application/json" -d '{"model": "llama3.2-vision:11b", "prompt": "why is the sky blue?"}'
{"error":"llama runner process has terminated: signal: aborted (core dumped)"}
(app-root) sh-5.1$

Logs from Ollama pod:
Couldn't find '/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgZvE8cJ0k0eZH/4I6S9r/EKNzEuKNGh/aC3AWTsf+n

2025/01/08 14:09:34 routes.go:1100: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://:::11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2025-01-08T14:09:34.834Z level=INFO source=images.go:784 msg="total blobs: 0"
time=2025-01-08T14:09:34.834Z level=INFO source=images.go:791 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

 - using env: export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-08T14:09:34.834Z level=INFO source=routes.go:1147 msg="Listening on [::]:11434 (version 0.0.0)"
time=2025-01-08T14:09:34.834Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3519681156/runners
time=2025-01-08T14:09:34.883Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2]"
time=2025-01-08T14:09:34.883Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2025-01-08T14:09:34.884Z level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
time=2025-01-08T14:09:34.884Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="377.5 GiB" available="343.4 GiB"
[GIN] 2025/01/08 - 14:34:56 | 200 | 377.045µs | fd01:0:0:1::af9 | GET "/api/tags"
[GIN] 2025/01/08 - 14:37:21 | 200 | 128.615µs | fd01:0:0:1::af9 | GET "/api/tags"
time=2025-01-08T14:40:51.547Z level=INFO source=download.go:136 msg="downloading 11f274007f09 in 60 100 MB part(s)"
time=2025-01-08T14:41:06.549Z level=INFO source=download.go:251 msg="11f274007f09 part 57 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:08.550Z level=INFO source=download.go:251 msg="11f274007f09 part 25 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:09.550Z level=INFO source=download.go:251 msg="11f274007f09 part 52 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:09.550Z level=INFO source=download.go:251 msg="11f274007f09 part 46 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:12.547Z level=INFO source=download.go:251 msg="11f274007f09 part 18 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:12.550Z level=INFO source=download.go:251 msg="11f274007f09 part 40 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:15.550Z level=INFO source=download.go:251 msg="11f274007f09 part 27 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:16.547Z level=INFO source=download.go:251 msg="11f274007f09 part 3 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:19.551Z level=INFO source=download.go:251 msg="11f274007f09 part 36 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:20.551Z level=INFO source=download.go:251 msg="11f274007f09 part 47 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:21.548Z level=INFO source=download.go:251 msg="11f274007f09 part 10 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:28.551Z level=INFO source=download.go:251 msg="11f274007f09 part 9 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:33.553Z level=INFO source=download.go:251 msg="11f274007f09 part 47 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:41:55.551Z level=INFO source=download.go:251 msg="11f274007f09 part 37 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:42:15.551Z level=INFO source=download.go:251 msg="11f274007f09 part 42 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:42:22.548Z level=INFO source=download.go:251 msg="11f274007f09 part 4 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:53:51.667Z level=INFO source=download.go:136 msg="downloading ece5e659647a in 20 100 MB part(s)"
time=2025-01-08T14:54:19.668Z level=INFO source=download.go:251 msg="ece5e659647a part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:54:22.668Z level=INFO source=download.go:251 msg="ece5e659647a part 3 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:54:33.668Z level=INFO source=download.go:251 msg="ece5e659647a part 4 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:54:33.669Z level=INFO source=download.go:251 msg="ece5e659647a part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:54:48.668Z level=INFO source=download.go:251 msg="ece5e659647a part 15 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:59:32.668Z level=INFO source=download.go:251 msg="ece5e659647a part 6 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2025-01-08T14:59:52.181Z level=INFO source=download.go:136 msg="downloading 715415638c9c in 1 269 B part(s)"
time=2025-01-08T14:59:54.665Z level=INFO source=download.go:136 msg="downloading 0b4284c1f870 in 1 7.7 KB part(s)"
time=2025-01-08T14:59:56.042Z level=INFO source=download.go:136 msg="downloading fefc914e46e6 in 1 32 B part(s)"
time=2025-01-08T14:59:58.651Z level=INFO source=download.go:136 msg="downloading fbd313562bb7 in 1 572 B part(s)"
[GIN] 2025/01/08 - 15:00:32 | 200 | 19m43s | fd01:0:0:1::af9 | POST "/api/pull"
[GIN] 2025/01/08 - 15:00:55 | 200 | 459.312µs | fd01:0:0:1::af9 | GET "/api/tags"
time=2025-01-08T15:03:07.761Z level=WARN source=sched.go:134 msg="multimodal models don't support parallel requests yet"
time=2025-01-08T15:03:07.794Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=41 layers.offload=0 layers.split="" memory.available="[343.2 GiB]" memory.required.full="7.7 GiB" memory.required.partial="0 B" memory.required.kv="320.0 MiB" memory.required.allocations="[7.7 GiB]" memory.weights.total="5.2 GiB" memory.weights.repeating="4.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="213.3 MiB" memory.graph.partial="213.3 MiB"
time=2025-01-08T15:03:07.795Z level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama3519681156/runners/cpu_avx2/ollama_llama_server --model /.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068 --ctx-size 2048 --batch-size 512 --embedding --log-disable --mmproj /.ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f --no-mmap --parallel 1 --port 36453"
time=2025-01-08T15:03:07.795Z level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2025-01-08T15:03:07.795Z level=INFO source=server.go:583 msg="waiting for llama runner to start responding"
time=2025-01-08T15:03:07.795Z level=INFO source=server.go:617 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=3440 commit="d94c6e0c" tid="139814772467584" timestamp=1736348587
INFO [main] system info | n_threads=40 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139814772467584" timestamp=1736348587 total_threads=80
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="79" port="36453" tid="139814772467584" timestamp=1736348587
terminate called after throwing an instance of 'std::runtime_error'
what(): Missing required key: clip.has_text_encoder
time=2025-01-08T15:03:08.046Z level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped)"
[GIN] 2025/01/08 - 15:03:08 | 500 | 314.868828ms | fd01:0:0:1::af9 | POST "/api/generate"

OS

No response

GPU

No response

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 16:33:22 -05:00

@chaturvedi-kna commented on GitHub (Jan 8, 2025):

Below is the env inside the pod, if that helps:

sh-5.1$ env
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
no_proxy=few internal addresses
HOSTNAME=ollma-doc-predictor-00003-deployment-5bcd48d8b4-85ssc
NSS_SDB_USE_CACHE=no
K_REVISION=ollma-doc-predictor-00003
OLLAMA_MODELS=/.ollama/models
PWD=/
PORT=11434
_=/usr/bin/env
container=oci
HOME=/
KUBERNETES_PORT_443_TCP=tcp://[fd02::1]:443
https_proxy=http://[ipv6-proxy]:8080
K_SERVICE=ollma-doc-predictor
OLLAMA_HOST=::
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
http_proxy=http://[ipv6-proxy]:8080
KUBERNETES_PORT_443_TCP_ADDR=fd02::1
KUBERNETES_SERVICE_HOST=fd02::1
KUBERNETES_PORT=tcp://[fd02::1]:443
KUBERNETES_PORT_443_TCP_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OLLAMA_KEEP_ALIVE=-1m
K_CONFIGURATION=ollma-doc-predictor
sh-5.1$

@rick-github commented on GitHub (Jan 8, 2025):

time=2025-01-08T14:09:34.834Z level=INFO source=routes.go:1147 msg="Listening on [::]:11434 (version 0.0.0)"

The server is listening on [::]. 127.0.0.1 is the address of the runner, the program that actually runs the model.
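One way to see both listeners from inside the pod, assuming the ss tool (iproute2) is present in the image: the API server is on [::]:11434, while the runner binds 127.0.0.1 on an ephemeral port (36453 in the log above).

$ ss -tlnp | grep -E ':(11434|36453)'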

terminate called after throwing an instance of 'std::runtime_error'
what(): Missing required key: clip.has_text_encoder

The runner died because the model is missing data. I've never seen this error before, so I don't know what the cause is. It seems to imply that the model download was incomplete (you did have a lot of stalls), but ollama should have done a model integrity check before allowing it to be loadable. You can double-check the model data by verifying the sha256 checksums (the output should be the same as the filename):

$ sha256sum /.ollama/models/blobs/sha256-11f274007f093fefeec994a5dbbb33d0733a4feb87f7ab66dcd7c1069fef0068
$ sha256sum /.ollama/models/blobs/sha256-ece5e659647a20a5c28ab9eea1c12a1ad430bc0f2a27021d00ad103b3bf5206f
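
To sweep every blob at once, a small loop along the same lines (any mismatch between the computed digest and the filename indicates a corrupt download):

$ cd /.ollama/models/blobs
$ for f in sha256-*; do echo "${f#sha256-}  $f" | sha256sum --check --quiet || echo "BAD: $f"; done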

If you run the server with OLLAMA_DEBUG=1 in the environment there will be more verbose logging.
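For example (in a container deployment this would go in the pod spec's env rather than an interactive shell):

$ OLLAMA_DEBUG=1 ollama serve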

@rick-github commented on GitHub (Jan 8, 2025):

From the ollama-runtime.yaml you may be running an old version of ollama. The version tag is 0.2.8, and if that's the ollama version, it's ancient. llama3.2-vision:11b requires at least 0.4.0 (https://github.com/ollama/ollama/releases/tag/v0.4.0). It looks like the newest version in rh-aiservices-bu is 0.3.13 (https://quay.io/repository/rh-aiservices-bu/ollama-ubi9?tab=tags), so you won't be able to run llama3.2-vision.
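A quick way to confirm which version the pod is actually running is the /api/version route from the startup log above (note the log reports "version 0.0.0", which usually indicates a build without a stamped version string):

$ curl -k https://ollma-doc.chatur.svc.cluster.local/api/version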

@chaturvedi-kna commented on GitHub (Jan 8, 2025):

@rick-github Thanks for your reply!!

I overlooked that this might be an incompatible version. Let me use the latest Ollama image (https://hub.docker.com/r/ollama/ollama) and check.

Even in the case of a CPU-only cluster, I guess it is not necessary to compile specifically for the AVX2 instruction set as done in rh-aiservices-bu; let me know if I am wrong.

@rick-github commented on GitHub (Jan 8, 2025):

The ollama container ships runners for different permutations (cuda_v12_avx, cpu, cpu_avx, cpu_avx2, cuda_v11_avx), so a bespoke build is not as necessary as it used to be.
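For example, the runner builds extracted by the image can be listed after startup (the temp path varies per run; compare dir=/tmp/ollama3519681156/runners in the log above):

$ ls /tmp/ollama*/runners
cpu  cpu_avx  cpu_avx2  cuda_v11_avx  cuda_v12_avx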

@chaturvedi-kna commented on GitHub (Jan 11, 2025):

Update: I've verified with the latest version of Ollama. The latest official image requires root user privileges for deployment.
To address this, I've created a modified Dockerfile that enables non-root user execution and submitted PR #8383 with these changes. The custom-built image successfully runs with ODH on OCP.
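For illustration only (this is not the actual PR #8383 diff), a hedged sketch of running the stock image under an arbitrary non-root UID, the situation OpenShift's random UIDs create, by pointing HOME and OLLAMA_MODELS at a writable volume:

$ podman run --rm --user 1001:0 \
    -e HOME=/data -e OLLAMA_MODELS=/data/models \
    -v ollama-data:/data -p 11434:11434 ollama/ollama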

Reference: github-starred/ollama#5351