[GH-ISSUE #9888] cannot see custom model with param's inside docker container #32234

Closed
opened 2026-04-22 13:18:20 -05:00 by GiteaMirror · 9 comments

Originally created by @babu-kandyala on GitHub (Mar 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9888

What is the issue?

Docker build & run are successful using the Dockerfile below, but when I exec inside the container I cannot see the custom model under `ollama list`.

When I try accessing http://localhost:11434/api/tags using Postman, there is nothing under models:

```json
{
  "models": []
}
```
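For reference, the same check can be made without Postman; a minimal example, assuming the container publishes the default port (`-p 11434:11434`):

```shell
# List the models the server knows about (empty here, per the report above)
curl -s http://localhost:11434/api/tags
```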

Dockerfile:

```dockerfile
# Using base image
FROM ollama/ollama

# Adding an unprivileged user
RUN groupadd --gid 10001 ollama && \
    useradd --uid 10001 --gid ollama --shell /bin/bash --create-home ollama

# Change the ownership to ollama and provide necessary permissions
RUN chown -R ollama:ollama /bin/ollama && chmod 755 /bin/ollama

COPY modelfile modelfile
RUN ollama -v

# Create custom model
RUN ollama serve & server=$! ; sleep 2 ; ollama pull llama3.2 ; ollama create ollama_custom -f modelfile ; kill $server

# Run the custom model
ENTRYPOINT [ "/bin/bash", "-c", "(sleep 2 ; ollama run ollama_custom '') & exec /bin/ollama $0" ]
CMD [ "serve" ]
```
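One thing worth noting about the model-creation step: because the commands in that `RUN` line are joined with `;`, a failed `ollama pull` does not fail the build, so the image can be produced with no models in it. A fail-fast variant (a sketch, not the reporter's original) propagates the exit status instead:

```dockerfile
# Sketch: abort the build if the pull or create step fails, instead of
# silently producing an image without models.
RUN ollama serve & server=$! ; sleep 2 ; \
    ollama pull llama3.2 && ollama create ollama_custom -f modelfile ; \
    rc=$? ; kill $server ; exit $rc
```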

Relevant log output

```shell
root@1115b7f872eb:/# ollama -v
ollama version is 0.6.2
root@1115b7f872eb:/# ollama list
NAME    ID    SIZE    MODIFIED
root@1115b7f872eb:/#
```

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.6.2

GiteaMirror added the bug label 2026-04-22 13:18:20 -05:00

@babu-kandyala commented on GitHub (Mar 19, 2025):

Refer #9859


@rick-github commented on GitHub (Mar 19, 2025):

```console
$ cat Dockerfile
FROM ollama/ollama
RUN ollama serve & server=$! ; sleep 2 ; ollama pull llama3.2 ; ollama create ollama_custom -f modelfile ; kill $server
ENTRYPOINT [ "/bin/bash", "-c", "(sleep 2 ; ollama run ollama_custom '') & exec /bin/ollama $0" ]
CMD [ "serve" ]
$ docker build -f Dockerfile -t ollama-9888 .
...
 => => naming to docker.io/library/ollama-9888                                                                       0.0s
$ docker run --rm -d --name ollama-9888 ollama-9888
$ docker exec -it ollama-9888 ollama -v
ollama version is 0.6.2
$ docker exec -it ollama-9888 ollama list
NAME               ID              SIZE      MODIFIED
llama3.2:latest    a80c4f17acd5    2.0 GB    45 seconds ago
```

@babu-kandyala commented on GitHub (Mar 19, 2025):

@rick-github Actually, I am trying to create a custom model using a Modelfile with parameters, and pulling llama3.2 before creating the custom model. But neither llama3.2 nor ollama_custom shows up in the list.

```console
PS C:\ollma\sampleapp-test\ollama-web-app\pipelines> docker exec -it ollama19032025 ollama list
NAME ID SIZE MODIFIED

What's next:
    Try Docker Debug for seamless, persistent debugging tools in any container or image → docker debug ollama19032025
    Learn more at https://docs.docker.com/go/debug-cli/
```

Here are the logs for the container:

```shell
2025-03-19 15:22:00 2025/03/19 09:52:00 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2025-03-19 15:22:00 time=2025-03-19T09:52:00.741Z level=INFO source=images.go:432 msg="total blobs: 3"
2025-03-19 15:22:00 time=2025-03-19T09:52:00.741Z level=INFO source=images.go:439 msg="total unused blobs removed: 3"
2025-03-19 15:22:00 time=2025-03-19T09:52:00.742Z level=INFO source=routes.go:1297 msg="Listening on [::]:11434 (version 0.6.2)"
2025-03-19 15:22:00 time=2025-03-19T09:52:00.742Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
2025-03-19 15:22:00 time=2025-03-19T09:52:00.745Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
2025-03-19 15:22:00 time=2025-03-19T09:52:00.745Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="15.5 GiB" available="14.4 GiB"
pulling manifest ⠋ time=2025-03-19T09:52:02.885Z level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/ollama_custom/manifests/latest\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
pulling manifest
2025-03-19 15:22:02 Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/ollama_custom/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority
2025-03-19 15:25:57 time=2025-03-19T09:55:57.117Z level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/llama3.2/manifests/latest\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
2025-03-19 15:22:02 [GIN] 2025/03/19 - 09:52:02 | 200 | 47.739µs | 127.0.0.1 | HEAD "/"
2025-03-19 15:22:02 [GIN] 2025/03/19 - 09:52:02 | 404 | 243.049µs | 127.0.0.1 | POST "/api/show"
```


@rick-github commented on GitHub (Mar 19, 2025):

> Actually, I am trying to create a custom model using a Modelfile with parameters, and pulling llama3.2 before creating the custom model. But neither llama3.2 nor ollama_custom shows up in the list.

Logs of the build process might show why the image doesn't contain the models:

```shell
docker build -f Dockerfile --progress=plain --no-cache -t ollama19032025 .
```

@babu-kandyala commented on GitHub (Mar 19, 2025):

@rick-github Here is the log:

```console
PS C:\ollma\sampleapp-test\ollama-web-app\pipelines> docker build --progress=plain --no-cache -t ollama19032025_2110 .
#0 building with "desktop-linux" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 687B 0.0s done
#1 DONE 0.1s

#2 [internal] load metadata for docker.io/ollama/ollama:latest
#2 DONE 0.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [1/6] FROM docker.io/ollama/ollama:latest
#4 CACHED

#5 [internal] load build context
#5 transferring context: 31B done
#5 DONE 0.1s

#6 [2/6] RUN groupadd --gid 10001 ollama && useradd --uid 10001 --gid ollama --shell /bin/bash --create-home ollama
#6 DONE 0.8s

#7 [3/6] RUN chown -R ollama:ollama /bin/ollama && chmod 755 /bin/ollama
#7 DONE 0.7s

#8 [4/6] COPY modelfile modelfile
#8 DONE 0.1s

#9 [5/6] RUN ollama -v
#9 0.377 Warning: could not connect to a running Ollama instance
#9 0.377 Warning: client version is 0.6.2
#9 DONE 0.4s

#10 [6/6] RUN ollama serve & server=$! ; sleep 2 ; ollama pull llama3.2 ; ollama create ollama_custom -f modelfile ; kill $server
#10 0.678 Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
#10 0.680 Your new public key is:
#10 0.680
#10 0.680 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtyf6vct8/pbu+KmtknW2v7E9NO+TevyxPwjG1xLC34
#10 0.680
#10 0.693 2025/03/19 15:41:17 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
#10 0.694 time=2025-03-19T15:41:17.730Z level=INFO source=images.go:432 msg="total blobs: 0"
#10 0.694 time=2025-03-19T15:41:17.730Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
#10 0.694 time=2025-03-19T15:41:17.730Z level=INFO source=routes.go:1297 msg="Listening on [::]:11434 (version 0.6.2)"
#10 0.694 time=2025-03-19T15:41:17.730Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
#10 0.708 time=2025-03-19T15:41:17.744Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
#10 0.708 time=2025-03-19T15:41:17.744Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="15.5 GiB" available="14.4 GiB"
#10 2.688 [GIN] 2025/03/19 - 15:41:19 | 200 | 64.16µs | 127.0.0.1 | HEAD "/"
pulling manifest ⠹ time=2025-03-19T15:41:20.080Z level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/llama3.2/manifests/latest\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
#10 3.044 [GIN] 2025/03/19 - 15:41:20 | 200 | 355.936405ms | 127.0.0.1 | POST "/api/pull"
pulling manifest
#10 3.045 Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.2/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority
#10 3.059 [GIN] 2025/03/19 - 15:41:20 | 200 | 22.221µs | 127.0.0.1 | HEAD "/"
#10 3.117 time=2025-03-19T15:41:20.153Z level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/llama3.2/manifests/latest\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
gathering model components
#10 3.118 pulling manifest
#10 3.118 Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3.2/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority
#10 3.119 [GIN] 2025/03/19 - 15:41:20 | 200 | 59.896666ms | 127.0.0.1 | POST "/api/create"
#10 DONE 3.2s

#11 exporting to image
#11 exporting layers
#11 exporting layers 0.3s done
#11 writing image sha256:12870e9fa0febd4429974be1f091e34ce4e950f49957de36ce339473a4b13c41 done
#11 naming to docker.io/library/ollama19032025_2110 0.0s done
#11 DONE 0.4s

View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/nl2m93rk0h891ctxo9cccfk31

What's next:
View a summary of image vulnerabilities and recommendations → docker scout quickview
PS C:\ollma\sampleapp-test\ollama-web-app\pipelines>
```


@rick-github commented on GitHub (Mar 19, 2025):

```
pulling manifest ⠹ time=2025-03-19T15:41:20.080Z level=INFO source=images.go:669 msg="request failed: Get \"https://registry.ollama.ai/v2/library/llama3.2/manifests/latest\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
```

Model pull failed. Are you behind a proxy? https://github.com/ollama/ollama/issues/9391#issuecomment-2698816430
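If a proxy is in play, its settings can be passed into the build via Docker's predefined proxy build args; a sketch, with the proxy URL as a placeholder (note that for certificate-verification errors like the one above, trusting the proxy's root CA, as sketched further below, is usually the actual fix):

```shell
# proxy.example.com:8080 is a hypothetical proxy address; HTTP_PROXY and
# HTTPS_PROXY are predefined build args that Docker forwards into RUN steps.
docker build --build-arg HTTP_PROXY=http://proxy.example.com:8080 \
             --build-arg HTTPS_PROXY=http://proxy.example.com:8080 \
             --progress=plain --no-cache -t ollama19032025 .
```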


@babu-kandyala commented on GitHub (Mar 19, 2025):

@rick-github Yes, I think it's Zscaler. What should I do in this case?


@rick-github commented on GitHub (Mar 19, 2025):

https://github.com/ollama/ollama/issues/9391#issuecomment-2698816430
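The linked comment has the details. As a rough sketch of the usual fix for TLS-intercepting proxies like Zscaler (assuming the base image is Ubuntu-based with the ca-certificates package available, and with `zscaler-root-ca.crt` as a hypothetical filename for the exported corporate root certificate), the certificate is added to the image's trust store before any `ollama pull` runs:

```dockerfile
FROM ollama/ollama

# Hypothetical filename: the corporate (Zscaler) root CA, exported in PEM format.
COPY zscaler-root-ca.crt /usr/local/share/ca-certificates/zscaler-root-ca.crt

# Rebuild the system trust store so TLS verification accepts the proxy's certificates.
RUN update-ca-certificates
```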


@pdevine commented on GitHub (Mar 21, 2025):

I'm going to close this as answered (thank you @rick-github !)
