[GH-ISSUE #10332] Deploying ollama on #openshift but error "Couldn't find '/.ollama/id_ed25519'. Generating new private key." #32546

Closed
opened 2026-04-22 13:55:36 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @doyoungim999 on GitHub (Apr 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10332

What is the issue?

Hi, I cannot deploy the ollama Docker image (https://hub.docker.com/r/ollama/ollama); it fails with the following error:

```
Couldn't find '/.ollama/id_ed25519'. Generating new private key.
Error: could not create directory mkdir /.ollama: permission denied
```

How can I fix this permission issue inside the container on OpenShift?

Relevant log output


OS

Linux

GPU

No

CPU

Yes.

Ollama version

docker pull ollama/ollama:latest

GiteaMirror added the bug label 2026-04-22 13:55:36 -05:00
Author
Owner

@rick-github commented on GitHub (Apr 18, 2025):

The user that ollama is running as has no permissions to create a directory in /. Since the default user is root, your configuration appears to have changed that. If you can provide more information about your deployment, it will be easier to resolve.

Author
Owner

@doyoungim999 commented on GitHub (Apr 18, 2025):

Hi, this is the running deployment descriptor:

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ollama-9999
  namespace: dp-codeai
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama-9999
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ollama-9999
    spec:
      volumes:
        - name: ollama-storage
          persistentVolumeClaim:
            claimName: ollama-storage-pvc
      containers:
        - name: ollama-9999
          image: '{registryURL}/dpapi/ollama'
          ports:
            - containerPort: 11434
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```

Author
Owner

@rick-github commented on GitHub (Apr 18, 2025):

How are you creating ollama-9999?

Author
Owner

@doyoungim999 commented on GitHub (Apr 18, 2025):

I am sorry, I cannot capture the content and upload a screenshot.
I think /.ollama is required by the Docker image.

This is the relevant part of the spec from the deployment descriptor:

```yaml
spec:
  containers:
    - name: ollama-9999
      image: 'registryURL'
      ports:
        - containerPort: 11434
```

This is all I have in the deployment file.

Author
Owner

@doyoungim999 commented on GitHub (Apr 18, 2025):

Hi,
When I run it with podman on Linux, I get the output below.
Can it be related to the error?

```
$ podman run --name ollama docker.io/ollama/ollama
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBedhdrppW25lV6awIcq633JkzRH6zbM4knSM7llvU1w

2025/04/18 01:58:30 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY:localhost,127.0.0.1/8,::1,16.3.30.54,166.79.51.50,166.79.51.70 OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-04-18T01:58:30.466Z level=INFO source=images.go:458 msg="total blobs: 0"
time=2025-04-18T01:58:30.466Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-18T01:58:30.466Z level=INFO source=routes.go:1298 msg="Listening on [::]:11434 (version 0.6.5)"
time=2025-04-18T01:58:30.466Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-18T01:58:30.468Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-04-18T01:58:30.468Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="31.1 GiB" available="29.4 GiB"
```

Author
Owner

@rick-github commented on GitHub (Apr 18, 2025):

In this case, it succeeded because ollama was able to create the directory `/root/.ollama`. In your original post, it failed because the user that ollama is running as does not have permission to create a directory in `/`.

Author
Owner

@doyoungim999 commented on GitHub (Apr 20, 2025):

Hi, I started from scratch again on OpenShift.
I just created a pod from the Docker Hub image, using the following deployment descriptor, and got an error.
How can I resolve this issue?

```
$ oc apply -f mydd.yml
```

Error:

```
Couldn't find '/.ollama/id_ed25519'. Generating new private key.
Error: could not create directory mkdir /.ollama: permission denied
```

Deployment descriptor (mydd.yml):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ollama
  labels:
    app: ollama
  namespace: dp-api
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: ollama
      image: 'docker.io/ollama/ollama:latest'
      ports:
        - containerPort: 11434
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
```

```
$ oc describe pod ollama
Name:             ollama
Namespace:        dp-api
Priority:         0
Service Account:  default
Node:             s2-worker1.infra.cp4dex.com/192.168.30.21
Start Time:       Sun, 20 Apr 2025 02:26:55 -0400
Labels:           app=ollama
Annotations:      k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "openshift-sdn",
                        "interface": "eth0",
                        "ips": [
                            "10.131.1.164"
                        ],
                        "default": true,
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks-status:
                    [{
                        "name": "openshift-sdn",
                        "interface": "eth0",
                        "ips": [
                            "10.131.1.164"
                        ],
                        "default": true,
                        "dns": {}
                    }]
                  openshift.io/scc: restricted-v2
                  seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:           Running
IP:               10.131.1.164
IPs:
  IP:  10.131.1.164
Containers:
  ollama:
    Container ID:   cri-o://3ce47551ae5a0c77aa2a076ef195ebcc1de40a86af66d2d57d251bbfd6add401
    Image:          docker.io/ollama/ollama:latest
    Image ID:       docker.io/ollama/ollama@sha256:92981c232175337f2bab52e94e1c8f2c4aab8f95aeb412350c39fd48712b057f
    Port:           11434/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 20 Apr 2025 02:27:20 -0400
      Finished:     Sun, 20 Apr 2025 02:27:21 -0400
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-799qw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-799qw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       51s                default-scheduler  Successfully assigned dp-api/ollama to s2-worker1.infra.cp4dex.com
  Normal   AddedInterface  49s                multus             Add eth0 [10.131.1.164/23] from openshift-sdn
  Normal   Pulled          47s                kubelet            Successfully pulled image "docker.io/ollama/ollama:latest" in 2.295472138s (2.295485174s including waiting)
  Normal   Pulled          43s                kubelet            Successfully pulled image "docker.io/ollama/ollama:latest" in 2.302373378s (2.302432219s including waiting)
  Normal   Pulling         28s (x3 over 49s)  kubelet            Pulling image "docker.io/ollama/ollama:latest"
  Normal   Created         26s (x3 over 47s)  kubelet            Created container ollama
  Normal   Started         26s (x3 over 47s)  kubelet            Started container ollama
  Normal   Pulled          26s                kubelet            Successfully pulled image "docker.io/ollama/ollama:latest" in 2.271924085s (2.271943631s including waiting)
  Warning  BackOff         10s (x4 over 43s)  kubelet            Back-off restarting failed container
```

Author
Owner

@rick-github commented on GitHub (Apr 20, 2025):

The problem is still that the user that ollama is running as has no permission to create a directory in `/`. Your configuration has `runAsNonRoot: true`, which is at odds with the way the ollama Docker image works: it runs as root. So your configuration is running ollama as a non-root user, and as a result that user has no permission to create a directory in `/`. Presumably the user it is running as has no entry in the container's password file, which is why it thinks its home directory is `/`, where it is trying to create the directory `.ollama`.
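One way to reconcile `runAsNonRoot: true` with ollama's need for a writable home directory is to point `HOME` at a volume the assigned UID can write to. A minimal sketch, not taken from this thread: the volume name, mount path, and use of `emptyDir` are illustrative assumptions, and under OpenShift's restricted SCC the mounted volume is made group-writable for the arbitrary UID via the SCC-assigned fsGroup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ollama
  labels:
    app: ollama
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: ollama
      image: 'docker.io/ollama/ollama:latest'
      env:
        # Redirect ollama's home so it writes keys and models under a
        # mounted, writable path instead of /.ollama (path is illustrative).
        - name: HOME
          value: /ollama-home
      ports:
        - containerPort: 11434
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
      volumeMounts:
        - name: ollama-home
          mountPath: /ollama-home
  volumes:
    - name: ollama-home
      emptyDir: {}   # swap for a PVC if downloaded models should persist
```

With this, ollama should create `/ollama-home/.ollama/id_ed25519` instead of failing on `/.ollama`.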

Author
Owner

@doyoungim999 commented on GitHub (Apr 21, 2025):

When the container is created on OpenShift, a user and a group are created by OpenShift, and it uses `/.ollama` as the home directory.

Author
Owner

@pradhyu commented on GitHub (Sep 25, 2025):

A better approach would have been to allow selecting the home directory. On OpenShift, users are assigned via `runAsUser`, so they won't have access to `/.ollama` in the root directory.

Author
Owner

@rick-github commented on GitHub (Sep 25, 2025):

Set `HOME`.
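In a pod spec, that suggestion amounts to something like the fragment below (the path is illustrative and must point at a mount the assigned UID can write, e.g. the PVC):

```yaml
env:
  - name: HOME
    value: /ollama-home   # writable volume mount, not /
```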

Reference: github-starred/ollama#32546