[GH-ISSUE #10764] Llama 3.2 Vision Python Library does not work with official docker #69130

Closed
opened 2026-05-04 17:14:26 -05:00 by GiteaMirror · 16 comments

Originally created by @guoyejun on GitHub (May 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10764

What is the issue?

I'm able to run 'ollama run llama3.2-vision' inside the Docker container, but it crashes with the Python code from https://ollama.com/library/llama3.2-vision:latest

docker run --gpus all -v /mypath:/workspace -p 11434:11434 -e HTTPS_PROXY=$https_proxy --name official_ollama ollama/ollama

# log in to the docker container from another console
docker exec -it official_ollama bash
root@26844b4932e6:/# ollama run llama3.2-vision     # it works
root@26844b4932e6:/workspace/ollama/test# ollama --version
ollama version is 0.7.0


# try python lib within docker
root@26844b4932e6:/workspace/ollama/test# apt-get update; apt-get install python3 python3-pip
root@26844b4932e6:/workspace/ollama/test# pip install ollama
root@26844b4932e6:/workspace/ollama/test# cat tryllamav.py
import ollama

response = ollama.chat(
    model='llama3.2-vision',
    messages=[{
        'role': 'user',
        'content': 'What is in this image?',
        'images': ['1.jpg']
    }]
)

print(response)

root@26844b4932e6:/workspace/ollama/test# python3 tryllamav.py
Traceback (most recent call last):
  File "tryllamav.py", line 3, in <module>
    response = ollama.chat(
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 333, in chat
    return self._request(
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 178, in _request
    return cls(**self._request_raw(*args, **kwargs).json())
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 122, in _request_raw
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: POST predict: Post "http://127.0.0.1:36247/completion": EOF (status code: 500)

And there are error messages in the console running the ollama server:
...
[GIN] 2025/05/18 - 09:55:43 | 500 |   2.64495819s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-18T09:55:43.522Z level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 2"
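
For reference, the same request can be reproduced without the Python client by posting to the REST API directly (a sketch, assuming 1.jpg exists in the current directory; the API expects images as base64 strings):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2-vision",
  "stream": false,
  "messages": [{
    "role": "user",
    "content": "What is in this image?",
    "images": ["'"$(base64 -w0 1.jpg)"'"]
  }]
}'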

Relevant log output

Traceback (most recent call last):
  File "tryllamav.py", line 3, in <module>
    response = ollama.chat(
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 333, in chat
    return self._request(
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 178, in _request
    return cls(**self._request_raw(*args, **kwargs).json())
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 122, in _request_raw
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: POST predict: Post "http://127.0.0.1:36247/completion": EOF (status code: 500)

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

ollama version is 0.7.0

GiteaMirror added the bug label 2026-05-04 17:14:26 -05:00

@rick-github commented on GitHub (May 18, 2025):

Set OLLAMA_DEBUG=1 in the docker environment to add more detail to the logs.

docker run --gpus all -v /mypath:/workspace -p 11434:11434 -e HTTPS_PROXY=$https_proxy -e OLLAMA_DEBUG=1 --name official_ollama ollama/ollama
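
The server log can then be followed from the host with standard docker tooling, for example:

docker logs -f official_ollama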

@guoyejun commented on GitHub (May 18, 2025):

Thanks, there's no new message in the console running 'python3 tryllamav.py', but there are new messages in the console where the ollama server is running.

root@1dd4a1cbd84d:/workspace/ollama/test# export | grep -i debug
declare -x OLLAMA_DEBUG="1"
root@1dd4a1cbd84d:/workspace/ollama/test# python3 tryllamav.py

Traceback (most recent call last):
  File "tryllamav.py", line 3, in <module>
    response = ollama.chat(
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 333, in chat
    return self._request(
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 178, in _request
    return cls(**self._request_raw(*args, **kwargs).json())
  File "/usr/local/lib/python3.8/dist-packages/ollama/_client.py", line 122, in _request_raw
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: POST predict: Post "http://127.0.0.1:40589/completion": EOF (status code: 500)

The console running the ollama server:

...
rax    0x0
rbx    0x7e94d3fff700
rcx    0x7e9a38f7d00b
rdx    0x0
rdi    0x2
rsi    0x7e94d3ffe830
rbp    0x7e996391510f
rsp    0x7e94d3ffe830
r8     0x0
r9     0x7e94d3ffe830
r10    0x8
r11    0x246
r12    0x7e99639d3b08
r13    0x2c
r14    0x7e94b8002f30
r15    0x0
rip    0x7e9a38f7d00b
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
[GIN] 2025/05/18 - 10:32:21 | 500 |  8.467673236s |       127.0.0.1 | POST     "/api/chat"

time=2025-05-18T10:32:21.067Z level=DEBUG source=sched.go:492 msg="context for request finished"
time=2025-05-18T10:32:21.067Z level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/llama3.2-vision:latest runner.inference=cuda runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=1 runner.pid=1779 runner.model=/root/.ollama/models/blobs/sha256-7633fdffe14c0f7acc115402376be5bd6052220c348676c5133dc011b35e2429 runner.num_ctx=4096 duration=5m0s
time=2025-05-18T10:32:21.067Z level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/llama3.2-vision:latest runner.inference=cuda runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=1 runner.pid=1779 runner.model=/root/.ollama/models/blobs/sha256-7633fdffe14c0f7acc115402376be5bd6052220c348676c5133dc011b35e2429 runner.num_ctx=4096 refCount=0

time=2025-05-18T10:32:21.140Z level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 2"


@rick-github commented on GitHub (May 18, 2025):

Need the logs leading up to the stack trace.


@guoyejun commented on GitHub (May 18, 2025):

ollama.log: https://github.com/user-attachments/files/20272565/ollama.log

Please see the full log attached; I've also copied what looks like the key part below. Thanks.

time=2025-05-18T11:17:22.716Z level=DEBUG source=ggml.go:154 msg="key not found" key=mllama.rope.freq_scale default=1
time=2025-05-18T11:17:22.731Z level=DEBUG source=ggml.go:553 msg="compute graph" nodes=1126 splits=2
time=2025-05-18T11:17:22.731Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="280.0 MiB"
time=2025-05-18T11:17:22.731Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="8.0 MiB"
time=2025-05-18T11:17:22.953Z level=INFO source=server.go:630 msg="llama runner started in 1.51 seconds"
time=2025-05-18T11:17:22.953Z level=DEBUG source=sched.go:484 msg="finished setting up" runner.name=registry.ollama.ai/library/llama3.2-vision:latest runner.inference=cuda runner.devices=1 runner.size="11.7 GiB" runner.vram="11.7 GiB" runner.parallel=1 runner.pid=1486 runner.model=/root/.ollama/models/blobs/sha256-7633fdffe14c0f7acc115402376be5bd6052220c348676c5133dc011b35e2429 runner.num_ctx=4096
time=2025-05-18T11:17:22.954Z level=DEBUG source=server.go:729 msg="completion request" images=1 prompt=128 format=""
time=2025-05-18T11:17:23.084Z level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=0 prompt=16 used=0 remaining=16
//ml/backend/ggml/ggml/src/ggml-cuda/pad.cu:44: GGML_ASSERT(src0->ne[3] == 1 && dst->ne[3] == 1) failed
SIGSEGV: segmentation violation
PC=0x75257a0a3e47 m=17 sigcode=1 addr=0x204803c34
signal arrived during cgo execution


@rick-github commented on GitHub (May 18, 2025):

//ml/backend/ggml/ggml/src/ggml-cuda/pad.cu:44: GGML_ASSERT(src0->ne[3] == 1 && dst->ne[3] == 1) failed

Looks like the same error as in #10730


@guoyejun commented on GitHub (May 18, 2025):

Is it possible to try llama3.2-vision with docker image 0.6.0? Thanks. There's an error below with the default settings.

docker run --gpus all -v /mypath:/workspace -p 11434:11434 -e HTTPS_PROXY=$https_proxy --name official_ollama ollama/ollama:0.6.0

docker exec -it official_ollama ollama run llama3.2-vision
pulling manifest
Error: pull model manifest: 412:

The model you are attempting to pull requires a newer version of Ollama.

Please download the latest version at:

        https://ollama.com/download

@rick-github commented on GitHub (May 18, 2025):

No, the current version of the llama3.2-vision model requires 0.7.0 or newer. However, the model has some issues: #10731


@guoyejun commented on GitHub (May 18, 2025):

Thanks. My plan is to use a model to OCR text, so I tried qwen2.5vl:7b, and it looks like it works with the Python code below!

import ollama

response = ollama.chat(
    model='qwen2.5vl:7b',
    messages=[{
        'role': 'user',
        'content': 'Please OCR this image and provide all the texts',
        'images': ['1.jpg']
    }]
)

print(response)

I have another question, could you help? Thanks. It might be a simple question for you, but it would require a lot of investigation on my part.

My final goal is to provide a simple web page (visited at localhost) for end users, where they can easily drag in images for OCR and enter prompts. Which direction should I try? Thanks.

(Last week I did a quick, basic try of Open WebUI, but the ollama-openwebui docker image doesn't even serve a web page on ports 11434 or 8080; I may have done something wrong.)
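
One possible direction (a sketch, not confirmed in this thread; it assumes the upstream ghcr.io/open-webui/open-webui image and an ollama container already publishing port 11434 on the host) is to run Open WebUI as a separate container and point it at the Ollama API:

docker run -d -p 3000:8080 \
    -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
    --add-host=host.docker.internal:host-gateway \
    -v open-webui:/app/backend/data \
    --name open-webui ghcr.io/open-webui/open-webui:main

The UI would then be served at http://localhost:3000 and can use the models already pulled into the ollama container.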


@rick-github commented on GitHub (May 18, 2025):

What's your docker config file for open-webui? I have it running in a container and just tried it, works fine.

Image: https://github.com/user-attachments/assets/3aaa54a2-be61-4152-9ca0-8df5025bd4ad


@guoyejun commented on GitHub (May 18, 2025):

I'm able to visit localhost:11434 in my browser when the 'ollama/ollama' docker image is running; the page shows 'Ollama is running'.

But if I change the docker image to thelocallab/ollama-openwebui (run with --name yjguo_ollama_openwebui; did I find the wrong docker image?), I'm unable to visit localhost:11434 or localhost:8080.

$ docker container ls
CONTAINER ID   IMAGE                          COMMAND           CREATED         STATUS          PORTS                                                     NAMES
ae4365d30765   thelocallab/ollama-openwebui   "/app/start.sh"   8 minutes ago   Up 18 seconds   8080/tcp, 0.0.0.0:11434->11434/tcp, :::11434->11434/tcp   yjguo_ollama_openwebui
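
Note that in this listing only port 11434 is published to the host (0.0.0.0:11434->11434/tcp); 8080/tcp is exposed but not mapped, so http://localhost:8080 would not be reachable from the host unless the container is started with the port published, for example (hypothetical, mirroring the flags used above):

docker run --gpus all -p 11434:11434 -p 8080:8080 --name yjguo_ollama_openwebui thelocallab/ollama-openwebui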


@rick-github commented on GitHub (May 18, 2025):

If you started it 8 minutes ago and it's only been up 18 seconds, I suspect it's crash looping. Logs will show why.


@guoyejun commented on GitHub (May 18, 2025):

Thanks, but I think the container is alive. The reason the listing looks like that is that I had just stopped it to try official_ollama, and then restarted the container.

$ docker container ls
CONTAINER ID   IMAGE                          COMMAND           CREATED          STATUS         PORTS                                                     NAMES
ae4365d30765   thelocallab/ollama-openwebui   "/app/start.sh"   16 minutes ago   Up 8 minutes   8080/tcp, 0.0.0.0:11434->11434/tcp, :::11434->11434/tcp   yjguo_ollama_openwebui


$ docker logs yjguo_ollama_openwebui
....
v0.6.9 - building the best AI user interface.

https://github.com/open-webui/open-webui

Fetching 30 files: 100%|██████████| 30/30 [00:00<00:00, 266023.51it/s]
INFO:     Started server process [42]
INFO:     Waiting for application startup.
2025-05-18 15:18:50.433 | INFO     | open_webui.utils.logger:start_logger:140 - GLOBAL_LOG_LEVEL: INFO - {}
2025-05-18 15:18:50.433 | INFO     | open_webui.main:lifespan:464 - Installing external dependencies of functions and tools... - {}
2025-05-18 15:18:50.440 | INFO     | open_webui.utils.plugin:install_frontmatter_requirements:185 - No requirements found in frontmatter. - {}

@rick-github commented on GitHub (May 18, 2025):

In that case, the server was not running for the first 8 minutes of the container life. Now that it's running, does http://localhost:8080 bring up the Open WebUI interface?


@guoyejun commented on GitHub (May 18, 2025):

No, I can access neither http://localhost:8080/ nor http://localhost:11434/.

root@ae4365d30765:/app# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0   2580     0 ?        Ss   15:37   0:00 /bin/sh /app/start.sh
root          11  0.7  0.0 19327376 60616 ?      Sl   15:37   0:00 ollama serve
root          32  0.0  0.0   4612  1792 pts/0    Ss   15:37   0:00 bash
root          50 20.4  0.1 10627772 1014812 ?    Sl   15:37   0:22 /usr/local/bin/python3.11 /usr/local/bin/open-webu
root         205  0.0  0.0   8540  1792 pts/0    R+   15:39   0:00 ps aux

(My proxy isn't working well right now, so I'm unable to install a tool for netstat.)
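
Without netstat, listening sockets can still be checked from /proc; ports appear there in hex (8080 = 1F90, 11434 = 2CAA). A minimal check, assuming a Linux container:

grep -iE ':(1F90|2CAA) ' /proc/net/tcp /proc/net/tcp6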


@rick-github commented on GitHub (May 18, 2025):

This doesn't seem like an ollama issue anymore; thelocallab's Discord can be found at https://discord.gg/5hmB4N4JFc.


@guoyejun commented on GitHub (May 19, 2025):

thanks, will close the issue.
