[GH-ISSUE #12213] Cannot see models under api #8127

Closed
opened 2026-04-12 20:29:59 -05:00 by GiteaMirror · 9 comments

Originally created by @Xxxyz721 on GitHub (Sep 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12213

What is the issue?

OK, so, I'm using the Ollama app on TrueNAS SCALE.

Versions:

Ollama: App Version v0.11.10, Version v1.1.21

Open-WebUI: App Version v0.6.26, Version v1.1.18


The issue is that I'm trying to set up Ollama on the server to be accessible via the API, for n8n and other services to connect to.

I've successfully got Firefox connecting to my local server instance for AI chat (through port 31028 of the WebUI), and that reaches the Ollama back end.

However, going directly to the Ollama API itself (I have set up an API key for n8n to use), either via the n8n Ollama nodes or just in the browser using server:11434/api/tags, shows no models installed (I have many); the JSON output reports an empty "models": [] array.
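For reference, the check being described looks like this ("server" is just a placeholder hostname); the empty array mentioned above is the response that was coming back:

```
curl http://server:11434/api/tags
# response observed: {"models":[]}
```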

Also, if I use /api/models instead of /api/tags, that shows "page not found".

Running the app's CLI (it's a Docker container, I guess), the ollama list command shows no models, and even when a query is running the ollama ps command doesn't show anything as loaded.

The nvidia-smi command, when run in the container CLI, shows that the GPU has a model loaded and is processing.
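For completeness, the same checks run from the host rather than from a shell inside the container would look roughly like this; the container name is a placeholder, since the TrueNAS app may name it differently:

```
docker exec -it ollama ollama list   # came back empty
docker exec -it ollama ollama ps     # nothing listed as loaded
docker exec -it ollama nvidia-smi    # yet the GPU shows a model loaded and busy
```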

I have tried spinning up a second instance of the app (the previous one used port 30068 for some reason) and it picked up all the models etc. straight away through the WebUI, so I guess it's using the same iX application filestore automatically? Again, though, same issues with the API not reporting and the CLI not showing anything.

So,

A. How do I see through the API what models are available or not?
B. If it's not reporting the models back via the API, where do I need to look to address this?
C. Is this an inherent issue with the API under TrueNAS SCALE?
D. I have Dockge as an app to manage Docker instances; is there a simple link to a YAML file to spin up Ollama that way instead? (Though Dockge already shows the "apps" as installed Docker containers. See the compose sketch after the next paragraph.)

Any help would be grand. I can get the n8n nodes for Ollama to connect when I put in the API details and key, but they just don't return the model list. It looks like it's Ollama itself that's not publishing the details. Is there an environment variable I need to set up? I have tried setting the OLLAMA_HOST variable to 0.0.0.0:11434, but that fails, stating the variable is already set....
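For question D above, a minimal docker-compose file that Dockge could manage might look like the sketch below. The image, host path, and GPU stanza are generic assumptions for the official ollama/ollama image, not the TrueNAS app's actual settings; OLLAMA_HOST is set explicitly so the API listens on all interfaces:

```
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"                   # expose the API on the LAN
    environment:
      - OLLAMA_HOST=0.0.0.0:11434       # listen on all interfaces inside the container
    volumes:
      - /mnt/tank/ollama:/root/.ollama  # placeholder host path for the model store
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```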

Ta,

STu.

Relevant log output


OS

TrueNAS SCALE 25.xx

GPU

GTX 1660 Super running 580.xx NVIDIA drivers

CPU

Twin 2650 V2

Ollama version

v1.11.21

GiteaMirror added the bug label 2026-04-12 20:29:59 -05:00

@rick-github commented on GitHub (Sep 8, 2025):

You may be running a version of Open-WebUI that has a built-in ollama server, in which case the ollama server responding to server:11434 is superfluous. In either case, if OpenWebUI is successfully using ollama models, the ollama endpoints are available with the prefix /ollama. For example:

```
curl server:31028/ollama/api/tags -H "Authorization: Bearer $API_KEY"
```

@Xxxyz721 commented on GitHub (Sep 8, 2025):

Righto.

OK, so it seems I can't use "http://ollama2:11434" (the container name) as the API address, nor 0.0.0.0:11434....

If I specify http://172.16.4.1:11434 as the API address (the internal Docker IP address for the instance), it connects but lists no models, and if I use http://192.168.1.16:11434 (the LAN-facing network port for Ollama), that connects, but again no models.

So, I guess I just have to push everything through 31028?

curl is seemingly not available in the Ollama container.....

http://192.168.1.16:31028/ollama/api/tags gives me a list of the models in use :).

So... now I need to configure the n8n nodes to look at /ollama/api/ rather than just /api/.

Definitely getting somewhere; I'll keep trying and let you know :).


@rick-github commented on GitHub (Sep 8, 2025):

An alternative is to expose the internal ollama server; then you would be able to connect to 192.168.1.16:11434/api/tags. I'm not familiar with TrueNAS, but since you are running containers I'm assuming it's Docker-like, in which case you would turn down the ollama2 container and add a port mapping to the open-webui container: 11434:11434. Note that since this makes a direct connection to the ollama server, the API key is no longer required.
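In compose terms, that suggestion amounts to something like the sketch below; the service and image names are assumptions (the :ollama tag being the bundled-ollama variant of Open-WebUI), and the relevant part is the extra 11434 mapping:

```
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama  # bundled-ollama variant (assumed)
    ports:
      - "31028:8080"    # WebUI front end, as used in this thread
      - "11434:11434"   # expose the bundled ollama API directly on the LAN
```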


@Xxxyz721 commented on GitHub (Sep 8, 2025):

http://192.168.1.16:31028/ollama/api/ as the API entry for the node does seem to connect, but again I'm not seeing models, plus when running a test I'm getting a 405 Method Not Allowed error (I guess it's just shoving the details into the wrong hole.... :) )


@Xxxyz721 commented on GitHub (Sep 8, 2025):

> An alternative is to expose the internal ollama server; then you would be able to connect to 192.168.1.16:11434/api/tags. I'm not familiar with TrueNAS, but since you are running containers I'm assuming it's Docker-like, in which case you would turn down the ollama2 container and add a port mapping to the open-webui container: 11434:11434. Note that since this makes a direct connection to the ollama server, the API key is no longer required.

Might give that one a try. Still struggling to get n8n to see the models and still getting the 405 errors. It's probably trying to refer to the wrong path or suchlike through the API, but be damned if I can see how to edit it.


@Xxxyz721 commented on GitHub (Sep 8, 2025):

Can't see in the Open-WebUI config how to publish the 11434 port....

Grr. It does work if I pull the model down manually into Ollama via the CLI; then I can select it when configuring the node.

Seems like the WebUI is just another pointless layer in the way.....

I want to be able to use what I've downloaded already without having to duplicate models etc.

I'll try pointing both at the same host server directory and see if that works, so if I download via the WebUI it is then shared with Ollama directly...
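For what it's worth, ollama keeps its model store under /root/.ollama inside the container (or wherever OLLAMA_MODELS points), so sharing models between the bundled and standalone instances would mean mounting the same host directory in both. A rough sketch, with a placeholder host path:

```
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - /mnt/tank/ollama-models:/root/.ollama   # same host directory in both services
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama
    volumes:
      - /mnt/tank/ollama-models:/root/.ollama   # models pulled by either are visible to both
```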


@rick-github commented on GitHub (Sep 8, 2025):

> Can't see in the Open-WebUI config how to publish the 11434 port....

The port publishing is done in TrueNAS, not Open-WebUI.


@Xxxyz721 commented on GitHub (Sep 8, 2025):

> > Can't see in the Open-WebUI config how to publish the 11434 port....
>
> The port publishing is done in TrueNAS, not Open-WebUI.

Yep. Looking at the Open-WebUI config, it lists the front-end 31028 port as the API, but nothing additional, it seems, for other ports or links to Ollama...


@pdevine commented on GitHub (Sep 15, 2025):

I think this is answered (and it sounds like a non-Ollama issue)? I'll go ahead and close it, but we can reopen.
