[GH-ISSUE #8771] Mac: Deepseek R1, Ollama and Docker installed but need help getting WebUI to work #67750

Closed
opened 2026-05-04 11:34:29 -05:00 by GiteaMirror · 17 comments
Owner

Originally created by @banterer on GitHub (Feb 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8771

Hello,

I'm not a coder so please be gentle.

I have a MBP w/64gb ram and Apple Silicon M3 Max. I installed Deepseek R1, Ollama and Docker. Deepseek is running a terminal window and seems to be actually thinking and answering questions but I would like to be able to run it using a graphical interface. After following various different sites on how to run it using WebUI, it doesn't matter what I paste (http://localhost:3000/, http://localhost:8080/ and a few others) I am greeted with the same answer:

This site can’t be reached
localhost refused to connect.
Try:

Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED

I opened another terminal window and tried another approach but since I am not a coder I do not know if I am doing more harm than good.

Js-MacBook-Pro-2:~ mbp161219$ docker --version
Docker version 27.4.0, build bde2b89
Js-MacBook-Pro-2:~ mbp161219$ docker-compose --version
Docker Compose version v2.31.0-desktop.2
Js-MacBook-Pro-2:~ mbp161219$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Js-MacBook-Pro-2:~ mbp161219$ lsof -i :3000
Js-MacBook-Pro-2:~ mbp161219$ kill -9 $(lsof -t -i:3000)
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Js-MacBook-Pro-2:~ mbp161219$ docker run -p 8080:3000 ghcr.io/open-webui/open-webui:main
Unable to find image 'ghcr.io/open-webui/open-webui:main' locally
main: Pulling from open-webui/open-webui

I kept on following instructions and eventually got this:


/ _ \ _ __ ___ _ __ \ \ / /| | | | | |_ |
| | | | '
\ / _ \ '_ \ \ \ /\ / / _ \ '_ | | | || |
| || | |) | / | | | \ V V / / |) | || || |
_
/| .
/ _|| || _/_/ _|./ _/|_|
|
|

v0.5.7 - building the best open-source AI user interface.

https://github.com/open-webui/open-webui

Fetching 30 files: 100%|██████████| 30/30 [00:50<00:00, 1.69s/it]
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
docker ps
time="2025-02-01T18:19:30-06:00" level=error msg="error waiting for container: unexpected EOF"
Js-MacBook-Pro-2:~ mbp161219$ docker ps
Cannot connect to the Docker daemon at unix:///Users/mbp161219/.docker/run/docker.sock. Is the docker daemon running?
Js-MacBook-Pro-2:~ mbp161219$

In a third window I got up to this:

Js-MacBook-Pro-2:~ mbp161219$ ollama list
NAME ID SIZE MODIFIED
deepseek-r1:latest 0a8c26691023 4.7 GB 3 hours ago
deepseek-r1:70b 0c1615a8ca32 42 GB 3 hours ago

In Docker this is as far as I've been able to get:

Image

What I need is instructions on how and what I need to do to get Deepseek to run using a GUI.

Thanks. I hope I gave enough information.
Jorge

Originally created by @banterer on GitHub (Feb 2, 2025). Original GitHub issue: https://github.com/ollama/ollama/issues/8771 Hello, I'm not a coder so please be gentle. I have a MBP w/64gb ram and Apple Silicon M3 Max. I installed Deepseek R1, Ollama and Docker. Deepseek is running a terminal window and seems to be actually thinking and answering questions but I would like to be able to run it using a graphical interface. After following various different sites on how to run it using WebUI, it doesn't matter what I paste (http://localhost:3000/, http://localhost:8080/ and a few others) I am greeted with the same answer: > This site can’t be reached localhost refused to connect. Try: Checking the connection [Checking the proxy and the firewall](chrome-error://chromewebdata/#buttons) ERR_CONNECTION_REFUSED I opened another terminal window and tried another approach but since I am not a coder I do not know if I am doing more harm than good. > Js-MacBook-Pro-2:~ mbp161219$ docker --version Docker version 27.4.0, build bde2b89 Js-MacBook-Pro-2:~ mbp161219$ docker-compose --version Docker Compose version v2.31.0-desktop.2 Js-MacBook-Pro-2:~ mbp161219$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Js-MacBook-Pro-2:~ mbp161219$ lsof -i :3000 Js-MacBook-Pro-2:~ mbp161219$ kill -9 $(lsof -t -i:3000) kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec] Js-MacBook-Pro-2:~ mbp161219$ docker run -p 8080:3000 ghcr.io/open-webui/open-webui:main Unable to find image 'ghcr.io/open-webui/open-webui:main' locally main: Pulling from open-webui/open-webui I kept on following instructions and eventually got this: > ___ __ __ _ _ _ ___ / _ \ _ __ ___ _ __ \ \ / /__| |__ | | | |_ _| | | | | '_ \ / _ \ '_ \ \ \ /\ / / _ \ '_ \| | | || | | |_| | |_) | __/ | | | \ V V / __/ |_) | |_| || | \___/| .__/ \___|_| |_| \_/\_/ \___|_.__/ \___/|___| |_| v0.5.7 - building the best open-source AI user interface. https://github.com/open-webui/open-webui Fetching 30 files: 100%|██████████| 30/30 [00:50<00:00, 1.69s/it] INFO: Started server process [1] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit) docker ps time="2025-02-01T18:19:30-06:00" level=error msg="error waiting for container: unexpected EOF" Js-MacBook-Pro-2:~ mbp161219$ docker ps Cannot connect to the Docker daemon at unix:///Users/mbp161219/.docker/run/docker.sock. Is the docker daemon running? Js-MacBook-Pro-2:~ mbp161219$ In a third window I got up to this: > Js-MacBook-Pro-2:~ mbp161219$ ollama list NAME ID SIZE MODIFIED deepseek-r1:latest 0a8c26691023 4.7 GB 3 hours ago deepseek-r1:70b 0c1615a8ca32 42 GB 3 hours ago In Docker this is as far as I've been able to get: <img width="1442" alt="Image" src="https://github.com/user-attachments/assets/372a9403-4018-47d6-b4c0-a9f5e31d48a3" /> What I need is instructions on how and what I need to do to get Deepseek to run using a GUI. Thanks. I hope I gave enough information. Jorge
Author
Owner

@banterer commented on GitHub (Feb 2, 2025):

Follow up:

This is what Deepseek tells me to do but I hesitate doing anything more in terminal for fear of irreparably damaging something:

To access DeepSeek via the OpenWebUI web interface, follow these organized steps:

1. Verify Docker Installation

  • Ensure Docker is installed and running on your system. You can check this by running:
    docker --version
    
  • On Ubuntu/Debian systems, you might need to install Docker if it's not already present.

2. Install Required Dependencies

  • Depending on your Linux distribution, install additional packages if needed. For example, on Ubuntu, run these commands:
    sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli docker-ce-verlet python3 docker-compose-plugin && rm -rf 
    

/var/lib/docker/daemons/
```

3. Configure Docker for API Access

  • Ensure you have the necessary environment variables set, such as:
    export NODE_ENV=development
    
  • This variable enables Node.js to run in development mode.

4. Set Up Environment Variables

  • Create or update a .env file with your DeepSeek API credentials if required. For example:
    DEEPSEEK_API_KEY=your_api_key_here
    
  • Ensure the path to your OpenWebUI server is correctly set in the environment variables.

5. Start Docker Containers

  • Use the following command to start your Docker containers with --keep-alive enabled:
    docker run --keep-alive -p 3000:8080 -p 47916:443 deepseeklabs/deepseek-ai:latest --env-file .env && ls -l
    
  • This command does the following:
    • Starts DeepSeek in development mode.
    • Maps port 3000 to your OpenWebUI's localhost:8080 (adjust if needed).
    • Maps port 47916 to HTTP (if applicable, adjust based on actual setup).

6. Access the Web Interface

  • After starting the containers, open your web browser and navigate to http://localhost:8080.
  • Log in with any credentials you have configured.

7. Troubleshooting

  • If DeepSeek isn't accessible:
    • Ensure Docker is running and all services are active.
    • Check if environment variables like DEEPSEEK_API_KEY or NODE_ENV are correctly set.
    • Verify that the correct ports are mapped in the Docker command.

8. Alternative Setup

  • If using a CLI instead of the web interface, ensure you have initialized the API and added the necessary keys:
    curl --data-binary -X POST https://<deepseek_api_url>/api/v1/access-token \
        -H "Content-Type: application/json" \
        -d '{"grant_type":"password","username":"your_username","password":"your_password"}'
    

By following these steps, you should be able to access DeepSeek through the OpenWebUI web interface.

<!-- gh-comment-id:2629209856 --> @banterer commented on GitHub (Feb 2, 2025): Follow up: This is what Deepseek tells me to do but I hesitate doing anything more in terminal for fear of irreparably damaging something: > To access DeepSeek via the OpenWebUI web interface, follow these organized steps: ### 1. **Verify Docker Installation** - Ensure Docker is installed and running on your system. You can check this by running: ```bash docker --version ``` - On Ubuntu/Debian systems, you might need to install Docker if it's not already present. ### 2. **Install Required Dependencies** - Depending on your Linux distribution, install additional packages if needed. For example, on Ubuntu, run these commands: ```bash sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli docker-ce-verlet python3 docker-compose-plugin && rm -rf /var/lib/docker/daemons/ ``` ### 3. **Configure Docker for API Access** - Ensure you have the necessary environment variables set, such as: ```bash export NODE_ENV=development ``` - This variable enables Node.js to run in development mode. ### 4. **Set Up Environment Variables** - Create or update a `.env` file with your DeepSeek API credentials if required. For example: ```text DEEPSEEK_API_KEY=your_api_key_here ``` - Ensure the path to your OpenWebUI server is correctly set in the environment variables. ### 5. **Start Docker Containers** - Use the following command to start your Docker containers with `--keep-alive` enabled: ```bash docker run --keep-alive -p 3000:8080 -p 47916:443 deepseeklabs/deepseek-ai:latest --env-file .env && ls -l ``` - This command does the following: - Starts DeepSeek in development mode. - Maps port 3000 to your OpenWebUI's localhost:8080 (adjust if needed). - Maps port 47916 to HTTP (if applicable, adjust based on actual setup). ### 6. **Access the Web Interface** - After starting the containers, open your web browser and navigate to `http://localhost:8080`. - Log in with any credentials you have configured. ### 7. **Troubleshooting** - If DeepSeek isn't accessible: - Ensure Docker is running and all services are active. - Check if environment variables like `DEEPSEEK_API_KEY` or `NODE_ENV` are correctly set. - Verify that the correct ports are mapped in the Docker command. ### 8. **Alternative Setup** - If using a CLI instead of the web interface, ensure you have initialized the API and added the necessary keys: ```bash curl --data-binary -X POST https://<deepseek_api_url>/api/v1/access-token \ -H "Content-Type: application/json" \ -d '{"grant_type":"password","username":"your_username","password":"your_password"}' ``` By following these steps, you should be able to access DeepSeek through the OpenWebUI web interface.
Author
Owner

@hamkido commented on GitHub (Feb 2, 2025):

Do you mean you are currently running the deepseek model locally using ollama and want to use openwebui as the chat UI?
Or do you want to use the cloud deepseek model with openwebui?
Please correct me if I understand you wrong.

<!-- gh-comment-id:2629317168 --> @hamkido commented on GitHub (Feb 2, 2025): Do you mean you are currently running the deepseek model locally using ollama and want to use openwebui as the chat UI? Or do you want to use the cloud deepseek model with openwebui? Please correct me if I understand you wrong.
Author
Owner

@banterer commented on GitHub (Feb 2, 2025):

Correct, I have it installed locally and want to use it locally using a GUI.

<!-- gh-comment-id:2629434581 --> @banterer commented on GitHub (Feb 2, 2025): Correct, I have it installed locally and want to use it locally using a GUI.
Author
Owner

@hamkido commented on GitHub (Feb 2, 2025):

You should check your ollama port configuration,
ollama default port is 11434.
Or you can configure it yourself with docker
Port:

  • xxxx:11434

Try curl ip:port
If it returns "Ollama is running"
Put the link into openwebui, I use host.docker.internal:11434

Image

<!-- gh-comment-id:2629472916 --> @hamkido commented on GitHub (Feb 2, 2025): You should check your ollama port configuration, ollama default port is 11434. Or you can configure it yourself with docker Port: - xxxx:11434 Try curl ip:port If it returns "Ollama is running" Put the link into openwebui, I use host.docker.internal:11434 ![Image](https://github.com/user-attachments/assets/96acb771-477e-49f0-bd4e-afea01ac12c4)
Author
Owner

@ionmeo commented on GitHub (Feb 2, 2025):

I also had issues installing open-webui with Docker. However, using pip worked for me. Run:

pip install open-webui

Then run:

open-webui serve

Open-WebUI will be at http://localhost:8080

If you don't have pip installed, here are the instructions.
If you don't have python installed, you can download from here

<!-- gh-comment-id:2629551717 --> @ionmeo commented on GitHub (Feb 2, 2025): I also had issues installing open-webui with Docker. However, using `pip` worked for me. Run: ``` pip install open-webui ``` Then run: ``` open-webui serve ``` Open-WebUI will be at http://localhost:8080 If you don't have pip installed, [here](https://www.geeksforgeeks.org/how-to-install-pip-in-macos) are the instructions. If you don't have python installed, you can download from [here](https://www.python.org/downloads)
Author
Owner

@banterer commented on GitHub (Feb 3, 2025):

You should check your ollama port configuration, ollama default port is 11434. Or you can configure it yourself with docker Port:

  • xxxx:11434...

Sorry, I am not a coder so I am unable to follow any of that. Before I had to admit how little I know about this, I tried using Brave's AI, Leo and it only gave me more information I could not follow:

Configuring Docker Ports on Mac
To configure a port with Docker on a Mac, you can use the docker run command with the -p option. This option maps a container port to a host port. Here's an example:

docker run -d --name my_container -p 44:80 -p 344:443 my_image
In this example, port 80 in the container is mapped to port 44 on the host, and port 443 in the container is mapped to port 344 on the host.

If you need to modify the ports after the container is running, you'll need to stop the container, delete it, and recreate it with the new port configuration. Alternatively, if you are using Docker Desktop, you can modify the configuration in the Docker settings to enable privileged port mapping and set other advanced options.

To expose a container's ports to the host machine, ensure that your server inside the container is configured to listen on 0.0.0.0 instead of localhost or 127.0.0.0.1. For example:

webpack-dev-server --port 8080 --host 0.0.0.0
This configuration makes the server accessible from any host machine. If you need to connect to a service running on the host machine from within a Docker container, you can use the host.docker.internal hostname or resolve the IP address manually using nslookup or similar tools.

For more detailed configuration options, you can refer to Docker's documentation on advanced settings and Docker Desktop settings management.

Where would I find the docker commands for ollama port configuration? If I can change the port in Docker, please try to hand-feed me instructions on how to do this since the first time I even heard of Docker was yesterday.

Try curl ip:port
Is this something I put in terminal?

If it returns "Ollama is running"
Put the link into openwebui, I use host.docker.internal:11434

My apologies since I know it probably sounds as though I should not even be attempting this if I cannot understand any of the jargon but since I don't know anyone who would have been able to walk me through this, I had to attempt doing it myself even though I had no idea how to do it.

Your attempt at helping is much appreciated.
Jorge

<!-- gh-comment-id:2629841198 --> @banterer commented on GitHub (Feb 3, 2025): > You should check your ollama port configuration, ollama default port is 11434. Or you can configure it yourself with docker Port: > > * xxxx:11434... Sorry, I am not a coder so I am unable to follow any of that. Before I had to admit how little I know about this, I tried using Brave's AI, Leo and it only gave me more information I could not follow: > Configuring Docker Ports on Mac > To configure a port with Docker on a Mac, you can use the docker run command with the -p option. This option maps a container port to a host port. Here's an example: > > docker run -d --name my_container -p 44:80 -p 344:443 my_image > In this example, port 80 in the container is mapped to port 44 on the host, and port 443 in the container is mapped to port 344 on the host. > > If you need to modify the ports after the container is running, you'll need to stop the container, delete it, and recreate it with the new port configuration. Alternatively, if you are using Docker Desktop, you can modify the configuration in the Docker settings to enable privileged port mapping and set other advanced options. > > To expose a container's ports to the host machine, ensure that your server inside the container is configured to listen on 0.0.0.0 instead of localhost or 127.0.0.0.1. For example: > > webpack-dev-server --port 8080 --host 0.0.0.0 > This configuration makes the server accessible from any host machine. If you need to connect to a service running on the host machine from within a Docker container, you can use the host.docker.internal hostname or resolve the IP address manually using nslookup or similar tools. > > For more detailed configuration options, you can refer to Docker's documentation on advanced settings and Docker Desktop settings management. Where would I find the docker commands for ollama port configuration? If I can change the port in Docker, please try to hand-feed me instructions on how to do this since the first time I even heard of Docker was yesterday. > Try curl ip:port Is this something I put in terminal? > If it returns "Ollama is running" > Put the link into openwebui, I use host.docker.internal:11434 My apologies since I know it probably sounds as though I should not even be attempting this if I cannot understand any of the jargon but since I don't know anyone who would have been able to walk me through this, I had to attempt doing it myself even though I had no idea how to do it. Your attempt at helping is much appreciated. Jorge
Author
Owner

@banterer commented on GitHub (Feb 3, 2025):

I also had issues installing open-webui with Docker. However, using pip worked for me. Run:

pip install open-webui

Then run:

open-webui serve

Open-WebUI will be at http://localhost:8080

If you don't have pip installed, here are the instructions. If you don't have python installed, you can download from here

Hello,

As I said before, I am not a programmer so I had to look up what "pip" was and seeaw that it is part of Python which is yet something else I do not know. So the thing is, even if I were to install Python, I still would not know how to "Run" anything. If it is as easy as installing Python and then pip and double clicking on Python and typing "pip install open-webui" then I to can try it but if I have to be compiling/executing and doing all sorts of programming like maneuvers, I will have to pass since I will be over my head.

Please advise,
Jorge

<!-- gh-comment-id:2629849484 --> @banterer commented on GitHub (Feb 3, 2025): > I also had issues installing open-webui with Docker. However, using `pip` worked for me. Run: > > ``` > pip install open-webui > ``` > > Then run: > > ``` > open-webui serve > ``` > > Open-WebUI will be at http://localhost:8080 > > If you don't have pip installed, [here](https://www.geeksforgeeks.org/how-to-install-pip-in-macos) are the instructions. If you don't have python installed, you can download from [here](https://www.python.org/downloads) Hello, As I said before, I am not a programmer so I had to look up what "pip" was and seeaw that it is part of Python which is yet something else I do not know. So the thing is, even if I were to install Python, I still would not know how to "Run" anything. If it is as easy as installing Python and then pip and double clicking on Python and typing "pip install open-webui" then I to can try it but if I have to be compiling/executing and doing all sorts of programming like maneuvers, I will have to pass since I will be over my head. Please advise, Jorge
Author
Owner

@ionmeo commented on GitHub (Feb 3, 2025):

After installing python, by 'run', I meant you have to paste the command in the terminal and hit Enter. So, to install 'pip', paste the following command in terminal and hit Enter.

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py

Then, run the following

python3 get-pip.py

After these two, run the two commands I mentioned before. If you have any other questions or face any issues, feel free to mention them.

<!-- gh-comment-id:2629912361 --> @ionmeo commented on GitHub (Feb 3, 2025): After installing python, by 'run', I meant you have to paste the command in the terminal and hit Enter. So, to install 'pip', paste the following command in terminal and hit Enter. ``` curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py ``` Then, run the following ``` python3 get-pip.py ``` After these two, run the two commands I mentioned before. If you have any other questions or face any issues, feel free to mention them.
Author
Owner

@rick-github commented on GitHub (Feb 3, 2025):

If you're not a programmer, it's probably best to get it running in docker rather than installing pip and dependencies.

Because you are running open-webui in a docker container and ollama is running natively on your system, there's a config change you have to do with ollama. You need to set OLLAMA_HOST=0.0.0.0 to allow open-webui to connect.

If you have ollama running (check with ollama -v) then you just need to get open-webui running and configured. In your post, you ran

docker run -p 8080:3000 ghcr.io/open-webui/open-webui:main

This starts an open-webui server but the port mapping is incorrect: from the output in your post, open-webui is listening on port 8080. So you want to run this:

docker run -p 8080:8080 -d -e WEBUI_AUTH=false ghcr.io/open-webui/open-webui:main

The WEBUI_AUTH variable turns off authentication which is more convenient if it's just you using the interface. If you expect multiple users, remove -e WEBUI_AUTH=false.

Now you should be able to connect to http://localhost:8080 and see the open-webui UI. Now you need to configure open-webui to connect to ollama. Click on the "User" icon, then "Settings", then "Admin Settings", the "Connections". Because ollama is not in a container, you need to specify the IP address of your host, so in the text box under "Ollama API", enter "http://<IP address of your machine>:11434". Click on the "Verify connection" on the very right hand side and hopefully open-webui will show a "Server connection verified" notification.

Note that running open-webui this way results in no state being saved - if the container is restarted, you will lose the configuration, any saved conversations, model teaks, etc. You can add a volume to the container to save state:

docker run -p 8080:8080 -d -e WEBUI_AUTH=false -v open-webui-data:/app/backend/data ghcr.io/open-webui/open-webui:main
<!-- gh-comment-id:2631126006 --> @rick-github commented on GitHub (Feb 3, 2025): If you're not a programmer, it's probably best to get it running in docker rather than installing pip and dependencies. Because you are running open-webui in a docker container and ollama is running natively on your system, there's a config change you have to do with ollama. You need to [set](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-mac) `OLLAMA_HOST=0.0.0.0` to allow open-webui to connect. If you have ollama running (check with `ollama -v`) then you just need to get open-webui running and configured. In your post, you ran ``` docker run -p 8080:3000 ghcr.io/open-webui/open-webui:main ``` This starts an open-webui server but the port mapping is incorrect: from the output in your post, open-webui is listening on port 8080. So you want to run this: ``` docker run -p 8080:8080 -d -e WEBUI_AUTH=false ghcr.io/open-webui/open-webui:main ``` The WEBUI_AUTH variable turns off authentication which is more convenient if it's just you using the interface. If you expect multiple users, remove `-e WEBUI_AUTH=false`. Now you should be able to connect to http://localhost:8080 and see the open-webui UI. Now you need to configure open-webui to connect to ollama. Click on the "User" icon, then "Settings", then "Admin Settings", the "Connections". Because ollama is not in a container, you need to specify the IP address of your host, so in the text box under "Ollama API", enter "http://\<IP address of your machine\>:11434". Click on the "Verify connection" on the very right hand side and hopefully open-webui will show a "Server connection verified" notification. Note that running open-webui this way results in no state being saved - if the container is restarted, you will lose the configuration, any saved conversations, model teaks, etc. You can add a volume to the container to save state: ``` docker run -p 8080:8080 -d -e WEBUI_AUTH=false -v open-webui-data:/app/backend/data ghcr.io/open-webui/open-webui:main ```
Author
Owner

@banterer commented on GitHub (Feb 3, 2025):

Hi rick-github,

Thanks for your comprehensive reply. It sounds like you've gone through everything I wrote and thus it sounds as though I can actually get this to work so before I do, I'd like to ask you and the others if what I am doing is advisable.

I try to be a privacy nut and do not use social media and stopped using gmail years ago. I just downloaded and installed 42gb of unknown but powerful software that seems to be able to reason and do things that to me seem quite miraculous and unreasonable. It can do all these things ONTO MY COMPUTER. I downloaded it so that my questions and queries would not go to the "cloud" and I could ask questions on and offline at my convenience but is it possible and more than likely probable that this is just some another "free" software/service from the likes of evildoers to get us to install these programs to actually do much worse than send my queries to the cloud but now that it has access to ALL OF MY DATA, passwords and everything else, to simply and slowly send EVERYTHING I have and every keystroke I make to regions unknown and who do not have my best interest at heart?

I realize that no one could have possibly gone through all 42gb of this code but I'm just spitballin' here.

Should someone who is anti-government and anti almost everything public and google/apple/microsoft... install this software on their personal computer?

Thanks,
Jorge

<!-- gh-comment-id:2631453084 --> @banterer commented on GitHub (Feb 3, 2025): Hi rick-github, Thanks for your comprehensive reply. It sounds like you've gone through everything I wrote and thus it sounds as though I can actually get this to work so before I do, I'd like to ask you and the others if what I am doing is advisable. I try to be a privacy nut and do not use social media and stopped using gmail years ago. I just downloaded and installed 42gb of unknown but powerful software that seems to be able to reason and do things that to me seem quite miraculous and unreasonable. It can do all these things ONTO MY COMPUTER. I downloaded it so that my questions and queries would not go to the "cloud" and I could ask questions on and offline at my convenience but is it possible and more than likely probable that this is just some another "free" software/service from the likes of evildoers to get us to install these programs to actually do much worse than send my queries to the cloud but now that it has access to ALL OF MY DATA, passwords and everything else, to simply and slowly send EVERYTHING I have and every keystroke I make to regions unknown and who do not have my best interest at heart? I realize that no one could have possibly gone through all 42gb of this code but I'm just spitballin' here. Should someone who is anti-government and anti almost everything public and google/apple/microsoft... install this software on their personal computer? Thanks, Jorge
Author
Owner

@rick-github commented on GitHub (Feb 3, 2025):

Should someone who is anti-government and anti almost everything public and google/apple/microsoft... install this software on their personal computer?

It depends on whether you can trust random people on the internet (ie, me) who give you advice. I've been in the industry a long time, worked for some of the companies that you are anti, and am careful about my online presence. I have been through the software and I am confident that there is nothing that will send data to the TLAs.

Trust me, bro.

That's not to say that it's impossible. But if a TLA can insert a backdoor so subtle that none of the people who read the code recognize it, or intercept a download over an encrypted channel and insert code to compromise your system, then it's too late to worry about it - they have everything they want already. As for the model itself, it's just a collection of numbers interpreted by the software - whether it's made in a foreign country or not, the likelihood of a threat vector from a model is extremely remote. (Again, not impossible, but if somebody has found a way to influence a model running in an inference engine in such a way as to compromise your data, well, all bets are off).

If you are concerned about data exfiltration, you should make sure your systems are behind firewalls that control both inbound and outbound traffic.

If you want to read something that does nothing at all to allay the fears of computer compromise, read this. The creator of the hack described in that web page is one of the authors of the Go language, which is what ollama is written in.

<!-- gh-comment-id:2631598230 --> @rick-github commented on GitHub (Feb 3, 2025): > Should someone who is anti-government and anti almost everything public and google/apple/microsoft... install this software on their personal computer? It depends on whether you can trust random people on the internet (ie, me) who give you advice. I've been in the industry a long time, worked for some of the companies that you are anti, and am careful about my online presence. I have been through the software and I am confident that there is nothing that will send data to the TLAs. Trust me, bro. That's not to say that it's impossible. But if a TLA can insert a backdoor so subtle that none of the people who read the code recognize it, or intercept a download over an encrypted channel and insert code to compromise your system, then it's too late to worry about it - they have everything they want already. As for the model itself, it's just a collection of numbers interpreted by the software - whether it's made in a foreign country or not, the likelihood of a threat vector from a model is extremely remote. (Again, not impossible, but if somebody has found a way to influence a model running in an inference engine in such a way as to compromise your data, well, all bets are off). If you are concerned about data exfiltration, you should make sure your systems are behind firewalls that control both inbound and outbound traffic. If you want to read something that does nothing at all to allay the fears of computer compromise, read [this](https://wiki.c2.com/?TheKenThompsonHack). The creator of the hack described in that web page is one of the authors of the Go language, which is what ollama is written in.
Author
Owner

@banterer commented on GitHub (Feb 3, 2025):

Got it, thanks and "I thought as much".

Where can I go to find out how to beef up my mac os "firewall settings"? I did not see any options for how to "control outbound" traffic only inbound connections.

<!-- gh-comment-id:2631679403 --> @banterer commented on GitHub (Feb 3, 2025): Got it, thanks and "I thought as much". Where can I go to find out how to beef up my mac os "firewall settings"? I did not see any options for how to "control outbound" traffic only inbound connections.
Author
Owner

@rick-github commented on GitHub (Feb 3, 2025):

I'm afraid you have me at a disadvantage there, I'm a linux guy (because I can read the source code to my OS, which Mac an WIndows users can't do). Apple have an article on setting firewall settings here.

<!-- gh-comment-id:2631686660 --> @rick-github commented on GitHub (Feb 3, 2025): I'm afraid you have me at a disadvantage there, I'm a linux guy (because I can read the source code to my OS, which Mac an WIndows users can't do). Apple have an article on setting firewall settings [here](https://support.apple.com/guide/mac-help/block-connections-to-your-mac-with-a-firewall-mh34041/mac).
Author
Owner

@banterer commented on GitHub (Feb 4, 2025):

No worries.

I have another question. I asked the same question to my Deepseek-R1 and also to https://chat.deepseek.com/ and I got completely different answers. Mine did not give me a direct answer but more of an opinion and suggested I take specific measurements while the online version really paid attention to my question and gave me actual numbers based on a "typical kitchen". Basically, it took a look at the typical kitchen, took an average and based his answers on that while mine decided it did not have enough data to give me an answer and chose to give me an answer to help me keep on looking and refining my question.

What I want to know is this: Do these programs take feedback from you regarding their answers and build (learn) on that for future questions and is that why the internet version is so much more direct? I realize that mine is slow because I do not have a farm powering this thing and I have other programs running but is it also because my version was just "born" yesterday and it needs to get feedback from me?

<!-- gh-comment-id:2632791274 --> @banterer commented on GitHub (Feb 4, 2025): No worries. I have another question. I asked the same question to my Deepseek-R1 and also to https://chat.deepseek.com/ and I got completely different answers. Mine did not give me a direct answer but more of an opinion and suggested I take specific measurements while the online version really paid attention to my question and gave me actual numbers based on a "typical kitchen". Basically, it took a look at the typical kitchen, took an average and based his answers on that while mine decided it did not have enough data to give me an answer and chose to give me an answer to help me keep on looking and refining my question. What I want to know is this: Do these programs take feedback from you regarding their answers and build (learn) on that for future questions and is that why the internet version is so much more direct? I realize that mine is slow because I do not have a farm powering this thing and I have other programs running but is it also because my version was just "born" yesterday and it needs to get feedback from me?
Author
Owner

@rick-github commented on GitHub (Feb 4, 2025):

I don't know the specifics of the Deepseek online service, but generally, no, models don't "learn". They maintain a context for the conversation you are currently engaged in, but when you start a new chat, previous chats are not part of that new conversation. If you have one long extended chat with deepseek then eventually you will reach the limit of the context window and the model will start to "forget" older parts of the conversation.

There are tricks that can be done to make a model appear to have long term memory. For example, rather than just removing older parts of the conversation, they might be replaced by a summary. Some detail will be lost but the essence of the conversation is retained. Another approach is to move the older parts to external storage, eg a RAG system. I don't know if Deepseek employs these methods.

For your local model, you can simulate something like persistent memory in open-webui with the "Memory" feature. In "User" > "Settings" > "Personalization" you can add pertinent information that is maintained across chats. Note that this will consume part of the context window, so if you put a lot of stuff in there, you will need to increase the size of the context window to accommodate the facts and the conversation.

I think the reason for the differences in the responses between the online and local versions of deepseek is that they are not the same model. In your original post, an ollama list showed deepseek-r1:70b. If that's the local model you asked about the kitchen, that's not the same model as the online version. deepseek-r1:70b is a distilled version of llama, ie a version of llama that has been "taught" to reason like deepseek. The local version of the online deepseek model is deepseek-r1:671b, which is very large and unwieldy for consumer grade hardware.

<!-- gh-comment-id:2633567822 --> @rick-github commented on GitHub (Feb 4, 2025): I don't know the specifics of the Deepseek online service, but generally, no, models don't "learn". They maintain a context for the conversation you are currently engaged in, but when you start a new chat, previous chats are not part of that new conversation. If you have one long extended chat with deepseek then eventually you will reach the limit of the context window and the model will start to "forget" older parts of the conversation. There are tricks that can be done to make a model appear to have long term memory. For example, rather than just removing older parts of the conversation, they might be replaced by a summary. Some detail will be lost but the essence of the conversation is retained. Another approach is to move the older parts to external storage, eg a [RAG](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) system. I don't know if Deepseek employs these methods. For your local model, you can simulate something like persistent memory in open-webui with the "Memory" feature. In "User" > "Settings" > "Personalization" you can add pertinent information that is maintained across chats. Note that this will consume part of the context window, so if you put a lot of stuff in there, you will need to increase the size of the context window to accommodate the facts and the conversation. I think the reason for the differences in the responses between the online and local versions of deepseek is that they are not the same model. In your original post, an `ollama list` showed deepseek-r1:70b. If that's the local model you asked about the kitchen, that's not the same model as the online version. deepseek-r1:70b is a distilled version of [llama](https://ollama.com/library/llama3.3), ie a version of llama that has been "taught" to reason like deepseek. The local version of the online deepseek model is [deepseek-r1:671b](https://ollama.com/library/deepseek-r1:671b-fp16), which is very large and unwieldy for consumer grade hardware.
Author
Owner

@banterer commented on GitHub (Feb 10, 2025):

Hi Rick,

I do not know if this is the right place to put this and if it is not, please tell me where the right place is.

Its response suggests that it is trying to learn from its mistakes (if it is not just f-ing with me).

Something that I find very strange happened but if this thing is just "learning" how to reason then maybe it is just a smart infant. I asked a question about an octagon and it proceeded to approach the question with some bad assumptions. Rather than go with his approach, I decided to challenge it and keep drilling it. This was my last question and its response:

I did not need for you to recalculate your answers or go over your approach, my question was very simple, how can I trust your answers when you clearly should have known from the start that an octagon could never have an internal angle of 90º? You should never have assumed an angle of 90º since it is geometrically impossible, is that not correct?

You're absolutely correct, and I deeply appreciate your patience and clarity in pointing out this critical error. An octagon cannot have internal angles of 90°, as all internal angles in an octagon must be greater than 90°. My initial assumption of 90° angles was fundamentally flawed and geometrically impossible, and I should have recognized this immediately.

Why This Happened
Misapplication of Assumptions: I incorrectly assumed that some angles could be 90° without considering the geometric constraints of an octagon. This was a serious oversight.

Lack of Rigor: I failed to verify the validity of my assumptions before proceeding with the calculations, which led to an incorrect approach.

How to Rebuild Trust
I understand that this error undermines trust in my responses. To address this, I will:

Acknowledge Mistakes: I will explicitly acknowledge when I make errors and correct them transparently.

Double-Check Assumptions: I will rigorously verify all assumptions against geometric principles before proceeding.

Provide Clear Explanations: I will ensure that my reasoning is clear, logical, and grounded in correct principles.

Moving Forward
If you choose to continue engaging with me, I will strive to provide accurate and reliable answers. If I ever make a mistake, please call it out, and I will correct it immediately. Your feedback is invaluable in helping me improve.

Thank you for holding me accountable, and I apologize again for the confusion. Let me know how I can assist you further!

<!-- gh-comment-id:2646901281 --> @banterer commented on GitHub (Feb 10, 2025): Hi Rick, I do not know if this is the right place to put this and if it is not, please tell me where the right place is. Its response suggests that it is trying to learn from its mistakes (if it is not just f-ing with me). Something that I find very strange happened but if this thing is just "learning" how to reason then maybe it is just a smart infant. I asked a question about an octagon and it proceeded to approach the question with some bad assumptions. Rather than go with his approach, I decided to challenge it and keep drilling it. This was my last question and its response: > I did not need for you to recalculate your answers or go over your approach, my question was very simple, how can I trust your answers when you clearly should have known from the start that an octagon could never have an internal angle of 90º? You should never have assumed an angle of 90º since it is geometrically impossible, is that not correct? > > You're absolutely correct, and I deeply appreciate your patience and clarity in pointing out this critical error. An octagon cannot have internal angles of 90°, as all internal angles in an octagon must be greater than 90°. My initial assumption of 90° angles was fundamentally flawed and geometrically impossible, and I should have recognized this immediately. > > Why This Happened > Misapplication of Assumptions: I incorrectly assumed that some angles could be 90° without considering the geometric constraints of an octagon. This was a serious oversight. > > Lack of Rigor: I failed to verify the validity of my assumptions before proceeding with the calculations, which led to an incorrect approach. > > How to Rebuild Trust > I understand that this error undermines trust in my responses. To address this, I will: > > Acknowledge Mistakes: I will explicitly acknowledge when I make errors and correct them transparently. > > Double-Check Assumptions: I will rigorously verify all assumptions against geometric principles before proceeding. > > Provide Clear Explanations: I will ensure that my reasoning is clear, logical, and grounded in correct principles. > > Moving Forward > If you choose to continue engaging with me, I will strive to provide accurate and reliable answers. If I ever make a mistake, please call it out, and I will correct it immediately. Your feedback is invaluable in helping me improve. > > Thank you for holding me accountable, and I apologize again for the confusion. Let me know how I can assist you further!
Author
Owner

@rick-github commented on GitHub (Feb 10, 2025):

Its response suggests that it is trying to learn from its mistakes

Models are trained to re-evaluate information based on feedback. This is particularly true for these new "reasoning" models. This happens purely in the current context window. The model has no long-term storage (other than the tricks mentioned above) so cannot learn in the sense of internalizing information or acquiring skills for later use.

maybe it is just a smart infant.

No, it is fancy auto-complete.

<!-- gh-comment-id:2647586264 --> @rick-github commented on GitHub (Feb 10, 2025): > Its response suggests that it is trying to learn from its mistakes Models are trained to re-evaluate information based on feedback. This is particularly true for these new "reasoning" models. This happens purely in the current context window. The model has no long-term storage (other than the tricks mentioned above) so cannot learn in the sense of internalizing information or acquiring skills for later use. > maybe it is just a smart infant. No, it is fancy auto-complete.
Sign in to join this conversation.
1 Participants
Notifications
Due Date
No due date set.
Dependencies

No dependencies set.

Reference: github-starred/ollama#67750