Mirror of https://github.com/ollama/ollama.git
[GH-ISSUE #8771] Mac: Deepseek R1, Ollama and Docker installed but need help getting WebUI to work #31454
Closed · opened 2026-04-22 11:54:21 -05:00 by GiteaMirror · 17 comments
Originally created by @banterer on GitHub (Feb 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8771
Hello,
I'm not a coder so please be gentle.
I have a MBP w/64gb ram and Apple Silicon M3 Max. I installed Deepseek R1, Ollama and Docker. Deepseek is running in a terminal window and seems to be actually thinking and answering questions, but I would like to run it through a graphical interface. After following instructions from several different sites on how to run it with WebUI, it doesn't matter what I paste (http://localhost:3000/, http://localhost:8080/ and a few others), I am greeted with the same answer:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
I opened another terminal window and tried another approach but since I am not a coder I do not know if I am doing more harm than good.
I kept on following instructions and eventually got this:
[Open WebUI ASCII art banner]
v0.5.7 - building the best open-source AI user interface.
https://github.com/open-webui/open-webui
Fetching 30 files: 100%|██████████| 30/30 [00:50<00:00, 1.69s/it]
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
docker ps
time="2025-02-01T18:19:30-06:00" level=error msg="error waiting for container: unexpected EOF"
Js-MacBook-Pro-2:~ mbp161219$ docker ps
Cannot connect to the Docker daemon at unix:///Users/mbp161219/.docker/run/docker.sock. Is the docker daemon running?
Js-MacBook-Pro-2:~ mbp161219$
In a third window I got up to this:
In Docker this is as far as I've been able to get:
What I need are instructions on what I have to do to get Deepseek running with a GUI.
Thanks. I hope I gave enough information.
Jorge
@banterer commented on GitHub (Feb 2, 2025):
Follow up:
This is what Deepseek tells me to do but I hesitate doing anything more in terminal for fear of irreparably damaging something:
1. Verify Docker Installation
2. Install Required Dependencies
/var/lib/docker/daemons/
3. Configure Docker for API Access
4. Set Up Environment Variables
Update the .env file with your DeepSeek API credentials if required. For example: [command omitted]
5. Start Docker Containers
Run with --keep-alive enabled: [command omitted]
6. Access the Web Interface
Open http://localhost:8080.
7. Troubleshooting
Ensure DEEPSEEK_API_KEY or NODE_ENV are correctly set.
8. Alternative Setup
By following these steps, you should be able to access DeepSeek through the OpenWebUI web interface.
@hamkido commented on GitHub (Feb 2, 2025):
Do you mean you are currently running the deepseek model locally using ollama and want to use openwebui as the chat UI?
Or do you want to use the cloud deepseek model with openwebui?
Please correct me if I've understood you wrong.
@banterer commented on GitHub (Feb 2, 2025):
Correct, I have it installed locally and want to use it locally using a GUI.
@hamkido commented on GitHub (Feb 2, 2025):
You should check your ollama port configuration; ollama's default port is 11434. Or you can configure the port mapping yourself with docker.
Try curl ip:port. If it returns "Ollama is running", put that address into openwebui. I use host.docker.internal:11434.
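A concrete version of that check (assuming the default port; host.docker.internal is Docker Desktop's name for the host machine):

```bash
# From the Mac itself: a healthy Ollama server replies "Ollama is running"
curl http://localhost:11434

# From inside a Docker Desktop container, the host is reached as
# host.docker.internal instead of localhost
curl http://host.docker.internal:11434
```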
@ionmeo commented on GitHub (Feb 2, 2025):
I also had issues installing open-webui with Docker. However, using pip worked for me. Run the install command, then run the serve command (a sketch of both follows below).
Open-WebUI will be at http://localhost:8080
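The commands themselves were dropped from this comment in the mirror; based on Open WebUI's documented pip workflow (and the install command @banterer quotes later in the thread), they were most likely:

```bash
# Install Open WebUI from PyPI
pip install open-webui

# Start the server; it listens on http://localhost:8080 by default
open-webui serve
```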
If you don't have pip installed, here are the instructions.
If you don't have python installed, you can download from here
@banterer commented on GitHub (Feb 3, 2025):
Sorry, I am not a coder so I am unable to follow any of that. Before I had to admit how little I know about this, I tried using Brave's AI, Leo and it only gave me more information I could not follow:
Where would I find the docker commands for ollama port configuration? If I can change the port in Docker, please try to hand-feed me instructions on how to do this since the first time I even heard of Docker was yesterday.
My apologies since I know it probably sounds as though I should not even be attempting this if I cannot understand any of the jargon but since I don't know anyone who would have been able to walk me through this, I had to attempt doing it myself even though I had no idea how to do it.
Your attempt at helping is much appreciated.
Jorge
@banterer commented on GitHub (Feb 3, 2025):
Hello,
As I said before, I am not a programmer, so I had to look up what "pip" was and saw that it is part of Python, which is yet another thing I do not know. So the thing is, even if I were to install Python, I still would not know how to "Run" anything. If it is as easy as installing Python and then pip and double-clicking on Python and typing "pip install open-webui", then I too can try it, but if I have to be compiling/executing and doing all sorts of programming-like maneuvers, I will have to pass since I will be in over my head.
Please advise,
Jorge
@ionmeo commented on GitHub (Feb 3, 2025):
After installing python, by 'run' I meant that you paste the command into the terminal and hit Enter. So, to install 'pip', paste the bootstrap command into the terminal and hit Enter, then run the follow-up command (see the sketch below).
After these two, run the two commands I mentioned before. If you have any other questions or face any issues, feel free to mention them.
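The exact commands were lost in the mirror; a typical pip bootstrap on macOS, using only the Python standard library, looks like this:

```bash
# Confirm Python 3 is on the PATH
python3 --version

# Bootstrap (or upgrade) pip via the standard-library ensurepip module
python3 -m ensurepip --upgrade
```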
@rick-github commented on GitHub (Feb 3, 2025):
If you're not a programmer, it's probably best to get it running in docker rather than installing pip and dependencies.
Because you are running open-webui in a docker container and ollama is running natively on your system, there's a config change you have to make to ollama: you need to set OLLAMA_HOST=0.0.0.0 to allow open-webui to connect.
If you have ollama running (check with ollama -v), then you just need to get open-webui running and configured. The docker command in your post starts an open-webui server, but the port mapping is incorrect: from the output in your post, open-webui is listening on port 8080, so that is the port you want to publish. Run something like the sketch below:
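The original commands were dropped in the mirror; this is a sketch consistent with the description, assuming Ollama's documented macOS host setting and Open WebUI's published image name:

```bash
# Make the native Ollama server listen on all interfaces so the
# container can reach it; restart the Ollama app afterwards
launchctl setenv OLLAMA_HOST "0.0.0.0"

# Run open-webui, publishing its internal port 8080 on the host,
# with authentication disabled for single-user convenience
docker run -d -p 8080:8080 -e WEBUI_AUTH=false \
  --name open-webui ghcr.io/open-webui/open-webui:main
```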
The WEBUI_AUTH variable turns off authentication, which is more convenient if it's just you using the interface. If you expect multiple users, remove -e WEBUI_AUTH=false.
Now you should be able to connect to http://localhost:8080 and see the open-webui UI. Next you need to configure open-webui to connect to ollama. Click on the "User" icon, then "Settings", then "Admin Settings", then "Connections". Because ollama is not in a container, you need to specify the IP address of your host, so in the text box under "Ollama API", enter "http://<IP address of your machine>:11434". Click "Verify connection" on the far right, and hopefully open-webui will show a "Server connection verified" notification.
Note that running open-webui this way results in no state being saved: if the container is restarted, you will lose the configuration, any saved conversations, model tweaks, etc. You can add a volume to the container to save state:
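That command was also lost; plausibly it is the same run command with a named volume mapped to Open WebUI's documented data directory:

```bash
# As above, plus a named volume for /app/backend/data so settings
# and chat history survive container restarts
docker run -d -p 8080:8080 -e WEBUI_AUTH=false \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```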
@banterer commented on GitHub (Feb 3, 2025):
Hi rick-github,
Thanks for your comprehensive reply. It sounds like you've gone through everything I wrote and thus it sounds as though I can actually get this to work so before I do, I'd like to ask you and the others if what I am doing is advisable.
I try to be a privacy nut: I do not use social media and stopped using gmail years ago. I just downloaded and installed 42gb of unknown but powerful software that seems to be able to reason and do things that to me seem quite miraculous and unreasonable, and it can do all these things ON MY COMPUTER. I downloaded it so that my questions and queries would not go to the "cloud" and I could ask questions on- and offline at my convenience. But is it possible, and more than likely probable, that this is just another "free" software/service from the likes of evildoers, meant to get us to install these programs and do much worse than send my queries to the cloud: now that it has access to ALL OF MY DATA, passwords and everything else, to simply and slowly send EVERYTHING I have and every keystroke I make to regions unknown and people who do not have my best interest at heart?
I realize that no one could have possibly gone through all 42gb of this code but I'm just spitballin' here.
Should someone who is anti-government and anti almost everything public and google/apple/microsoft... install this software on their personal computer?
Thanks,
Jorge
@rick-github commented on GitHub (Feb 3, 2025):
It depends on whether you can trust random people on the internet (ie, me) who give you advice. I've been in the industry a long time, worked for some of the companies that you are anti, and am careful about my online presence. I have been through the software and I am confident that there is nothing that will send data to the TLAs.
Trust me, bro.
That's not to say that it's impossible. But if a TLA can insert a backdoor so subtle that none of the people who read the code recognize it, or intercept a download over an encrypted channel and insert code to compromise your system, then it's too late to worry about it - they have everything they want already. As for the model itself, it's just a collection of numbers interpreted by the software - whether it's made in a foreign country or not, the likelihood of a threat vector from a model is extremely remote. (Again, not impossible, but if somebody has found a way to influence a model running in an inference engine in such a way as to compromise your data, well, all bets are off).
If you are concerned about data exfiltration, you should make sure your systems are behind firewalls that control both inbound and outbound traffic.
If you want to read something that does nothing at all to allay the fears of computer compromise, read this. The creator of the hack described in that web page is one of the authors of the Go language, which is what ollama is written in.
@banterer commented on GitHub (Feb 3, 2025):
Got it, thanks and "I thought as much".
Where can I go to find out how to beef up my mac os "firewall settings"? I did not see any options for how to "control outbound" traffic, only inbound connections.
@rick-github commented on GitHub (Feb 3, 2025):
I'm afraid you have me at a disadvantage there; I'm a linux guy (because I can read the source code to my OS, which Mac and Windows users can't do). Apple has an article on firewall settings here.
@banterer commented on GitHub (Feb 4, 2025):
No worries.
I have another question. I asked the same question to my Deepseek-R1 and also to https://chat.deepseek.com/, and I got completely different answers. Mine did not give me a direct answer but more of an opinion, and suggested I take specific measurements, while the online version really paid attention to my question and gave me actual numbers based on a "typical kitchen". Basically, it took a look at the typical kitchen, took an average, and based its answers on that, while mine decided it did not have enough data to give me an answer and chose instead to give me an answer meant to help me keep looking and refining my question.
What I want to know is this: do these programs take feedback from you regarding their answers and build (learn) on that for future questions, and is that why the internet version is so much more direct? I realize that mine is slow because I do not have a server farm powering this thing and I have other programs running, but is it also because my version was just "born" yesterday and needs to get feedback from me?
@rick-github commented on GitHub (Feb 4, 2025):
I don't know the specifics of the Deepseek online service, but generally, no, models don't "learn". They maintain a context for the conversation you are currently engaged in, but when you start a new chat, previous chats are not part of that new conversation. If you have one long extended chat with deepseek then eventually you will reach the limit of the context window and the model will start to "forget" older parts of the conversation.
There are tricks that can be done to make a model appear to have long term memory. For example, rather than just removing older parts of the conversation, they might be replaced by a summary. Some detail will be lost but the essence of the conversation is retained. Another approach is to move the older parts to external storage, eg a RAG system. I don't know if Deepseek employs these methods.
For your local model, you can simulate something like persistent memory in open-webui with the "Memory" feature. In "User" > "Settings" > "Personalization" you can add pertinent information that is maintained across chats. Note that this will consume part of the context window, so if you put a lot of stuff in there, you will need to increase the size of the context window to accommodate the facts and the conversation.
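For reference, the local context window can be raised per request; a sketch using Ollama's documented num_ctx option (the model name and the 8192 value are just examples):

```bash
# Request a larger context window for a single generation
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:70b",
  "prompt": "Why is the sky blue?",
  "options": { "num_ctx": 8192 }
}'
```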
I think the reason for the differences in the responses between the online and local versions of deepseek is that they are not the same model. In your original post, an ollama list showed deepseek-r1:70b. If that's the local model you asked about the kitchen, it's not the same model as the online version: deepseek-r1:70b is a distilled version of llama, i.e. a version of llama that has been "taught" to reason like deepseek. The local equivalent of the online deepseek model is deepseek-r1:671b, which is very large and unwieldy for consumer-grade hardware.
@banterer commented on GitHub (Feb 10, 2025):
Hi Rick,
I do not know if this is the right place to put this and if it is not, please tell me where the right place is.
Its response suggests that it is trying to learn from its mistakes (if it is not just f-ing with me).
Something very strange happened, but if this thing is just "learning" how to reason, then maybe it is just a smart infant. I asked a question about an octagon and it proceeded to approach the question with some bad assumptions. Rather than go with its approach, I decided to challenge it and keep drilling it. This was my last question and its response:
@rick-github commented on GitHub (Feb 10, 2025):
Models are trained to re-evaluate information based on feedback. This is particularly true for these new "reasoning" models. This happens purely in the current context window. The model has no long-term storage (other than the tricks mentioned above) so cannot learn in the sense of internalizing information or acquiring skills for later use.
No, it is fancy auto-complete.