Mirror of https://github.com/ollama/ollama.git · synced 2026-05-07 00:22:43 -05:00
Open · opened 2026-05-03 09:37:27 -05:00 by GiteaMirror · 58 comments
Reference: github-starred/ollama#62587
Originally created by @eng-alameedi on GitHub (Nov 12, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1102
Hello there:
@walterjwhite commented on GitHub (Jan 7, 2024):
I tried briefly:
git clone https://github.com/jmorganca/ollama.git
cd ollama
go generate ./...
go build .
Note that I have gcc13, cmake, and go installed on FreeBSD 14.
Can you clarify what you tried? Perhaps an additional dependency is required.
@SoloBSD commented on GitHub (Apr 17, 2024):
@walterjwhite
I used:
export CGO_ENABLED="0"
and then
go build .
And got:
github.com/ollama/ollama/llm
llm/payload.go:143:24: undefined: libEmbed
llm/payload.go:163:17: undefined: libEmbed
llm/server.go:59:28: undefined: gpu.CheckVRAM
llm/server.go:60:14: undefined: gpu.GetGPUInfo
I don't have a GPU so I think we need some modifier to skip GPU on build.
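The undefined symbols above come from Go files that are excluded by build constraints when CGO is disabled or the platform is unsupported, so nothing defines libEmbed or the gpu functions. A minimal sketch of the usual workaround (the names mirror the errors above but are illustrative, not ollama's actual API): a stub that lets the build succeed and forces the CPU path at runtime.

```go
package main

import (
	"errors"
	"fmt"
)

// checkVRAM stands in for the undefined gpu.CheckVRAM from the errors
// above. On a platform with no GPU backend compiled in, a stub like this
// lets the build succeed and forces the CPU code path at runtime.
func checkVRAM() (int64, error) {
	return 0, errors.New("no GPU backend compiled for this platform")
}

func main() {
	if _, err := checkVRAM(); err != nil {
		fmt.Println("falling back to CPU:", err)
	}
}
```

In the real tree such a stub would live in its own file guarded by a `//go:build` constraint covering the otherwise-unsupported platforms.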
@walterjwhite commented on GitHub (Apr 29, 2024):
Ok, I believe I had this running on my old system (hard drive died) by simply doing:
download https://ollama.ai/download/ollama-linux-$_ARCHITECTURE and move it to $PATH
chmod +x ollama
When I try to run it now, I end up with 'Abort trap'. I have the linux compatibility kernel module loaded.
I should also note that while I had it running on FreeBSD, my hardware is a bit dated, so performance was abysmal.
@danielrpfeiffer commented on GitHub (May 9, 2024):
The strongest machines we have run FreeBSD. It would be great to have ollama working natively, even in CPU-only mode.
@yurivict commented on GitHub (May 15, 2024):
0.1.38 still has this problem.
@yjqg6666 commented on GitHub (May 21, 2024):
You may need
brandelf -t Linux ollama
@kraileth commented on GitHub (May 27, 2024):
In case you missed it: there's a PR, #4172, which is meant to add support for the four main BSDs. It makes ollama buildable on FreeBSD natively (without requiring the Linuxulator). After running the application for a couple of days, I can say that it works really well. I hope it gets merged. Then the next step would obviously be to create a port in the FPC.
@eng-alameedi @walterjwhite @SoloBSD @danielrpfeiffer @yurivict @yjqg6666
@yurivict commented on GitHub (May 27, 2024):
@kraileth
I am getting this failure with 0.1.39 + #4172:
go-1.22 is used.
@kraileth commented on GitHub (May 27, 2024):
@yurivict Looks like the additional files introduced by the PR may not be present on your system?
Just redoing it in a fresh jail to document what I was doing:
# pkg install -y git go122 cmake vulkan-headers vulkan-loader
# git clone https://github.com/prep/ollama.git
# cd ollama && git checkout feature/add-bsd-support
# go122 generate ./...
# go122 build .
Works fine for me, no problems encountered.
@yurivict commented on GitHub (May 27, 2024):
In order for us to use this PR in the FreeBSD port it should be merged first, because you clone from another account: https://github.com/prep/ollama
Any idea when is it going to be merged?
@kraileth commented on GitHub (May 27, 2024):
@yurivict So now it works for you, too? We could pick the changes from the PR and patch the ollama source downstream in ports. It would definitely be preferable to have the PR merged, though. That would benefit the other BSDs as well.
Unfortunately with just short of 180 open PRs, I assume it may take the small team that manages the project a moment to get to it. But maybe we'll be lucky?
@yurivict commented on GitHub (May 27, 2024):
It worked for me when I used your instructions.
However, in order to have a working port we need this PR to be merged into this account.
It failed for me when I tried to add patches from the PR into the last ollama release.
@SoloBSD commented on GitHub (May 28, 2024):
I asked on the Discord if they could prioritize the merge of this PR.
It has been forwarded to proper developers.
Let's hope it gets merged soon.
@walterjwhite commented on GitHub (May 28, 2024):
Works for me too, thanks.
@kraileth commented on GitHub (May 28, 2024):
Seems like we were having quite a bit of bad luck: the version that works well is from Star Wars day (May 4th), but on the very next day #4144 introduced changes that broke the build on FreeBSD. I have no idea what exactly, though.
@rmszc81 commented on GitHub (Jul 23, 2024):
Hello guys,
any news in this topic?
It'd be great to use ollama on FreeBSD by having it officially in the ports tree.
@xorander00 commented on GitHub (Aug 5, 2024):
I've managed to update the patch for v0.3.3. However, I feel like something isn't right. The output executable that was built on my FreeBSD 14.1-STABLE system is only 24 MB (35 MB unstripped), whereas the GitHub-released Linux executable is 559 MB.
Is there data that's supposed to be embedded into the executable? If so, is it optional or required?
Mind you, I'm completely new to Ollama. I know nothing about it and the reason I'm building it is so that I can play around with it on my daily driver FreeBSD desktop.
@yurivict commented on GitHub (Aug 5, 2024):
@xorander00
Are you able to submit the patch as a pull request for this repository?
@kraileth commented on GitHub (Aug 5, 2024):
@xorander00 The much smaller size is normal, I guess. The self-built Linux binary that I currently run in a Linuxulator jail is 38 MB. I'd also be interested in you sharing your patch.
@xorander00 commented on GitHub (Aug 5, 2024):
@yurivict @kraileth Weird, it wouldn't let me attach the patch to this post. Copying+pasting it here for now...
...and then here are the actual commands to build...
@yurivict commented on GitHub (Aug 6, 2024):
The build fails for me with version 0.3.3 + the above patch:
Was anything forgotten?
@yurivict commented on GitHub (Aug 6, 2024):
app/store/store_linux.go needs to be copied to app/store/store_bsd.go
@xorander00 commented on GitHub (Aug 6, 2024):
Hmm, strange. I didn't have to do that to get it to successfully compile. Looking at app/store/ though, the build would fail if it tried to compile that package. Did you happen to use any build tags during your build? I'm wondering why it didn't fail on mine.
Either way, it should probably be patched. Copying store_linux.go to store_bsd.go will solve the error, but you'll want to modify line 11 to return "/usr/local/etc/ollama/config.json" instead of "/etc/ollama/config.json". I think store_unix.go would technically cover both platforms (Linux, FreeBSD) too, so you could just wrap that line in a conditional that checks GOOS and returns the platform-specific path or a default path.
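The GOOS-conditional approach suggested here could look roughly like this. A sketch only, not the actual store_unix.go; the paths are the ones proposed in the comment above.

```go
package main

import (
	"fmt"
	"runtime"
)

// configPath sketches the suggested GOOS check: FreeBSD keeps its config
// under /usr/local/etc, while other unix-likes use /etc. The paths are the
// ones proposed above, not necessarily what ollama ships.
func configPath() string {
	if runtime.GOOS == "freebsd" {
		return "/usr/local/etc/ollama/config.json"
	}
	return "/etc/ollama/config.json"
}

func main() {
	fmt.Println(configPath())
}
```

A single store_unix.go like this would cover both platforms without needing a copied store_bsd.go.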
@yurivict commented on GitHub (Aug 6, 2024):
I added the FreeBSD port for ollama: https://cgit.freebsd.org/ports/tree/misc/ollama
Thank you, @xorander00, for your patch.
@xorander00 commented on GitHub (Aug 6, 2024):
Where should I message you? I have something like 400+ internal ports I've made over the last couple of years that I've been meaning to upstream. Just haven't had the time, and they're messy. Can work with you to get that going.
@yurivict commented on GitHub (Aug 6, 2024):
yuri at FreeBSD
@yurivict commented on GitHub (Aug 6, 2024):
@xorander00
The port builds but fails to run inference for some reason.
The client fails:
The server has this:
Do you know what might be wrong?
@xorander00 commented on GitHub (Aug 6, 2024):
@yurivict
My first guess, off the top of my head, is that there's no actual model bundled with the executable. I'm guessing that's probably why the Linux release is 559 MB while the FreeBSD source-built executable is less than 40 MB.
Will either have to download model(s) or look at patching the source to embed models into the executable. That is of course if that's what is actually happening here. Will look here in a bit and see what I find.
@xorander00 commented on GitHub (Aug 6, 2024):
Oh, and if it's not a model issue, then my second guess is that it's a hardware acceleration issue. It seemed that CPU support was being reworked or dropped if there was no GPU fallback, though I could still very well be wrong.
@xorander00 commented on GitHub (Aug 6, 2024):
@yurivict
See https://github.com/ggerganov/llama.cpp/issues/7386
@kraileth commented on GitHub (Aug 6, 2024):
There seems to be slightly more wrong with the port so far. By chance I just mailed Yuri before I saw that there are more replies here. One of the things that has changed since the initial OpenBSD patch earlier this year is that some CMake vars were renamed. Therefore we see this when building:
These should be replaced by GGML_*. That leads to a build failure due to a missing required Vulkan component, which can be satisfied by graphics/shaderc. However, I still couldn't build it due to a missing symbol, pthread_create. I'll have to stop at this point for today but wanted to share these bits in case they help somebody else. It would be awesome if we could get ollama working properly on FreeBSD.
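For many options the upstream llama.cpp rename was a pure prefix swap from LLAMA_* to GGML_* (though a few options were renamed outright). A hedged sketch of the mechanical part, with illustrative flag names:

```go
package main

import (
	"fmt"
	"strings"
)

// renameCMakeFlag applies the LLAMA_* -> GGML_* prefix rename described
// above. The specific flags below are illustrative examples; the rename
// was not a pure prefix swap for every option.
func renameCMakeFlag(flag string) string {
	if strings.HasPrefix(flag, "-DLLAMA_") {
		return "-DGGML_" + strings.TrimPrefix(flag, "-DLLAMA_")
	}
	return flag
}

func main() {
	for _, f := range []string{"-DLLAMA_VULKAN=on", "-DCMAKE_BUILD_TYPE=Release"} {
		fmt.Println(renameCMakeFlag(f))
	}
}
```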
@xorander00 commented on GitHub (Aug 6, 2024):
@kraileth
Looking at it right now in fact, so your comment is helpful. Thanks!
I'll see if I can update the patch to add the renamed CMake variables. I'm also searching for go build tags to see what else might be relevant for FreeBSD. I just noticed now something about the gpu package that might need to be patched, but not sure yet.
@xorander00 commented on GitHub (Aug 6, 2024):
@yurivict
I have some other stuff I have to get back to for the time being, and you may be faster at this piece anyway. If it's trying to build llama.cpp, then we should figure out how to skip it and use the system (port). The pthread_create issue is with building llama.cpp and I haven't yet been able to find where it's setting the linker path for pthreads. Shouldn't have to do any of that though if it's able to rely on the system package instead.
@xorander00 commented on GitHub (Aug 6, 2024):
@yurivict
Here's a snapshot of my WIP patch:
freebsd.txt
Don't be surprised if the build still fails. I added freebsd to gpu/gpu.go as a build tag, which could very well cause the build to fail.
@yurivict commented on GitHub (Aug 6, 2024):
It builds the patched llama.cpp though.
The patches have to be upstreamed first to use the llama-cpp package.
@abdielsudiro commented on GitHub (Aug 8, 2024):
Awesome, many thanks. This works for me too.
@xorander00 commented on GitHub (Aug 8, 2024):
@yurivict Not sure if you saw it, but in llm/generate/gen_bsd.sh the CMake variable prefixes need to be changed from LLAMA_* to GGML_*. I haven't had a chance to resume working on an updated patch, but my tree has those changes. I'll generate a patch later when I get a chance, to use as a diff reference for what might be worth integrating.
@yurivict commented on GitHub (Aug 8, 2024):
@xorander00
I believe that libllvm.so from llama-cpp is actually used, so renaming LLAMA_* to GGML_* shouldn't matter in this case.
@yurivict commented on GitHub (Aug 8, 2024):
What are these files?
@yurivict commented on GitHub (Aug 8, 2024):
I asked the ollama upstream: https://github.com/ollama/ollama/issues/6259
@yurivict commented on GitHub (Aug 8, 2024):
The latest revision of the misc/ollama port has inference working.
Please update your ports tree and rebuild.
The working package name will be ollama-0.3.4_2
If you find any problems, please report them to me either through e-mail (yuri at FreeBSD) or through the FreeBSD Bugzilla.
I think that this issue can be closed now.
@yurivict commented on GitHub (Aug 9, 2024):
To be precise, inference works on CPU.
I am working on enabling Vulkan.
@yurivict commented on GitHub (Aug 9, 2024):
Vulkan now works.
Please test the port.
@yjqg6666 commented on GitHub (Sep 12, 2024):
How about updating to the recent version v0.3.10?
@yurivict commented on GitHub (Sep 12, 2024):
I tried to update it, but the extensive patch needs extensive changes and I couldn't make it work yet.
@tingox commented on GitHub (Nov 8, 2024):
I've installed ollama from a package
on FreeBSD 13.4
I start the server like this
but quite often, the client aborts while trying to start and load the model (I've tried a few models)
on the next try it works
any other info I can provide to help debug this?
@yurivict commented on GitHub (Feb 28, 2025):
I maintain the FreeBSD port, which is currently at version 0.3.6.
I am receiving e-mails from users almost every week asking why the port isn't updated.
Therefore I have these questions:
@aleksander-haugas commented on GitHub (Mar 6, 2025):
Works fine on 14.1-RELEASE with CPU only and mistral, super fast! For some reason it is forced to use the Vulkan libs, though...
@walterjwhite commented on GitHub (Mar 7, 2025):
Yes, the above steps do work for me. It must be go122 and not go123. My CPU is not performant at all. I did try mistral, as I previously used llama3.2.
I'm on an i5-3470 with 16 GB of RAM, ancient stuff with no GPU acceleration. I asked a basic question and gave up waiting. For comparison, I asked the same question on an older computer, but with an Nvidia RTX 2060 (still old) and dual Xeon processors with 96 GB of RAM, and got a response relatively quickly.
The above steps work and I can vouch for that, but my only question is, what hardware are you running on to get good performance with CPU alone?
@yurivict commented on GitHub (Mar 16, 2025):
@aleksander-haugas
The feature/add-bsd-support branch works, but it targets a version that is a year old. It hasn't been updated since May 5th, 2024.
@aleksander-haugas commented on GitHub (Mar 16, 2025):
When I try the pkg version, it doesn't load all the models I want; something happens with tensors and ollama. But compiling and building it yourself works, and it's easy, too.
@yurivict commented on GitHub (Apr 13, 2025):
https://github.com/ollama/ollama/pull/10254
@hckiang commented on GitHub (Jan 25, 2026):
The patch doesn't work anymore with the new refactored discover/. The GpuInfo struct has disappeared and now there's … in discover/gpu_info_darwin.m. I don't know where these are called...
@yurivict commented on GitHub (Jan 25, 2026):
The upstream refused to merge this patch and it isn't updated any more.
Please use the package ollama-0.13.5_1 that is currently available, or the port misc/ollama.
@yurivict commented on GitHub (Jan 25, 2026):
This issue can be closed now.
The FreeBSD patches are in the ollama port.
Someone from the upstream replied to my Discord post last month and said that they are afraid these patches would be dead code since there is no CI.
I will try to create a CI job.
This issue can be closed now to avoid confusion since it is outdated and will not be updated.
@hckiang commented on GitHub (Jan 27, 2026):
Thanks for the replies. I reckon they want CI etc. and it was rejected. The port misc/ollama works but doesn't seem to use the GPU, even though libvulkan.so etc. are present and llama.cpp works well (enough) with GPU+Vulkan.
Does your Github fork support GPU on FreeBSD?
@yurivict commented on GitHub (Jan 27, 2026):
I didn't test Vulkan with the ollama port.
I know that Vulkan support in ollama was added ~Oct-Nov 2025, and it used to (or still does) require some option to enable.
I will try it and will get back to you.
@spmzt commented on GitHub (Mar 7, 2026):
Thank you for your work. Any updates?