Mirror of https://github.com/ollama/ollama.git (synced 2026-05-07 00:22:43 -05:00)
Closed · opened 2026-04-22 13:38:47 -05:00 by GiteaMirror · 61 comments
Originally created by @UmutAlihan on GitHub (Apr 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10143
I know it has only been a couple of hours since the Llama 4 model family was released. However, I believe it is good practice to ping the repo about when its support will be available on Ollama 😄
Looking forward to running inference with this new long-context, multimodal, mixture-of-experts model family on Ollama.
official release: https://ai.meta.com/blog/llama-4-multimodal-intelligence/
cheers
@AlbertoSinigaglia commented on GitHub (Apr 5, 2025):
I get that models are leaning more and more on mixture of experts, but:
And this ignores the VRAM used for KV caching, which for the 10M context length is going to be giant...
@coder543 commented on GitHub (Apr 5, 2025):
I don't think they said anything about fitting onto a single GPU with 10M context. Very, very few use cases right now are going to involve a 10M context window.
@sasank-desaraju commented on GitHub (Apr 5, 2025):
Work ongoing at #10141
@blinkysc commented on GitHub (Apr 6, 2025):
They say it fits on an H100, so under 80 GB. The AMD 395 with 128 GB and Macs with 128 GB are probably going to be fine.
@JeffTax commented on GitHub (Apr 6, 2025):
Looking forward to this 😄
@sanjibnarzary commented on GitHub (Apr 6, 2025):
I need it to fit on a single V100 16GB GPU.
@lxyeternal commented on GitHub (Apr 6, 2025):
I need the Llama4.
@marcussacana commented on GitHub (Apr 6, 2025):
Is there any hope of this model being pruned?
@AlbertoSinigaglia commented on GitHub (Apr 6, 2025):
Maybe memory-wise, but I'm not sure about inference speed. Also, in my experience a 1M context length usually requires 200 GB of memory for the KV cache... so...
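[Editor's note] For intuition, a rough Python sketch of that estimate follows. The layer and head counts are hypothetical placeholders, not confirmed Llama 4 Scout dimensions, and Llama 4's chunked attention would change the picture further; this is only order-of-magnitude arithmetic.

# Back-of-the-envelope KV-cache sizing: 2 (K and V) x layers x KV heads
# x head dim x bytes per element, per token of context.
# n_layers=48, n_kv_heads=8, head_dim=128 are assumed placeholder values.
def kv_cache_bytes(ctx_len, n_layers=48, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2):  # fp16 cache -> 2 bytes per element
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_len

for ctx in (8_192, 131_072, 1_000_000, 10_000_000):
    print(f"{ctx:>10,} tokens -> ~{kv_cache_bytes(ctx) / 2**30:,.0f} GiB")

With these placeholder dimensions, 1M tokens lands around 183 GiB and 10M around 1.8 TiB, which is consistent with the ~200 GB figure above.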
@puzanov commented on GitHub (Apr 6, 2025):
How heavily will the Llama4-Scout model be quantized?
@jimccadm commented on GitHub (Apr 6, 2025):
I'll be testing it on a 128 GB MacBook Pro with max cores across the board as soon as it lands on the model list.
@jano403 commented on GitHub (Apr 6, 2025):
saAAAAAAAAAAAAAAAAAaaar DO NOT REDEEM
@Jabher commented on GitHub (Apr 6, 2025):
128gb mbp owner here, can't wait to try
@pavankay commented on GitHub (Apr 6, 2025):
Has Llama 4 been released yet?
@pavankay commented on GitHub (Apr 6, 2025):
On Ollama
@jpapenfuss commented on GitHub (Apr 7, 2025):
It's not going to fit in 128 gigabits, no matter how hard it's quantified.
@jimccadm commented on GitHub (Apr 7, 2025):
Agreed. I had a couple of attempts and relaxed rules in LM Studio, no dice, doesn't fit.
@oreaba commented on GitHub (Apr 7, 2025):
Looking forward to it!
@ghmer commented on GitHub (Apr 7, 2025):
It should be noted that the license does not permit usage of llama4 by Europeans. When offering those models, don’t forget to add a big warning message 😣
@croqaz commented on GitHub (Apr 7, 2025):
Why are you asking? Did you pay the devs to release it in a few hours, over the weekend?
@PawelSzpyt commented on GitHub (Apr 7, 2025):
From Meta's use-policy:
"This restriction does not apply to end users of a product or service that incorporates any such multimodal models."
Perhaps you are an end user of a product (like a free product called Ollama) that incorporates a Llama model, in which case you can use it. Not legal advice, though.
@Kwisss commented on GitHub (Apr 8, 2025):
Is it too late to take that bet?
@colout commented on GitHub (Apr 8, 2025):
Quantifying the model won't make it fit in 128 Gigabits.
However, you can quantize the model to make it fit in 128 Gigabytes of memory.
In all seriousness, I have an 8845HS mini PC with 96 GB RAM (dual-channel 5600 MHz) that runs the qwen2.5:14b-instruct-q4_K_M model at a reasonable enough speed for CPU-only inference (about 5-7 tk/s at <8k context). I'd be happy to test once this comes out. In the meantime, I'd love to see a non-bnb q4_K_M in general that I can try, even if it's just with the Python transformers library, to get a baseline while I wait for ollama.
Edit: In case anyone's interested, I got Unsloth's q4_K_M running through bleeding-edge llama.cpp, at around 2.3 tk/s with an empty context window.
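[Editor's note] Since the gigabits/gigabytes and quantify/quantize mixups keep coming up, here is the underlying arithmetic as a small Python sketch. It assumes Scout's roughly 109B total parameters (17B active across 16 experts) and ignores GGUF metadata and per-tensor overhead, so treat the outputs as rough lower bounds; the effective bits-per-weight figures for the llama.cpp quant types are approximations.

# Rough model-file size: parameter count x bits per weight / 8 bits per byte.
N_PARAMS = 109e9  # assumed total parameter count for Llama 4 Scout

def model_gib(bits_per_weight, n_params=N_PARAMS):
    return n_params * bits_per_weight / 8 / 2**30

for name, bits in (("f16", 16), ("q8_0", 8.5), ("q4_K_M", 4.8)):
    print(f"{name:>7}: ~{model_gib(bits):,.0f} GiB")
print(f"for scale, 128 gigabits is only {128 // 8} gigabytes")

The ~61 GiB result at roughly 4.8 bits/weight lines up with the ~65 GB 4-bit upload mentioned later in the thread.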
@igorschlum commented on GitHub (Apr 9, 2025):
I have a 192 GB Mac Studio ready to test llama4 with Ollama and share the results.
@Luap2003 commented on GitHub (Apr 9, 2025):
I have a server with two H100 GPUs, and I'm really interested in testing it, especially since the blog post mentioned it should fit on just one.
@gileneusz commented on GitHub (Apr 9, 2025):
I have a rack with 64 B200s and can't wait to test it soon!
@pakoito commented on GitHub (Apr 9, 2025):
I have a White Citroën 2CV and it contributes as much to this conversation as your posts.
@igorschlum commented on GitHub (Apr 10, 2025):
I had a blue one and an orange buggy. I regret them both.
@dineshkumartp7 commented on GitHub (Apr 10, 2025):
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.05 Driver Version: 560.35.05 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA A100 80GB PCIe Off | 00000000:38:00.0 Off | 0 |
| N/A 36C P0 48W / 300W | 188MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA A100 80GB PCIe Off | 00000000:A8:00.0 Off | 0 |
| N/A 31C P0 44W / 300W | 4MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA A100 80GB PCIe Off | 00000000:B8:00.0 Off | 0 |
| N/A 37C P0 43W / 300W | 4MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1391600 G /usr/libexec/Xorg 108MiB |
| 0 N/A N/A 1391624 G /usr/bin/gnome-shell 17MiB |
| 0 N/A N/A 4043059 C+G missioncenter 30MiB |
+-----------------------------------------------------------------------------------------+
Can't wait to try :)
@AlbertoSinigaglia commented on GitHub (Apr 10, 2025):
This is getting out of hand
@aravhawk commented on GitHub (Apr 10, 2025):
Try out ollama run aravhawk/llama4. I got it on there at 4-bit quant and a 4096-token context window; it's ~65 GB.
Also try experimenting with the context in the MODELFILE; that 128 GB can easily fit it (as long as the GPU can handle it).
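[Editor's note] For anyone following along: the context window in Ollama is set with the num_ctx parameter, either per request or baked into a derived model via a Modelfile. A minimal sketch, using the model name from the comment above and an arbitrary example value of 16384:

# Modelfile (sketch): derive a variant with a larger context window
FROM aravhawk/llama4
PARAMETER num_ctx 16384

Build and run it with ollama create llama4-16k -f Modelfile, then ollama run llama4-16k. Keep the KV-cache arithmetic above in mind: every extra token of context costs real memory.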
@FlippingBinary commented on GitHub (Apr 10, 2025):
Looks like ingu627/llama4-scout-q4 was published 7 hours earlier with exactly the same hash.
@aravhawk commented on GitHub (Apr 11, 2025):
Hey, apologies for the initial duplicate. I've since reuploaded the model. This current version uses the sharded GGUFs from Unsloth, which I configured for Ollama. My goal was just to share a useful setup, not claim original creation.
Additionally, I've added Maverick (aravhawk/llama4:400b) if anyone has the VRAM for it.
@mistrjirka commented on GitHub (Apr 12, 2025):
I updated my ollama through the script to the latest version, but it errors out with: llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'llama4'.
Is the version that supports it not released yet?
@AlbertoSinigaglia commented on GitHub (Apr 12, 2025):
https://ollama.com/search?o=newest: I don't see any llama4 available, to be fair...
@aravhawk commented on GitHub (Apr 12, 2025):
Yes, it seems so. I just tried reinstalling ollama and running ollama run aravhawk/llama4 on a GH200 machine, and received the same error. Looks like Ollama might not support the new arch yet.
@rick-github commented on GitHub (Apr 12, 2025):
llama4 support is in progress: #10141
@mistrjirka commented on GitHub (Apr 12, 2025):
Well, it is. I can see it in the search results: https://ollama.com/search?q=llama4
@ZV-Liu commented on GitHub (Apr 14, 2025):
https://github.com/ollama/ollama/pull/10141 How long will it take to support llama4? I have tested and recompiled ollama on this branch. It can support llama4, but I can only run inference on the CPU backend?
@batot1 commented on GitHub (Apr 14, 2025):
Error: unable to load model:
Any idea what is wrong?
All other models in ollama work properly; only this model is not working.
@rick-github commented on GitHub (Apr 14, 2025):
https://github.com/ollama/ollama/issues/10143#issuecomment-2798941503
@aravhawk commented on GitHub (Apr 16, 2025):
Architectural issues, unfortunately 😔
@aravhawk commented on GitHub (Apr 16, 2025):
I think you can recompile llama.cpp with CUDA support, but don't quote me on it.
@lee-b commented on GitHub (Apr 21, 2025):
Llama 4 (even Scout) is a great model. Very fast, and much more useful answers than most of the previous models I've tried. I'm running it on llama.cpp at the moment, though, which lacks vision support. It would be great if ollama implemented this with vision.
@ips972 commented on GitHub (Apr 24, 2025):
Hi, any update on when ollama will support llama4? f16 or q1-q8, etc., with all functions: chat, vision, and the huge max token size?
@Notbici commented on GitHub (Apr 25, 2025):
Any workarounds for getting Llama 4 working on Ollama?
@mistrjirka commented on GitHub (Apr 25, 2025):
You can compile the llama4 branch yourself; it is currently an open pull request, and it seems to be in the code-review phase with other contributors.
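[Editor's note] A minimal sketch of doing that, assuming a recent Go toolchain and a C compiler are installed. Note this produces a CPU-only binary (consistent with the CPU-only results reported above); GPU backends need the extra steps in ollama's docs/development.md.

git clone https://github.com/ollama/ollama.git
cd ollama
# fetch the in-progress llama4 PR (#10141) into a local branch
git fetch origin pull/10141/head:llama4
git checkout llama4
# CPU-only build; see docs/development.md for CUDA/ROCm builds
go build .
./ollama serve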
@rsmirnov90 commented on GitHub (Apr 26, 2025):
I think it just disappeared... Or at least I don't see it in the branch list anymore (and I know it was there because I was checking it almost daily up until now).
@igorschlum commented on GitHub (Apr 26, 2025):
@rsmirnov90 there is a new version of Ollama that supports Llama4. It may still evolve, but it's there and you can try it.
https://github.com/ollama/ollama/releases/tag/v0.6.7-rc0
@ips972 commented on GitHub (Apr 27, 2025):
Tried the new ollama with llama4; it works fine, but it still lacks the performance of vLLM. I hope that someday ollama gets to that performance level. It's a much easier platform to manage than any other, especially in multi-user sessions.
@thorewi commented on GitHub (Apr 29, 2025):
Hello, is this implementation really multimodal with image processing (or maybe I'm using the wrong model)? I'm getting a negative answer (see attached picture)... I'm using ollama 0.6.7-rc0 and tried these models: https://ollama.com/ingu627/llama4-scout-q4 and https://ollama.com/aravhawk/llama4. Thank you for your help.
@igorschlum commented on GitHub (Apr 29, 2025):
@thorewi I think that Llama4 can process an image in the sense of describing it, as llama3.3 can, but llama4 cannot modify an image.
@thorewi commented on GitHub (Apr 29, 2025):
@igorschlum Yes, that's exactly what I need, but I always get something like this: [attached picture] — basically no response. So the question is whether it’s working for anyone or not...
@rick-github commented on GitHub (Apr 29, 2025):
aravhawk/llama4 doesn't support images:
The lack of response may be due to something else. ollama server logs may aid in debugging.
@rick-github commented on GitHub (May 1, 2025):
https://ollama.com/library/llama4
@aravhawk commented on GitHub (May 1, 2025):
*The models I've uploaded are not multimodal. I've updated the description to reflect that (I originally copied it directly from Meta).
@thepwagner commented on GitHub (May 3, 2025):
Is there a bug in the template currently?
On v0.6.7, using llama4:17b-scout-16e-instruct-q4_K_M (b62dea0de67c). When calling with tools, I'm getting:
Using the with .Tools on L3 makes me think the range on L6 should just be over ., but I can't find the source anywhere to submit a PR.
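[Editor's note] For context, the error quoted in the following comments is consistent with that reading: inside a Go template's with block the dot is rebound, so referring to .Tools again fails. A hypothetical sketch of the suspected bug and fix, not the actual shipped template:

{{- with .Tools }}
{{- /* here "." has been rebound to the tools value (api.Tools), so
      writing "range .Tools" fails with exactly the reported error;
      the fix is to range over the rebound dot instead */}}
{{- range . }}
...
{{- end }}
{{- end }}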
@olumolu commented on GitHub (May 4, 2025):
Close this, as support is already merged.
@addypy commented on GitHub (May 5, 2025):
I'm getting the same issue @thepwagner mentioned. Tested with ollama (docker) with both v0.6.8 and v0.6.7.
ModelHTTPError: status_code: 500, model_name: llama4, body: {'message': 'template: :6:10: executing "" at <.Tools>: can't evaluate field Tools in type api.Tools', 'type': 'api_error', 'param': None, 'code': None}
Any updates on this?
@sherlock666 commented on GitHub (May 5, 2025):
Same issue as @thepwagner and @addypy.
I would like to work with the tools function. For the same code, llama3.2 can use the tools function correctly, while llama4:scout returns the same error:
... template: :6:10: executing "" at <.Tools>: can't evaluate field Tools in type api.Tools ...
@mxyng commented on GitHub (May 5, 2025):
Tool support is fixed for llama 4. Please re-pull the model.
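[Editor's note] Worth noting for anyone hitting this later: in Ollama the chat template is stored as a layer of the model itself, not in the binary, so the fix arrives by re-pulling rather than upgrading:

ollama pull llama4:scout   # or whichever llama4 tag you're running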