Mirror of https://github.com/ollama/ollama.git (synced 2026-05-06 16:11:34 -05:00)
Closed · opened 2026-04-29 01:22:09 -05:00 by GiteaMirror · 75 comments
Originally created by @ultramarinebicycle on GitHub (Mar 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9791
Originally assigned to: @jmorganca on GitHub.
What is the issue?
Earlier (0.6.0), I could run Gemma 3 12b q4 at around 20-25 tokens per second. Now it stays somewhere between 10-16 tokens per second.
Not only that, but I was also able to use 8k context length without any issues. Now doing that crashes my computer, so I have to use it at default 4k.
Computer specs:
Relevant log output
OS
Windows
GPU
Nvidia
CPU
AMD
Ollama version
0.6.1
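For reference, the token rate and the larger context described above can be checked directly from the CLI; a minimal sketch, assuming the stock --verbose flag and the 8k value mentioned in the description (the model tag is taken from the report):

ollama run gemma3:12b --verbose    # prints prompt/eval rates in tokens/s after each reply
>>> /set parameter num_ctx 8192    # raises the context window for this interactive session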
@jmorganca commented on GitHub (Mar 16, 2025):
Hi so sorry about this. What does ‘ollama ps’ show for you?
@jmorganca commented on GitHub (Mar 16, 2025):
And would it be possible to share your logs? Sorry again about the crash
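For anyone collecting the same information, a rough sketch of where the logs usually live (paths assume a default install):

# Windows: %LOCALAPPDATA%\Ollama\server.log and app.log (older runs rotate to server-1.log, app-1.log)
# Linux (systemd service):
journalctl -u ollama -f
# for more detail, start the server with debug logging before reproducing:
OLLAMA_DEBUG=1 ollama serve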
@ultramarinebicycle commented on GitHub (Mar 16, 2025):
Hey @jmorganca
This^ is my recent run with 4k context.
I'm attaching the app and server logs for the time it crashed.
server-1.log
app-1.log
@jmorganca commented on GitHub (Mar 16, 2025):
Thanks so much
@jmorganca commented on GitHub (Mar 16, 2025):
I don't see a crash in the logs you sent. Do you have one for the 8k case where Ollama crashes? Thanks so much.
@ultramarinebicycle commented on GitHub (Mar 16, 2025):
No longer crashing for whatever reason (maybe some background program interfered with it the last time). Now my PC just becomes unresponsive; can move the cursor around but nothing else. You want me to provide logs for that?
@daihouzi commented on GitHub (Mar 16, 2025):
I encountered the same issue; the memory usage was normal with the 0.6.0 version, but after updating to the 0.6.1 version, the memory usage would double with just a little conversation or image transfer. With 3.7G of parameters, the memory usage would increase from an initial 4G to over 10G.
@lpdink commented on GitHub (Mar 16, 2025):
Same issue here on a 4070ti (12GB). Gemma3 12B occupies 8.1GB of disk space, but after loading into memory/VRAM, it surprisingly takes up 12GB (just loading, without calling inference). Is this expected?
@JamesInform commented on GitHub (Mar 16, 2025):
Hi All!
This is my first post here, so first of all thanks for your great work on Ollama.
Using Ollama 0.6.1.
Even worse on MacBook M2 Max, 64 GB RAM.
When running gemma3:27b the overall memory consumption rises from 15.5GB to 49.8GB.
So even more than "ollama ps" reports.
The differences in RAM consumption in the following screenshot are introduced just by doing "ollama run".
No other actions have been made:
There is no such issue with other models.
Hope that helps!
Cheers,
James
@raymondtri commented on GitHub (Mar 16, 2025):
I have a Discord thread running about this. Even after the latest update, Gemma usage is all messed up.
I've got a 5070 Ti with 14.9 GB of available VRAM, and running the 12b_q6_k_l Gemma overflows onto my system RAM like nobody's business.
@smerschjohann commented on GitHub (Mar 16, 2025):
gemma3 does not work on my system either. The first communication iteration works, then this (using 10GB VRAM, 48GB RAM):
Mär 16 16:01:09 fedora ollama[2664]: [GIN] 2025/03/16 - 16:01:09 | 200 | 1m58s | 127.0.0.1 | POST "/api/chat"
Mär 16 16:01:14 fedora ollama[2664]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 7457.67 MiB on device 0: cudaMalloc failed: out of memory
Mär 16 16:01:14 fedora ollama[2664]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 7819937792
Mär 16 16:01:14 fedora ollama[2664]: SIGSEGV: segmentation violation
Mär 16 16:01:14 fedora ollama[2664]: PC=0x56509735c1d0 m=213 sigcode=1 addr=0x58
Mär 16 16:01:14 fedora ollama[2664]: signal arrived during cgo execution
Mär 16 16:01:14 fedora ollama[2664]: goroutine 8 gp=0xc00048d180 m=213 mp=0xc003080808 [syscall]:
Mär 16 16:01:14 fedora ollama[2664]: runtime.cgocall(0x5650973b01e0, 0xc00612db00)
Mär 16 16:01:14 fedora ollama[2664]: runtime/cgocall.go:167 +0x4b fp=0xc00612dad8 sp=0xc00612daa0 pc=0x56509657c60b
Mär 16 16:01:14 fedora ollama[2664]: github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x7f8bb800a4f0, 0x7f8c0432a720)
Mär 16 16:01:14 fedora ollama[2664]: _cgo_gotypes.go:485 +0x4a fp=0xc00612db00 sp=0xc00612dad8 pc=0x5650969678ca
simon@fedora:~$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
gemma3:12b 6fd036cefda5 12 GB 24%/76% CPU/GPU 46 seconds from now
@illnesse commented on GitHub (Mar 16, 2025):
4b-27b, all crash for me (with openwebui v0.5.20)
Ran it in debug to see what's up, hope this helps:
ollama_gemma3_4b.log
@rick-github commented on GitHub (Mar 16, 2025):
A commonality of the crashes is the model loading successfully, answering a query or two, and then crashing because ggml_backend_sched_graph_compute_async() wants to allocate an unrealistically large buffer, 7G in the example from @smerschjohann. For Windows users with recent Nvidia drivers, that ends up in unified memory, causing the RAM blowout shown by @raymondtri. For Linux users without GGML_CUDA_ENABLE_UNIFIED_MEMORY that's an instant OOM.
Other examples:
I haven't been able to trigger this on my own systems yet, so there's perhaps some feature of the affected systems contributing to this.
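A rough way to watch whether an allocation like that spills out of VRAM while the model is answering (assumes a Linux box with the NVIDIA utilities installed):

watch -n 1 nvidia-smi   # per-process GPU memory
ollama ps               # Ollama's view of the loaded model and the CPU/GPU split
free -h                 # system RAM, to catch the unified-memory blowout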
@rick-github commented on GitHub (Mar 16, 2025):
@illnesse Thanks for the log, unfortunately it doesn't contain a crash.
@wills106 commented on GitHub (Mar 16, 2025):
gemma3:4b & gemma3:12b keep crashing with
level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 2"
I have an RTX3060 12GB and a GTX1650 4GB; the models only ever sit in the RTX, they never seem to try to use the GTX1650 for extra VRAM.
(Edit: wrong log...)
Not sure if that actually indicates what's crashed though?
@rick-github commented on GitHub (Mar 16, 2025):
@wills106 You appear to have re-uploaded @illnesse's log.
@wills106 commented on GitHub (Mar 16, 2025):
😅 Try again...
ollama.log
@rick-github commented on GitHub (Mar 16, 2025):
This looks slightly different.
So the failure was an ASSERT instead of an OOM, but it still happened in ggml_backend_sched_graph_compute_async(). It may be different because the API was called with a context field, so the usual tokenization that occurs for API calls didn't take place, leading to a different code path that didn't need memory allocation but still failed when computing the graph.
@bioshazard commented on GitHub (Mar 16, 2025):
I don't run into a crash, but attempting to load Gemma3 27b on my 3090 (24G VRAM) causes my system to lock up with insanely high iowait. Will share logs if I can get to them next time I try it. But did want to log that I am running into it on 0.6.1 with serve via OpenWebUI.
@rick-github commented on GitHub (Mar 16, 2025):
@bioshazard Windows or Linux?
@smerschjohann commented on GitHub (Mar 16, 2025):
@rick-github if I can help to pinpoint it in some way, let me know. This happens even with ollama run, so no "third-party settings" involved.
A random chat:
So it most likely has something to do with context length, but I would have enough memory free.
I can free up all other VRAM usage if that helps for debugging.
@wills106 commented on GitHub (Mar 16, 2025):
Do you want me to try the 12b and see if I get the same or different error?
I seem to be able to chat with either Gemma3 version ok, but I get the crashes when I use the GenerativeAI in Frigate
@bioshazard commented on GitHub (Mar 16, 2025):
Kubuntu 24.04 booted from a USB 3.0 SSD. I really should try to get y'all some useful logs or other debug info... I will see about spending some more time on this today. Tag me again if you have any specific tests you want me to run or docker image tags to try on my system. I have it running in an nvidia-enabled docker compose successfully for months against many other models, so hopefully my system is a good example failure case.
@rick-github commented on GitHub (Mar 16, 2025):
@bioshazard If you add some logs, could you also include your docker config?
@bioshazard commented on GitHub (Mar 16, 2025):
Here is the docker config, at least for now; just slightly tweaked from the Coolify default. I confirmed in-terminal that -V shows 0.6.1.
@wills106 commented on GitHub (Mar 16, 2025):
Just ran nvidia-smi again and noticed that the RAM usage on the RTX3060 is higher than what the processes section is indicating.
When it crashes, is it not fully clearing the RAM? Could be a different issue altogether?
@rick-github commented on GitHub (Mar 16, 2025):
@smerschjohann I'm curious about how you limited your 3080 to 10G. I have a 3080 in the lab and I'd like to duplicate the environment to see if I can trigger the failure.
@smerschjohann commented on GitHub (Mar 16, 2025):
@rick-github would be nice if I had simply limited my GPU to 10 GB, but no. I'm afraid the early RTX 3080 only had 10 GB :(
@ALLMI78 commented on GitHub (Mar 16, 2025):
How can you guys load gemma-3 with only 12 GB VRAM usage? https://github.com/ollama/ollama/issues/9791#issuecomment-2727276666
my issue
@smerschjohann commented on GitHub (Mar 16, 2025):
With the env var GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 it stabilizes at 9600 - 9800 MiB without getting slower. So at least that is a good thing ;)
@smerschjohann commented on GitHub (Mar 16, 2025):
Yeah it does not fit completely in GPU, but here are my stats with the environment variable set:
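On a Linux install managed by systemd, a minimal sketch of setting that variable for the service (the service name assumes the standard install script):

sudo systemctl edit ollama
# in the override file, add:
#   [Service]
#   Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"
sudo systemctl restart ollama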
@ALLMI78 commented on GitHub (Mar 16, 2025):
12 GB with 8k context length?
can one test it with 32k pls?
OLLAMA_CONTEXT_LENGTH=32768
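For reference, a sketch of how that context length is typically applied (values as requested above; the per-session route is an alternative to the server-wide variable):

OLLAMA_CONTEXT_LENGTH=32768 ollama serve   # server-wide default context window
ollama run gemma3:27b
>>> /set parameter num_ctx 32768           # per-session alternative inside ollama run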
@smerschjohann commented on GitHub (Mar 16, 2025):
This does not work for me (but I only have 10GB VRAM).
@rick-github commented on GitHub (Mar 16, 2025):
@bioshazard commented on GitHub (Mar 16, 2025):
My errors seem related to booting off a USB SSD. I get nasty FIFO errors when I attempt to load up Gemma 27B in ollama 0.6.1 in docker. Qwen32B R1 distill worked fine, but I haven't found any useful logs yet. So count me out of troubleshooting for now. Sry yall
@jamon commented on GitHub (Mar 16, 2025):
ollama.service
➜ ~ nvidia-smi
➜ ~ ollama ps
it crashes with the context set to 32k... it'll run with it set to 6k or less...
at 6144 context, it's 100% GPU, 16,890MiB VRAM use and doesn't crash
@smerschjohann commented on GitHub (Mar 16, 2025):
there is nothing wrong with the settings, ollama's behavior is wrong as it should (and normally does) support CPU offloading just fine.
@smerschjohann commented on GitHub (Mar 16, 2025):
Calm down, they are trying to investigate here. What do you expect? This is open-source and free software; instead of ranting here, you can help. Also, there are issues on Windows and Linux, so I'm not sure what you mean with your Windows comment.
With 8K it works on Linux with GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 enabled.
@rick-github commented on GitHub (Mar 16, 2025):
It is selecting the correct backend. The problem (in this issue) is that the backend is making unusually large allocations.
@rick-github commented on GitHub (Mar 16, 2025):
https://github.com/ollama/ollama/issues/9791#issuecomment-2727513844
@bjj commented on GitHub (Mar 16, 2025):
I'm also observing that gemma3:27b q4_k_m is allocating the right amount of space on the GPU, but also allocating a ton of system memory (enough to OOM in my case, but I do have more VRAM than RAM on this system). The same exact configuration runs qwen2.5:32b q4_k_m just fine.
Logs of loading gemma3
@rick-github commented on GitHub (Mar 16, 2025):
Yes, I found the same - allocation of system RAM is much greater for the gemma3 models even when the model is fully hosted in VRAM.
@wills106 commented on GitHub (Mar 17, 2025):
I have tried gemma3:12b this time with the following settings:
Seems to fail at a slightly different area now:
ollama2.log
Do you want me to raise this as a separate issue?
@stimata-debug commented on GitHub (Mar 18, 2025):
Error Report
Version: 0.6.1
System Configuration: 70+ GB VRAM
Models: 27b, 12b, 4b
Issue Description:
On any model, if the first message of a chat contains an image, I encounter a segmentation fault (SIGSEGV) immediately when the message is received. Otherwise, after about 10 messages on 27b with an image in the chat history, the model gets a similar crash.
@hlinden commented on GitHub (Mar 18, 2025):
Version: 0.6.1
System Configuration: 16GB VRAM, 96GB RAM
Model: gemma3:27b
Error:
allocating 20513.56 MiB on device 0: cudaMalloc failed: out of memory
Log is attached.
ollama-0.6.1_gemma3-27b_oom_error.log
@konrad0101 commented on GitHub (Mar 18, 2025):
I'm trying out Ollama 0.6.2-rc0 and there is a substantial drop in quality on vision OCR tasks compared to 0.6.1 (though no longer getting OOM errors). The results went from very good to unusable (with lots of repeating text in the response). Using gemma3:27b-it-q8_0 on Ubuntu 24.04, RTX 3090.
@rick-github commented on GitHub (Mar 18, 2025):
RSS is down with 0.6.2.
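For anyone wanting to compare, a rough way to read the resident set size of the runner subprocess across versions (the process name matches the server command line shown later in this thread):

pgrep -af "ollama runner"                                # locate the runner subprocess
ps -o pid,rss,cmd -p "$(pgrep -d, -f 'ollama runner')"   # RSS is reported in KiB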
@bjj commented on GitHub (Mar 18, 2025):
With ollama:0.6.2-rocm I can also load gemma3:27b without OOM. It does use about 6G more main memory (while the model is fully offloaded to VRAM) than qwen2.5:32b, but it is usable.
@konrad0101 I also see the q4_k_m performing poorly at vision tasks, including repeating image elements. However, a lot of that goes away with better parameters:
Even then, the performance does not match an FP8 (not q8, I haven't downloaded that) quant.
Example description of a Rust radial build menu, default parameters, q4_k_m
Here's a description of the icons around the circular menu, starting at the top and going clockwise, based on the image:
It appears the majority of the icons are variations of wall pieces with windows.
...and q4_k_m with suggested parameters
Here's a description of the icons, starting at the cursor position and proceeding clockwise:
These icons seem to be options for building or construction, possibly within a game or creative environment.
@wills106 commented on GitHub (Mar 18, 2025):
I tried ollama 0.6.2 earlier. With Gemma3:12b and the above settings it was consuming 16GB of RAM, but split between the RTX3060 and CPU. It was very, very slow.
I then tried the same settings but with Gemma3:4b, which seemed fine at first, using about 6.7GB of VRAM.
But I came back to my server and it was hardly responding.
Turns out it was using over 31GB of system RAM.

Even though ollama ps is still showing 6.7GB.

I'll try and limit the docker container to 16GB and see how that behaves.
At least it's a step in the right direction, as it's not fully crashed...
Edit:
I tried to get into the logs, but the server was so unresponsive that all I could do was restart the container.
@JamesInform commented on GitHub (Mar 18, 2025):
Just out of curiosity, and maybe a dumb question:
Why are so many users reporting the same issue in different threads while the maintainers are not able to spot the bug, although it seems that on almost every hardware setup, including Apple Silicon with unified memory, the issue is reproducible immediately?
@ultramarinebicycle commented on GitHub (Mar 19, 2025):
Update using 0.6.2:
T/s is still not fixed. VRAM is not being saturated and instead RAM is being used. Is this a model issue or an ollama issue?
@OSULZER commented on GitHub (Mar 19, 2025):
can confirm, issue still persists
@nhnzman commented on GitHub (Mar 19, 2025):
Why does Ollama keep logging warnings and restarting?
@NandaIda commented on GitHub (Mar 19, 2025):
I encountered persistent OOM errors when using the Gemma3:27b model with 2x RTX 3060 12GB + 1x GTX 1060 3GB, specifically when attempting to upload images. The model loads successfully and responds to text-only prompts, but image uploads trigger an out-of-memory crash.
I suspect this is related to the mmproj component (used for multi-modal processing), as I've observed that Ollama's engine loads it differently compared to the llama.cpp implementation. Notably, the same configuration works flawlessly in llama.cpp, suggesting a potential discrepancy in how Ollama handles GPU memory allocation for mmproj.
Commands:
Working llama.cpp command (no OOM):
Failing Ollama engine command (OOM on image uploads):
Differences:
Ollama uses --n-gpu-layers 62, while llama.cpp uses 63.
ctx-size in Ollama is set to 2048, whereas llama.cpp uses 16392 (a much larger context size).
The mmproj path is explicitly provided in llama.cpp, but not in the Ollama command (though it may be implicitly loaded).
Could this issue come from how mmproj is managed in Ollama's GPU memory allocation?
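Purely as an illustration of the differences listed above, the two invocations would look roughly like this; the binary name, model paths, and mmproj filename below are hypothetical, while the flags on the Ollama side match the runner command line shown in the debug log later in this thread:

# llama.cpp side (mmproj passed explicitly, much larger context):
./llama-gemma3-cli -m gemma-3-27b-it-Q4_K_M.gguf --mmproj mmproj-model-f16.gguf \
    --n-gpu-layers 63 --ctx-size 16392

# Ollama runner side (mmproj loaded implicitly from the model blobs):
ollama runner --model /root/.ollama/models/blobs/sha256-... \
    --n-gpu-layers 62 --ctx-size 2048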
@NikhilM42 commented on GitHub (Mar 19, 2025):
This looks to be an ollama issue. I ran an update from 0.6 to 0.6.1 and all of a sudden I was hit with "connection forcibly closed by remote host" errors. It looks to be a memory access issue, based on my logs and the logs of this fellow here:
#9816
@NikhilM42 commented on GitHub (Mar 19, 2025):
Nevermind, it looks like I just had an outdated AMD driver 🤦🏽
@rick-github commented on GitHub (Mar 20, 2025):
Bisected the commits between 0.6.0 and 0.6.1 and the token generation rate falls 25% at a422ba39c9.
EDIT: ignore this, I re-ran the test to compare with 0.6.3-rc0 and didn't see the same drop, so the experimental config was flawed.
@alsimms commented on GitHub (Mar 20, 2025):
I can confirm this issue as well. I have attached a full debug log which may provide some answers. I can run gemma2 27B and Qwen 32B on this setup but Gemma3-12b-it_K_M crashes and so does Gemma3-27b-it_K_M. I will try to replicate the crash on 12B and submit another debug.
Using ollama 0.6.2.
Here is part of the issue.
##############################################Part1
time=2025-03-20T04:48:16.682Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:16.817Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
gemma3-27b-q4_K_M_debug.txt
time=2025-03-20T04:48:16.965Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:16.965Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.123572171 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:16.965Z level=DEBUG source=sched.go:385 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:16.965Z level=DEBUG source=sched.go:303 msg="unload completed" modelPath=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:16.965Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:17.103Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:17.240Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:17.240Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.39818804 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:17.240Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:17.377Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:17.526Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:17.526Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.683948942 model=/root/.ollama/models/blobs/sha256-adca500fad9b54c565ae672184e0c9eb690eb6014ba63f8ec13849d4f73a32d3
time=2025-03-20T04:48:17.526Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:17.661Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:17.806Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:17.942Z level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541
time=2025-03-20T04:48:17.943Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.8 GiB 9.6 GiB]"
time=2025-03-20T04:48:17.947Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.8 GiB 9.6 GiB]"
time=2025-03-20T04:48:17.950Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="31.3 GiB" before.free="16.4 GiB" before.free_swap="0 B" now.total="31.3 GiB" now.free="16.4 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x7f3ac06a8bc0
dlsym: cuDriverGetVersion - 0x7f3ac06a8be0
dlsym: cuDeviceGetCount - 0x7f3ac06a8c20
dlsym: cuDeviceGet - 0x7f3ac06a8c00
dlsym: cuDeviceGetAttribute - 0x7f3ac06a8d00
dlsym: cuDeviceGetUuid - 0x7f3ac06a8c60
dlsym: cuDeviceGetName - 0x7f3ac06a8c40
dlsym: cuCtxCreate_v3 - 0x7f3ac06a8ee0
dlsym: cuMemGetInfo_v2 - 0x7f3ac06b2e20
dlsym: cuCtxDestroy - 0x7f3ac070d850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 2
time=2025-03-20T04:48:18.100Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.6 GiB" now.total="9.9 GiB" now.free="9.6 GiB" now.used="358.1 MiB"
time=2025-03-20T04:48:18.237Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce name="NVIDIA P102-100" overhead="0 B" before.total="9.9 GiB" before.free="9.8 GiB" now.total="9.9 GiB" now.free="9.8 GiB" now.used="128.1 MiB"
releasing cuda driver library
time=2025-03-20T04:48:18.237Z level=INFO source=server.go:105 msg="system memory" total="31.3 GiB" free="16.4 GiB" free_swap="0 B"
time=2025-03-20T04:48:18.237Z level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=2 available="[9.6 GiB 9.8 GiB]"
time=2025-03-20T04:48:18.241Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=99 layers.model=63 layers.offload=52 layers.split=22,30 memory.available="[9.6 GiB 9.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.8 GiB" memory.required.partial="19.1 GiB" memory.required.kv="992.0 MiB" memory.required.allocations="[9.5 GiB 9.6 GiB]" memory.weights.total="14.3 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.6 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-03-20T04:48:18.241Z level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
time=2025-03-20T04:48:18.396Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]|\s[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-20T04:48:18.401Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-03-20T04:48:18.401Z level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[ ]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-03-20T04:48:18.407Z level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
time=2025-03-20T04:48:18.407Z level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]|\s[\r\n]+|\s+(?!\S)|\s+"
time=2025-03-20T04:48:18.412Z level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[ ]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
time=2025-03-20T04:48:18.418Z level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-03-20T04:48:18.418Z level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:335 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
time=2025-03-20T04:48:18.418Z level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-afa0ea2ef463c87a1eebb9af070e76a353107493b5d9a62e5e66f65a65409541 --ctx-size 2048 --batch-size 512 --n-gpu-layers 99 --verbose --threads 8 --no-mmap --parallel 1 --tensor-split 22,30 --port 34497"
time=2025-03-20T04:48:18.418Z level=DEBUG source=server.go:423 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama/cuda_v12:/usr/lib/ollama CUDA_VISIBLE_DEVICES=GPU-d3f7e561-1589-9a35-1a75-2c70a83a628a,GPU-a60f0ac8-28f1-89b7-ce12-3f12db15acce]"
time=2025-03-20T04:48:18.440Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-20T04:48:18.440Z level=INFO source=server.go:580 msg="waiting for llama runner to start responding"
time=2025-03-20T04:48:18.441Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server error"
time=2025-03-20T04:48:18.710Z level=INFO source=runner.go:763 msg="starting ollama engine"
time=2025-03-20T04:48:18.711Z level=INFO source=runner.go:823 msg="Server listening on 127.0.0.1:34497"
time=2025-03-20T04:48:18.858Z level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-03-20T04:48:18.858Z level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-03-20T04:48:18.858Z level=INFO source=ggml.go:67 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=36
time=2025-03-20T04:48:18.859Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
time=2025-03-20T04:48:18.943Z level=INFO source=server.go:614 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA P102-100, compute capability 6.1, VMM: yes
Device 1: NVIDIA P102-100, compute capability 6.1, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib
time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:93 msg="skipping path which is not part of ollama" path=/usr/local/nvidia/lib64
time=2025-03-20T04:48:19.549Z level=DEBUG source=ggml.go:99 msg="ggml backend load all from path" path=/usr/lib/ollama
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-icelake.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-skylakex.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-haswell.so score: 0
ggml_backend_load_best: /usr/lib/ollama/libggml-cpu-sandybridge.so score: 20
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sandybridge.so
time=2025-03-20T04:48:19.721Z level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=mm.mm_input_projection.weight shape="[5376 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=mm.mm_soft_emb_norm.weight shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=output_norm.weight shape=[5376] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.721Z level=DEBUG source=ggml.go:220 msg="created tensor" name=token_embd.weight shape="[5376 262144]" dtype=14 buffer_type=CPU
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=output.weight shape="[5376 262144]" dtype=14 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_k.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_k.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_output.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_output.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_q.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_q.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_v.bias shape=[1152] dtype=0 buffer_type=CUDA1
time=2025-03-20T04:48:19.722Z level=DEBUG source=ggml.go:220 msg="created tensor" name=v.blk.0.attn_v.weight shape="[1152 1152]" dtype=1 buffer_type=CUDA1
##########################################################Part2
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 10180.80 MiB on device 1: cudaMalloc failed: out of memory
SIGSEGV: segmentation violation
PC=0x5642219fee1d m=8 sigcode=1 addr=0x60
signal arrived during cgo execution
goroutine 10 gp=0xc000582700 m=8 mp=0xc000600008 [syscall]:
runtime.cgocall(0x564221a518d0, 0xc000047268)
runtime/cgocall.go:167 +0x4b fp=0xc000047240 sp=0xc000047208 pc=0x564220c1d96b
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_buffer_set_usage(0x0, 0x1)
_cgo_gotypes.go:249 +0x45 fp=0xc000047268 sp=0xc000047240 pc=0x564221016565
github.com/ollama/ollama/ml/backend/ggml.New.func12(...)
github.com/ollama/ollama/ml/backend/ggml/ggml.go:284
github.com/ollama/ollama/ml/backend/ggml.New(0xc0001360e0, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0})
github.com/ollama/ollama/ml/backend/ggml/ggml.go:284 +0x18cb fp=0xc000047d58 sp=0xc000047268 pc=0x56422101cb4b
github.com/ollama/ollama/ml.NewBackend(0xc0001360e0, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0})
github.com/ollama/ollama/ml/backend.go:91 +0x9c fp=0xc000047da8 sp=0xc000047d58 pc=0x564221010a3c
github.com/ollama/ollama/model.New({0x7ffe04599c7b?, 0x0?}, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0})
github.com/ollama/ollama/model/model.go:104 +0xfb fp=0xc000047ee0 sp=0xc000047da8 pc=0x56422104a67b
github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc0005c57a0, {0x7ffe04599c7b, 0x62}, {0x8, 0x0, 0x63, {0xc000478758, 0x2, 0x2}, 0x0}, ...)
github.com/ollama/ollama/runner/ollamarunner/runner.go:689 +0x95 fp=0xc000047f40 sp=0xc000047ee0 pc=0x5642210d2c15
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap1()
github.com/ollama/ollama/runner/ollamarunner/runner.go:793 +0x91 fp=0xc000047fe0 sp=0xc000047f40 pc=0x5642210d40d1
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000047fe8 sp=0xc000047fe0 pc=0x564220c283a1
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
github.com/ollama/ollama/runner/ollamarunner/runner.go:793 +0x9c5
goroutine 1 gp=0xc000002380 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc0005cf648 sp=0xc0005cf628 pc=0x564220c20c6e
runtime.netpollblock(0xc0005cf698?, 0x20bba426?, 0x42?)
runtime/netpoll.go:575 +0xf7 fp=0xc0005cf680 sp=0xc0005cf648 pc=0x564220be5a57
internal/poll.runtime_pollWait(0x7f5479521eb0, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc0005cf6a0 sp=0xc0005cf680 pc=0x564220c1fe85
internal/poll.(*pollDesc).wait(0xc000133c80?, 0x900000036?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005cf6c8 sp=0xc0005cf6a0 pc=0x564220ca7307
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000133c80)
internal/poll/fd_unix.go:620 +0x295 fp=0xc0005cf770 sp=0xc0005cf6c8 pc=0x564220cac6d5
net.(*netFD).accept(0xc000133c80)
net/fd_unix.go:172 +0x29 fp=0xc0005cf828 sp=0xc0005cf770 pc=0x564220d1f4e9
net.(*TCPListener).accept(0xc000142880)
net/tcpsock_posix.go:159 +0x1b fp=0xc0005cf878 sp=0xc0005cf828 pc=0x564220d34e9b
net.(*TCPListener).Accept(0xc000142880)
net/tcpsock.go:380 +0x30 fp=0xc0005cf8a8 sp=0xc0005cf878 pc=0x564220d33d50
net/http.(*onceCloseListener).Accept(0xc0004b81b0?)
:1 +0x24 fp=0xc0005cf8c0 sp=0xc0005cf8a8 pc=0x564220f4b384
net/http.(*Server).Serve(0xc0001f1500, {0x564221efad58, 0xc000142880})
net/http/server.go:3424 +0x30c fp=0xc0005cf9f0 sp=0xc0005cf8c0 pc=0x564220f22c4c
github.com/ollama/ollama/runner/ollamarunner.Execute({0xc000034190, 0x12, 0x13})
github.com/ollama/ollama/runner/ollamarunner/runner.go:824 +0xe29 fp=0xc0005cfd08 sp=0xc0005cf9f0 pc=0x5642210d3d49
github.com/ollama/ollama/runner.Execute({0xc000034170?, 0x0?, 0x0?})
github.com/ollama/ollama/runner/runner.go:20 +0xc9 fp=0xc0005cfd30 sp=0xc0005cfd08 pc=0x5642210d49a9
github.com/ollama/ollama/cmd.NewCLI.func2(0xc0001f1200?, {0x564221a6d053?, 0x4?, 0x564221a6d057?})
github.com/ollama/ollama/cmd/cmd.go:1327 +0x45 fp=0xc0005cfd58 sp=0xc0005cfd30 pc=0x564221822625
github.com/spf13/cobra.(*Command).execute(0xc0004baf08, {0xc000495180, 0x13, 0x14})
github.com/spf13/cobra@v1.7.0/command.go:940 +0x85c fp=0xc0005cfe78 sp=0xc0005cfd58 pc=0x564220d98b3c
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004a6908)
github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc0005cff30 sp=0xc0005cfe78 pc=0x564220d99385
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/cobra@v1.7.0/command.go:985
main.main()
github.com/ollama/ollama/main.go:12 +0x4d fp=0xc0005cff50 sp=0xc0005cff30 pc=0x56422182298d
runtime.main()
runtime/proc.go:283 +0x29d fp=0xc0005cffe0 sp=0xc0005cff50 pc=0x564220bed05d
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0005cffe8 sp=0xc0005cffe0 pc=0x564220c283a1
goroutine 2 gp=0xc000002e00 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000070fa8 sp=0xc000070f88 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.forcegchelper()
runtime/proc.go:348 +0xb8 fp=0xc000070fe0 sp=0xc000070fa8 pc=0x564220bed398
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000070fe8 sp=0xc000070fe0 pc=0x564220c283a1
created by runtime.init.7 in goroutine 1
runtime/proc.go:336 +0x1a
goroutine 3 gp=0xc000003340 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000071780 sp=0xc000071760 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.bgsweep(0xc000040080)
runtime/mgcsweep.go:316 +0xdf fp=0xc0000717c8 sp=0xc000071780 pc=0x564220bd7a5f
runtime.gcenable.gowrap1()
runtime/mgc.go:204 +0x25 fp=0xc0000717e0 sp=0xc0000717c8 pc=0x564220bcbe45
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000717e8 sp=0xc0000717e0 pc=0x564220c283a1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0x66
goroutine 4 gp=0xc000003500 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x564221c24118?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000071f78 sp=0xc000071f58 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.(*scavengerState).park(0x564222762b20)
runtime/mgcscavenge.go:425 +0x49 fp=0xc000071fa8 sp=0xc000071f78 pc=0x564220bd54a9
runtime.bgscavenge(0xc000040080)
runtime/mgcscavenge.go:658 +0x59 fp=0xc000071fc8 sp=0xc000071fa8 pc=0x564220bd5a39
runtime.gcenable.gowrap2()
runtime/mgc.go:205 +0x25 fp=0xc000071fe0 sp=0xc000071fc8 pc=0x564220bcbde5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x564220c283a1
created by runtime.gcenable in goroutine 1
runtime/mgc.go:205 +0xa5
goroutine 5 gp=0xc000003dc0 m=nil [finalizer wait]:
runtime.gopark(0x1b8?, 0xc000002380?, 0x1?, 0x23?, 0xc000070688?)
runtime/proc.go:435 +0xce fp=0xc000070630 sp=0xc000070610 pc=0x564220c20c6e
runtime.runfinq()
runtime/mfinal.go:196 +0x107 fp=0xc0000707e0 sp=0xc000070630 pc=0x564220bcae07
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000707e8 sp=0xc0000707e0 pc=0x564220c283a1
created by runtime.createfing in goroutine 1
runtime/mfinal.go:166 +0x3d
goroutine 6 gp=0xc0001d08c0 m=nil [chan receive]:
runtime.gopark(0xc00022b540?, 0xc00011e018?, 0x60?, 0x27?, 0x564220d06228?)
runtime/proc.go:435 +0xce fp=0xc000072718 sp=0xc0000726f8 pc=0x564220c20c6e
runtime.chanrecv(0xc00003e3f0, 0x0, 0x1)
runtime/chan.go:664 +0x445 fp=0xc000072790 sp=0xc000072718 pc=0x564220bbd005
runtime.chanrecv1(0x0?, 0x0?)
runtime/chan.go:506 +0x12 fp=0xc0000727b8 sp=0xc000072790 pc=0x564220bbcb92
runtime.unique_runtime_registerUniqueMapCleanup.func2(...)
runtime/mgc.go:1796
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
runtime/mgc.go:1799 +0x2f fp=0xc0000727e0 sp=0xc0000727b8 pc=0x564220bcefef
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000727e8 sp=0xc0000727e0 pc=0x564220c283a1
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
runtime/mgc.go:1794 +0x85
goroutine 7 gp=0xc0001d1340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000072f38 sp=0xc000072f18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc000072fc8 sp=0xc000072f38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc000072fe0 sp=0xc000072fc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000072fe8 sp=0xc000072fe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 18 gp=0xc000504000 m=nil [GC worker (idle)]:
runtime.gopark(0x564222811280?, 0x1?, 0x64?, 0x1b?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00006c738 sp=0xc00006c718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00006c7c8 sp=0xc00006c738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00006c7e0 sp=0xc00006c7c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 34 gp=0xc000102380 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda0d37?, 0x3?, 0xf4?, 0x3d?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00011a738 sp=0xc00011a718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00011a7c8 sp=0xc00011a738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00011a7e0 sp=0xc00011a7c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00011a7e8 sp=0xc00011a7e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 8 gp=0xc0001d1500 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda1cb6?, 0x3?, 0x50?, 0x39?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000073738 sp=0xc000073718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc0000737c8 sp=0xc000073738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc0000737e0 sp=0xc0000737c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000737e8 sp=0xc0000737e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 19 gp=0xc0005041c0 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda0a73?, 0x3?, 0xb5?, 0x68?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00006cf38 sp=0xc00006cf18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00006cfc8 sp=0xc00006cf38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00006cfe0 sp=0xc00006cfc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00006cfe8 sp=0xc00006cfe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 35 gp=0xc000102540 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda055e?, 0x3?, 0xd?, 0xbc?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00011af38 sp=0xc00011af18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00011afc8 sp=0xc00011af38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00011afe0 sp=0xc00011afc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00011afe8 sp=0xc00011afe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 9 gp=0xc0001d16c0 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda0f0e?, 0x3?, 0x70?, 0x30?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc000073f38 sp=0xc000073f18 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc000073fc8 sp=0xc000073f38 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc000073fe0 sp=0xc000073fc8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000073fe8 sp=0xc000073fe0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 20 gp=0xc000504380 m=nil [GC worker (idle)]:
runtime.gopark(0x3987fda05ec?, 0x3?, 0x1c?, 0x50?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00006d738 sp=0xc00006d718 pc=0x564220c20c6e
runtime.gcBgMarkWorker(0xc00003f9d0)
runtime/mgc.go:1423 +0xe9 fp=0xc00006d7c8 sp=0xc00006d738 pc=0x564220bce309
runtime.gcBgMarkStartWorkers.gowrap1()
runtime/mgc.go:1339 +0x25 fp=0xc00006d7e0 sp=0xc00006d7c8 pc=0x564220bce1e5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00006d7e8 sp=0xc00006d7e0 pc=0x564220c283a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1339 +0x105
goroutine 11 gp=0xc0005828c0 m=nil [sync.WaitGroup.Wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0xc0?, 0x0?)
runtime/proc.go:435 +0xce fp=0xc00011d6d0 sp=0xc00011d6b0 pc=0x564220c20c6e
runtime.goparkunlock(...)
runtime/proc.go:441
runtime.semacquire1(0xc0005c57a8, 0x0, 0x1, 0x0, 0x18)
runtime/sema.go:188 +0x229 fp=0xc00011d738 sp=0xc00011d6d0 pc=0x564220c00629
sync.runtime_SemacquireWaitGroup(0x0?)
runtime/sema.go:110 +0x25 fp=0xc00011d770 sp=0xc00011d738 pc=0x564220c22685
sync.(*WaitGroup).Wait(0x0?)
sync/waitgroup.go:118 +0x48 fp=0xc00011d798 sp=0xc00011d770 pc=0x564220c33e08
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0005c57a0, {0x564221efd020, 0xc0005ada40})
github.com/ollama/ollama/runner/ollamarunner/runner.go:329 +0x25 fp=0xc00011d7b8 sp=0xc00011d798 pc=0x5642210cfce5
github.com/ollama/ollama/runner/ollamarunner.Execute.gowrap2()
github.com/ollama/ollama/runner/ollamarunner/runner.go:800 +0x28 fp=0xc00011d7e0 sp=0xc00011d7b8 pc=0x5642210d4008
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00011d7e8 sp=0xc00011d7e0 pc=0x564220c283a1
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
github.com/ollama/ollama/runner/ollamarunner/runner.go:800 +0xa9c
goroutine 12 gp=0xc000102fc0 m=nil [IO wait]:
runtime.gopark(0x564220caa905?, 0xc000132100?, 0x40?, 0xda?, 0xb?)
runtime/proc.go:435 +0xce fp=0xc0005cd948 sp=0xc0005cd928 pc=0x564220c20c6e
runtime.netpollblock(0x564220c440f8?, 0x20bba426?, 0x42?)
runtime/netpoll.go:575 +0xf7 fp=0xc0005cd980 sp=0xc0005cd948 pc=0x564220be5a57
internal/poll.runtime_pollWait(0x7f5479521d98, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc0005cd9a0 sp=0xc0005cd980 pc=0x564220c1fe85
internal/poll.(*pollDesc).wait(0xc000132100?, 0xc002f86000?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0005cd9c8 sp=0xc0005cd9a0 pc=0x564220ca7307
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000132100, {0xc002f86000, 0x1000, 0x1000})
internal/poll/fd_unix.go:165 +0x27a fp=0xc0005cda60 sp=0xc0005cd9c8 pc=0x564220ca85fa
net.(*netFD).Read(0xc000132100, {0xc002f86000?, 0xc0005cdad0?, 0x564220ca77c5?})
net/fd_posix.go:55 +0x25 fp=0xc0005cdaa8 sp=0xc0005cda60 pc=0x564220d1d545
net.(*conn).Read(0xc0005a4010, {0xc002f86000?, 0x0?, 0x0?})
net/net.go:194 +0x45 fp=0xc0005cdaf0 sp=0xc0005cdaa8 pc=0x564220d2b905
net/http.(*connReader).Read(0xc0000b06c0, {0xc002f86000, 0x1000, 0x1000})
net/http/server.go:798 +0x159 fp=0xc0005cdb40 sp=0xc0005cdaf0 pc=0x564220f17af9
bufio.(*Reader).fill(0xc0001101e0)
bufio/bufio.go:113 +0x103 fp=0xc0005cdb78 sp=0xc0005cdb40 pc=0x564220d430a3
bufio.(*Reader).Peek(0xc0001101e0, 0x4)
bufio/bufio.go:152 +0x53 fp=0xc0005cdb98 sp=0xc0005cdb78 pc=0x564220d431d3
net/http.(*conn).serve(0xc0004b81b0, {0x564221efcfe8, 0xc000704840})
net/http/server.go:2137 +0x785 fp=0xc0005cdfb8 sp=0xc0005cdb98 pc=0x564220f1d8e5
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3454 +0x28 fp=0xc0005cdfe0 sp=0xc0005cdfb8 pc=0x564220f23048
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0005cdfe8 sp=0xc0005cdfe0 pc=0x564220c283a1
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3454 +0x485
rax 0x564221a518d0
rbx 0xc000047268
rcx 0xffffffffffffffd8
rdx 0xc0000471f8
rdi 0x0
rsi 0x1
rbp 0x0
rsp 0x7f54727fbe00
r8 0xc000600008
r9 0x0
r10 0x7f5400e00b4b
r11 0x0
r12 0x1
r13 0x0
r14 0xc000582700
r15 0x5642210d41a0
rip 0x5642219fee1d
rflags 0x10206
cs 0x33
fs 0x0
gs 0x0
time=2025-03-20T04:48:19.815Z level=ERROR source=server.go:449 msg="llama runner terminated" error="exit status 2"
time=2025-03-20T04:48:19.947Z level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory"
@alsimms commented on GitHub (Mar 20, 2025):
Here is the 27B debug
gemma3-27b-q4_K_M_debug.txt
@alsimms commented on GitHub (Mar 20, 2025):
Interesting, I can load the unsloth model but not the one directly from ollama.
@NandaIda commented on GitHub (Mar 20, 2025):
Can you process an image input using the unsloth model?
@rick-github commented on GitHub (Mar 20, 2025):
ollama is over-allocating layers to the GPU: available [9.6 GiB 9.8 GiB] allocating [9.5 GiB 9.6 GiB] doesn't leave much margin. See here for ways to mitigate this.
Note this is different to the ggml_backend_sched_graph_compute_async() crashes which are the bulk of the reports in this issue.
@Kazunarit commented on GitHub (Mar 20, 2025):
Thanks to everyone who is working on this issue.
My ChatBot app system executes about 3000 text chats per batch, but Gemma3:27b stops midway with ollama 0.6.2 due to a memory allocation error.
There seems to be a memory leak problem when running Gemma3:27b.
In the sample program, when "tell me a story" is repeated about 50 times, the Docker Desktop container's memory usage increases by about 7-8 GB.
requests.post(OLLAMA_API_URL...) is called with "options": {"num_ctx": 8192} specified (the same as my application).
Image data is not used.
With Gemma2:27b and Qwen2.5:32b there is no memory increase, or only a slight one, and no error occurs.
This may not be directly related to this phenomenon, but when running Gemma3, CPU usage rises to about 40%.
Other models run at about 10% CPU and mostly on the GPU.
I hope this helps with debugging.
RTX4090 CUDA NVIDIA APP v11.0.2.341, 64GB RAM
ollama 0.6.0 to 0.6.2
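For anyone trying to reproduce this, below is a minimal sketch of how the growth could be tracked while repeating the same request. It assumes the psutil package and an ollama process visible on the host (if ollama runs in a container, docker stats gives the same picture); the process-name match, model name, and request count are illustrative, not taken from the report above.
import psutil    # assumed available; used only to sample process memory
import requests

OLLAMA_API_URL = "http://localhost:11434/api/chat"

def ollama_rss_gib():
    """Sum the resident set size of every process whose name contains 'ollama'."""
    total = 0
    for p in psutil.process_iter(["name", "memory_info"]):
        name = p.info.get("name") or ""
        mem = p.info.get("memory_info")
        if "ollama" in name and mem is not None:
            total += mem.rss
    return total / 2**30

for i in range(50):  # repeat the same prompt, as in the report above
    requests.post(OLLAMA_API_URL, json={
        "model": "gemma3:27b",
        "messages": [{"role": "user", "content": "tell me a story"}],
        "stream": False,
        "options": {"num_ctx": 8192},
    })
    print(f"request {i + 1}: ollama RSS ~= {ollama_rss_gib():.2f} GiB")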
@bjj commented on GitHub (Mar 21, 2025):
but did that also make vision work?
@rick-github commented on GitHub (Mar 21, 2025):
Vision has always worked.
@rick-github commented on GitHub (Mar 22, 2025):
q4_0 and q8_0 KV quant still see a performance hit.
@ultramarinebicycle commented on GitHub (Mar 23, 2025):
@rick-github for me, performance is acceptable (PC no longer becomes unresponsive at 8k context) now but GPU memory allocation still seems to be wonky:
At 2k context:
gemma3:12b 6fd036cefda5 12 GB 7%/93% CPU/GPU
17 t/s. VRAM usage is around 8 GB and RAM usage around 10 GB.
At 8k context:
gemma3:12b 6fd036cefda5 14 GB 23%/77% CPU/GPU
7 t/s. VRAM usage is around 7 GB and RAM usage around 12 GB.
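The CPU/GPU split shown above comes from ollama ps; the same numbers can also be read from the HTTP API, which makes it easier to log how the split changes with context length. A minimal sketch follows (the size and size_vram fields match the documented /api/ps response, but treat this as illustrative rather than definitive):
import requests

# Ask the server which models are loaded; this is the same data `ollama ps` prints.
resp = requests.get("http://localhost:11434/api/ps")
resp.raise_for_status()
for m in resp.json().get("models", []):
    size = m.get("size", 0)       # total bytes ollama expects the model to occupy
    vram = m.get("size_vram", 0)  # bytes ollama believes are resident in VRAM
    cpu_pct = 100 * (size - vram) / size if size else 0
    print(f"{m.get('name')}: {size / 2**30:.1f} GiB total, "
          f"{vram / 2**30:.1f} GiB in VRAM (~{cpu_pct:.0f}% on CPU)")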
@rick-github commented on GitHub (Mar 23, 2025):
Yes, the changes that have reduced the size of the context buffer have made it harder for ollama to estimate the usage compared to what the GPU backend actually allocates. The ollama team is aware of this; I assume the estimation logic will receive some attention in the next couple of releases. In the meantime you can improve VRAM utilization by overriding num_gpu.
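The num_gpu override can be set per request through the API options. A minimal sketch, with the model name, context length, and layer count chosen only for illustration (the right num_gpu value depends on the GPU):
import requests

# Override ollama's own estimate and offload only 40 layers to the GPU.
# 40 is illustrative: lower it if cudaMalloc still fails, raise it if VRAM sits idle.
resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "gemma3:12b",
    "prompt": "Tell me a story",
    "stream": False,
    "options": {"num_ctx": 16384, "num_gpu": 40},
})
resp.raise_for_status()
print(resp.json()["response"])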
@rick-github commented on GitHub (Mar 26, 2025):
#9987 has been merged, ollama:0.6.3-rc0 goes from estimating 17G for gemma3:12b+16K cache to 12G. nvidia-smi shows there's still room for improvement but it's getting there.
@jessegross commented on GitHub (Mar 26, 2025):
@rick-github One thing to be aware of is that the old engine preallocates the worst case computation graph (max context + max batch) whereas the new engine currently does not. The KV cache is preallocated for the full context in both cases though.
As a result, unless you have exercised the worst case,
nvidia-smi will underreport the total amount of memory that may be needed, which is what ollama ps is showing. The memory consumption will stay at the high water mark of a batch until the runner process is restarted. There is definitely still a gap between the estimate and actual worst case usage but it might not be quite as large as it seems.
The behavior of not preallocating the worst case may change in the future but that's the way it is now.
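Since that high-water-mark memory is only released when the runner restarts, one way to get a clean measurement between runs is to unload the model explicitly, which stops the runner process. A minimal sketch using the documented keep_alive option (the model name is illustrative):
import requests

# A request with an empty prompt and keep_alive=0 unloads the model immediately,
# stopping the runner process and releasing its memory.
requests.post("http://localhost:11434/api/generate", json={
    "model": "gemma3:12b",
    "keep_alive": 0,
})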
@Kazunarit commented on GitHub (Mar 31, 2025):
Run with 0.6.3.
When running the sample program that repeats "tell me a story", memory usage continues to increase for Gemma3:12b and Gemma3:27b.
It does not increase for other models.
In the attached graph, gemma3:12b is running from around 10:52,
phi4 from just after 11:01,
deepseek-r1:14b from around 11:07.
Increased memory usage will cause crashes on systems that run continuously.
I would appreciate it if you could address this issue.
import requests
import json
import time

# Global constants
OLLAMA_API_URL = "http://localhost:11434/api/chat"
OLLAMA_MODEL = "gemma3:12b"
REQUEST_COUNT = 1000   # Number of requests
CONTEXT_LENGTH = 8192  # Context length

def send_request(count):
    """
    Sends a request to the Ollama API and processes the streaming response.
    """
    data = {
        "model": OLLAMA_MODEL,
        "messages": [{"role": "user", "content": "Tell me a story"}],
        "stream": True,
        "options": {"num_ctx": CONTEXT_LENGTH}
    }
    start = time.time()
    with requests.post(OLLAMA_API_URL, json=data, stream=True) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            if line:
                json.loads(line)  # consume each streamed chunk
    print(f"Request {count}: completed in {time.time() - start:.1f}s")

def main():
    """
    Main execution function.
    """
    print(f"Starting Ollama API requests")
    print(f"API URL: {OLLAMA_API_URL}")
    print(f"Model: {OLLAMA_MODEL}")
    print(f"Context Length: {CONTEXT_LENGTH}")
    print(f"Total Requests: {REQUEST_COUNT}")
    print("=" * 50)
    for i in range(1, REQUEST_COUNT + 1):
        send_request(i)

if __name__ == "__main__":
    main()
@rzykov commented on GitHub (Mar 31, 2025):
I observed the same issue with Gemma 3 4b with a long context on 0.6.3. But 0.6.3 definitely reduced the leak.
My context length is 50 000. 3090 GPU
@rick-github commented on GitHub (Mar 31, 2025):
Growth in RSS is being investigated in https://github.com/ollama/ollama/issues/10040.
@jessegross commented on GitHub (Apr 8, 2025):
Closing this, as the original issue related to VRAM has been solved; please follow up on the system memory leak in #10040