Mirror of https://github.com/ollama/ollama.git, synced 2026-05-07 08:30:05 -05:00
[GH-ISSUE #11008] gemma3:12b does not load onto Nvidia Card if AMD is Present but deepseek:12b does #69317
Closed
Opened 2026-05-04 17:46:36 -05:00 by GiteaMirror · 10 comments
Originally created by @sto1 on GitHub (Jun 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11008
What is the issue?
I'm not able to load gemma3:12b on the Nvidia 3060 12GB card, but other models work, even if they have to partly use the CPU. I'm working on Windows with version 0.9.0.
Relevant log output
OS
Windows
GPU
No response
CPU
No response
Ollama version
No response
@sto1 commented on GitHub (Jun 7, 2025):
The same problem occurs if I use the Linux version under Ubuntu.
@rick-github commented on GitHub (Jun 7, 2025):
Ollama has determined that it can fit the entire model on the ROCm card. However, it was unable to find a ROCm backend, so it loaded the model on the CPU instead. Is there a rocm directory in C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama?
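For reference, a quick way to run that check from PowerShell (an illustrative command, not part of the original thread; the path is the install directory that appears in the logs below):
PS> Get-ChildItem C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama -Directory
The logs further down show both a cuda_v12 and a rocm subdirectory being loaded from, so a rocm entry here would confirm the ROCm backend shipped with this install.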
@sto1 commented on GitHub (Jun 7, 2025):
Yes, it was able to load ROCm. But I renamed the library to force it to use the NVIDIA card (faster); then it used the CPU instead. I'm now able to run it on Ubuntu, where it works with: CUDA_VISIBLE_DEVICES=0 HIP_VISIBLE_DEVICES="" ROCR_VISIBLE_DEVICES=""
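(Presumably those variables are set for the server process, roughly along the lines of:
$ CUDA_VISIBLE_DEVICES=0 HIP_VISIBLE_DEVICES="" ROCR_VISIBLE_DEVICES="" ollama serve
so that only the NVIDIA device is visible to the compute runtimes; the exact invocation is not shown in the thread.)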
@sto1 commented on GitHub (Jun 7, 2025):
The reason I try this: it's faster to use the NVIDIA 3060 with part of the model on the CPU than to fit the full model on the AMD card, even though it's a 6900.
@rick-github commented on GitHub (Jun 7, 2025):
Try setting OLLAMA_LLM_LIBRARY=cuda_v12 instead of breaking your installation.
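A sketch of how that could be applied on Windows, assuming the variable has to be visible to the Ollama server process, so it is set user-wide and the app restarted (the commands are illustrative, not from the thread):
PS> [Environment]::SetEnvironmentVariable("OLLAMA_LLM_LIBRARY", "cuda_v12", "User")
PS> # quit Ollama from the taskbar, then start it again so the server picks up the variable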
@sto1 commented on GitHub (Jun 7, 2025):
Thanks for your support, I will try tomorrow.
@sto1 commented on GitHub (Jun 7, 2025):
I just followed Gemini 2.5 Flash ;-)
@sto1 commented on GitHub (Jun 8, 2025):
I reverted my changes and set the variables:
PS I:\Users\storc> $env:OLLAMA_LLM_LIBRARY = "cuda_v12"
PS I:\Users\storc> $env:CUDA_VISIBLE_DEVICES="0"
PS I:\Users\storc> $env:HIP_VISIBLE_DEVICES=""
PS I:\Users\storc> ollama run gemma:12b --verbose
The system tells me that it is running on the GPU, but it is not!
It does not allocate the GPU memory and it's too slow!
It now works fine under WSL Ubuntu, but not on Windows! But my Ubuntu has no access to the AMD card!
(base) stor@DESKTOP-NFL740H:$ ollama ps
NAME          ID              SIZE    PROCESSOR    UNTIL
gemma3:12b    f4031aab637d    11 GB   100% GPU     3 minutes from now
(base) stor@DESKTOP-NFL740H:$ nvidia-smi
Sun Jun 8 09:11:39 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.02 Driver Version: 560.94 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:2D:00.0 Off | N/A |
| 0% 41C P8 13W / 170W | 27MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
time=2025-06-08T08:25:37.011+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\Users\storc\AppData\Local\Programs\Ollama\ollama.exe runner --model I:\Benutzer\storc\.ollama\blobs\sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 4096 --batch-size 512 --n-gpu-layers 26 --threads 8 --parallel 1 --port 55998"
time=2025-06-08T08:25:37.014+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-08T08:25:37.014+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-08T08:25:37.014+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-08T08:25:37.049+02:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-06-08T08:25:37.085+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
time=2025-06-08T08:25:37.085+02:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:55998"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from I:\Benutzer\storc.ollama\blobs\sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 39.59 GiB (4.82 BPW)
time=2025-06-08T08:25:37.265+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 8192
print_info: n_layer = 80
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 28672
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 70B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: CPU_Mapped model buffer size = 40543.11 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.52 MiB
llama_kv_cache_unified: kv_size = 4096, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1, padding = 32
llama_kv_cache_unified: CPU KV buffer size = 1280.00 MiB
llama_kv_cache_unified: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB
llama_context: CPU compute buffer size = 584.01 MiB
llama_context: graph nodes = 2726
llama_context: graph splits = 1
time=2025-06-08T08:25:51.786+02:00 level=INFO source=server.go:630 msg="llama runner started in 14.77 seconds"
[GIN] 2025/06/08 - 08:25:51 | 200 | 15.8432965s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 08:29:10 | 200 | 21.5046313s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:32:18 | 200 | 28.6268919s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:32:26 | 200 | 3.2012421s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:38:21 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:38:21 | 200 | 998.7µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:38:43 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:38:43 | 200 | 999.3µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:40:02 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:40:02 | 200 | 997.9µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:40:39 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:40:39 | 200 | 405.8µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:41:37 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:41:37 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 08:41:40 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:41:40 | 200 | 1.0022ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:45:00 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:45:00 | 200 | 500.2µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:46:51 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:46:51 | 200 | 497.1µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/06/08 - 08:47:33 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:47:33 | 404 | 499.7µs | 127.0.0.1 | POST "/api/show"
time=2025-06-08T08:47:34.877+02:00 level=INFO source=download.go:177 msg="downloading a99b7f834d75 in 16 373 MB part(s)"
time=2025-06-08T08:48:19.277+02:00 level=INFO source=download.go:177 msg="downloading a242d8dfdc8f in 1 487 B part(s)"
time=2025-06-08T08:48:20.607+02:00 level=INFO source=download.go:177 msg="downloading 75357d685f23 in 1 28 B part(s)"
time=2025-06-08T08:48:21.944+02:00 level=INFO source=download.go:177 msg="downloading 832dd9e00a68 in 1 11 KB part(s)"
time=2025-06-08T08:48:23.276+02:00 level=INFO source=download.go:177 msg="downloading 52d2a7aa3a38 in 1 23 B part(s)"
time=2025-06-08T08:48:24.606+02:00 level=INFO source=download.go:177 msg="downloading 83b9da835d9f in 1 567 B part(s)"
[GIN] 2025/06/08 - 08:48:31 | 200 | 57.758926s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 08:48:31 | 200 | 47.1552ms | 127.0.0.1 | POST "/api/show"
time=2025-06-08T08:48:32.142+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=I:\Benutzer\storc.ollama\blobs\sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 gpu=GPU-cd38eb1c-290a-15c2-d573-b2c87845fde3 parallel=2 available=11793334272 required="8.4 GiB"
time=2025-06-08T08:48:32.531+02:00 level=INFO source=server.go:135 msg="system memory" total="95.9 GiB" free="77.9 GiB" free_swap="74.5 GiB"
time=2025-06-08T08:48:32.533+02:00 level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.4 GiB" memory.required.partial="8.4 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[8.4 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="522.7 MiB" memory.graph.partial="522.7 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-06-08T08:48:32.564+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\Users\storc\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model I:\Benutzer\storc\.ollama\blobs\sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --no-mmap --parallel 2 --port 60362"
time=2025-06-08T08:48:32.566+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-08T08:48:32.566+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-08T08:48:32.566+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-08T08:48:32.603+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-06-08T08:48:32.626+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:60362"
time=2025-06-08T08:48:32.654+02:00 level=INFO source=ggml.go:92 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=36
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
load_backend: loaded CUDA backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-06-08T08:48:32.757+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-08T08:48:32.817+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2025-06-08T08:48:33.020+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="292.4 MiB"
time=2025-06-08T08:48:33.020+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CUDA0 size="5.3 GiB"
time=2025-06-08T08:48:33.271+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.7 GiB"
time=2025-06-08T08:48:33.271+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-06-08T08:48:33.344+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.7 GiB"
time=2025-06-08T08:48:33.344+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="16.8 MiB"
time=2025-06-08T08:48:34.320+02:00 level=INFO source=server.go:630 msg="llama runner started in 1.75 seconds"
[GIN] 2025/06/08 - 08:48:34 | 200 | 2.6222462s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 08:49:02 | 200 | 7.0925569s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:49:32 | 200 | 611.0885ms | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/06/08 - 08:49:52 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 08:49:52 | 404 | 499.1µs | 127.0.0.1 | POST "/api/show"
time=2025-06-08T08:49:53.356+02:00 level=INFO source=download.go:177 msg="downloading a99b7f834d75 in 16 373 MB part(s)"
time=2025-06-08T08:50:37.711+02:00 level=INFO source=download.go:177 msg="downloading a242d8dfdc8f in 1 487 B part(s)"
time=2025-06-08T08:50:39.020+02:00 level=INFO source=download.go:177 msg="downloading 75357d685f23 in 1 28 B part(s)"
time=2025-06-08T08:50:40.383+02:00 level=INFO source=download.go:177 msg="downloading 832dd9e00a68 in 1 11 KB part(s)"
time=2025-06-08T08:50:41.693+02:00 level=INFO source=download.go:177 msg="downloading 52d2a7aa3a38 in 1 23 B part(s)"
time=2025-06-08T08:50:43.057+02:00 level=INFO source=download.go:177 msg="downloading 83b9da835d9f in 1 567 B part(s)"
[GIN] 2025/06/08 - 08:50:50 | 200 | 57.5681917s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 08:50:50 | 200 | 33.5409ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/06/08 - 08:50:50 | 200 | 17.9988ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 09:08:37 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:08:37 | 404 | 501.5µs | 127.0.0.1 | POST "/api/show"
[GIN] 2025/06/08 - 09:08:38 | 200 | 396.3477ms | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 09:08:46 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:08:46 | 404 | 500.6µs | 127.0.0.1 | POST "/api/show"
time=2025-06-08T09:08:47.575+02:00 level=INFO source=download.go:177 msg="downloading e8ad13eff07a in 16 509 MB part(s)"
time=2025-06-08T09:08:52.474+02:00 level=INFO source=download.go:295 msg="e8ad13eff07a part 13 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2025-06-08T09:09:47.101+02:00 level=INFO source=download.go:177 msg="downloading e0a42594d802 in 1 358 B part(s)"
time=2025-06-08T09:09:48.446+02:00 level=INFO source=download.go:177 msg="downloading dd084c7d92a3 in 1 8.4 KB part(s)"
time=2025-06-08T09:09:49.754+02:00 level=INFO source=download.go:177 msg="downloading 3116c5225075 in 1 77 B part(s)"
time=2025-06-08T09:09:51.060+02:00 level=INFO source=download.go:177 msg="downloading 6819964c2bcf in 1 490 B part(s)"
[GIN] 2025/06/08 - 09:10:00 | 200 | 1m13s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/06/08 - 09:10:00 | 200 | 64.1806ms | 127.0.0.1 | POST "/api/show"
time=2025-06-08T09:10:00.722+02:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=I:\Benutzer\storc.ollama\blobs\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de gpu=0 parallel=2 available=15385755648 required="11.0 GiB"
time=2025-06-08T09:10:01.103+02:00 level=INFO source=server.go:135 msg="system memory" total="95.9 GiB" free="78.0 GiB" free_swap="74.0 GiB"
time=2025-06-08T09:10:01.105+02:00 level=INFO source=server.go:168 msg=offload library=rocm layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[14.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="11.0 GiB" memory.required.partial="11.0 GiB" memory.required.kv="1.3 GiB" memory.required.allocations="[11.0 GiB]" memory.weights.total="6.8 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="787.5 MiB" memory.graph.full="519.5 MiB" memory.graph.partial="1.3 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-06-08T09:10:01.167+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="C:\Users\storc\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model I:\Benutzer\storc\.ollama\blobs\sha256-e8ad13eff07a78d89926e9e8b882317d082ef5bf9768ad7b50fcdbbcd63748de --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 8 --parallel 2 --port 64366"
time=2025-06-08T09:10:01.169+02:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-06-08T09:10:01.169+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-06-08T09:10:01.170+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
time=2025-06-08T09:10:01.203+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-06-08T09:10:01.226+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:64366"
time=2025-06-08T09:10:01.286+02:00 level=INFO source=ggml.go:92 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1065 num_key_values=37
load_backend: loaded CPU backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-06-08T09:10:01.420+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 6900 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from C:\Users\storc\AppData\Local\Programs\Ollama\lib\ollama\rocm\ggml-hip.dll
time=2025-06-08T09:10:01.455+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-06-08T09:10:04.252+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=ROCm0 size="7.6 GiB"
time=2025-06-08T09:10:04.252+02:00 level=INFO source=ggml.go:351 msg="model weights" buffer=CPU size="787.5 MiB"
time=2025-06-08T09:10:04.506+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=ROCm0 buffer_type=ROCm0 size="1.1 GiB"
time=2025-06-08T09:10:04.506+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="0 B"
time=2025-06-08T09:10:04.908+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=ROCm0 buffer_type=ROCm0 size="1.1 GiB"
time=2025-06-08T09:10:04.908+02:00 level=INFO source=ggml.go:638 msg="compute graph" backend=CPU buffer_type=CPU size="7.5 MiB"
time=2025-06-08T09:10:05.935+02:00 level=INFO source=server.go:630 msg="llama runner started in 4.77 seconds"
[GIN] 2025/06/08 - 09:10:05 | 200 | 5.7064075s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/06/08 - 09:10:08 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:10:08 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 09:10:52 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:10:52 | 200 | 547µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 09:11:36 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/06/08 - 09:11:36 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2025/06/08 - 09:14:37 | 200 | 4m5s | 127.0.0.1 | POST "/api/chat"
@sto1 commented on GitHub (Jun 8, 2025):
It is running on the AMD card again.
@sto1 commented on GitHub (Jun 8, 2025):
I have closed down everything and started from scratch. Now it's also working on Windows. Thanks for your support!!!