Mirror of https://github.com/ollama/ollama.git (synced 2026-05-07 00:22:43 -05:00)
[GH-ISSUE #14287] Ollama docker doesn't recognize GPU on new RTX Pro 6000 Blackwell GPU with Ubuntu 24.04 #9300
Closed
opened 2026-04-12 22:10:01 -05:00 by GiteaMirror · 30 comments
Originally created by @akhilec on GitHub (Feb 16, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14287
What is the issue?
details-Log.txt
I have 10 NVIDIA RTX PRO 6000 Blackwell GPU devices on Ubuntu 24.04. The GPUs are not getting discovered. I tried all the latest NVIDIA drivers (580 and 590 series) but no luck. I have also tried CUDA 13.1.
Relevant log output
OS: No response
GPU: No response
CPU: No response
Ollama version: No response
@rick-github commented on GitHub (Feb 16, 2026):
Is that nvidia-smi output from inside or outside of the container?
@akhilec commented on GitHub (Feb 17, 2026):
It is from inside. I could see all the devices from inside the container.
@rick-github commented on GitHub (Feb 17, 2026):
Set OLLAMA_DEBUG=2 in the server environment and post the log from start to the line that says inference compute.
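For a Docker deployment that means passing the variable when starting the container; a minimal sketch, assuming the standard image and a container named ollama (the name, volume, and port are assumptions, not from the thread):
docker run -d --gpus=all -e OLLAMA_DEBUG=2 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker logs -f ollama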
@akhilec commented on GitHub (Feb 17, 2026):
Sure, attached is the detailed log you asked for. Also, below are the commands I used to pull the Docker image and run a model.
2864df567639067f8c2cd1429ea959b0dfd683564fb50c21e95ad5adb0928703-json.log
@rick-github commented on GitHub (Feb 17, 2026):
Please post plain text logs.
What's the output of nvidia-smi outside of the container?
@akhilec commented on GitHub (Feb 17, 2026):
Same as it was shown from within the container.
@akhilec commented on GitHub (Feb 17, 2026):
See the result from outside the container and the result from within the container.
@rick-github commented on GitHub (Feb 17, 2026):
Are the GPUs discovered if you run ollama natively?
@akhilec commented on GitHub (Feb 17, 2026):
No, same error. Attached are the detailed logs when ollama is run locally.
ollama_native-detail-logs.txt
@rick-github commented on GitHub (Feb 18, 2026):
Both the v12 and v13 libraries get initialization errors, but it's not clear why. What happens if you restrict the GPUs to one device by setting CUDA_VISIBLE_DEVICES=0? What's the output of nvidia-smi -q?
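Note that CUDA_VISIBLE_DEVICES only limits which devices CUDA applications enumerate; nvidia-smi inside the container still reports every GPU the NVIDIA container runtime passes through, which is why the next reply still lists all 10 devices. A sketch of either approach (the flags shown and the container name are assumptions):
docker run -d --gpus=all -e CUDA_VISIBLE_DEVICES=0 -e OLLAMA_DEBUG=2 --name ollama ollama/ollama
docker run -d --gpus device=0 -e OLLAMA_DEBUG=2 --name ollama ollama/ollama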
@akhilec commented on GitHub (Feb 18, 2026):
I set CUDA_VISIBLE_DEVICES="0" as a -e variable on my docker run, but nvidia-smi from within the container still shows all 10 devices.
For root@hq-it-ai:~# sudo docker exec -it ollama nvidia-smi -q, I got data for all 10 devices, but I copied the data for one device and pasted it below (see towards the end).
Detail log attached
detail_log_with CUDA_VISIBLE_.txt
@rick-github commented on GitHub (Feb 18, 2026):
Try this:
It uses an older version that has extra debugging during device discovery. Run it, wait for the "inference compute" line, then ^C and post the output.
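Judging from @akhilec's reply below, the suggested command was presumably:
docker run --gpus=all -e OLLAMA_DEBUG=2 ollama/ollama:0.12.3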
@akhilec commented on GitHub (Feb 18, 2026):
root@hq-it-ai:~# docker run --gpus=all -e OLLAMA_DEBUG=2 ollama/ollama:0.12.3
Unable to find image 'ollama/ollama:0.12.3' locally
0.12.3: Pulling from ollama/ollama
36591e7dd4a3: Pull complete
804f1b698a9f: Pull complete
66ef1ccd9b48: Pull complete
953cdd413371: Pull complete
b4f95af85236: Download complete
Digest: sha256:c622a7adec67cf5bd7fe1802b7e26aa583a955a54e91d132889301f50c3e0bd0
Status: Downloaded newer image for ollama/ollama:0.12.3
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBXRDpA/veXS8w4StHVGy9q8QSo0zfsZxr8yV5FSuO1x
time=2026-02-18T15:23:39.656Z level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-02-18T15:23:39.656Z level=INFO source=images.go:518 msg="total blobs: 0"
time=2026-02-18T15:23:39.656Z level=INFO source=images.go:525 msg="total unused blobs removed: 0"
time=2026-02-18T15:23:39.657Z level=INFO source=routes.go:1528 msg="Listening on [::]:11434 (version 0.12.3)"
time=2026-02-18T15:23:39.657Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2026-02-18T15:23:39.657Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2026-02-18T15:23:39.673Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2026-02-18T15:23:39.673Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so
time=2026-02-18T15:23:39.673Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets//lib/libcuda.so /usr/lib/-linux-gnu/nvidia/current/libcuda.so /usr/lib/-linux-gnu/libcuda.so /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers//libcuda.so /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2026-02-18T15:23:39.675Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.580.126.09]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.580.126.09
dlsym: cuInit - 0x7624fa527850
dlsym: cuDriverGetVersion - 0x7624fa527910
dlsym: cuDeviceGetCount - 0x7624fa527a90
dlsym: cuDeviceGet - 0x7624fa5279d0
dlsym: cuDeviceGetAttribute - 0x7624fa527f10
dlsym: cuDeviceGetUuid - 0x7624fa57eb10
dlsym: cuDeviceGetName - 0x7624fa527b50
dlsym: cuCtxCreate_v3 - 0x7624fa57c2b0
dlsym: cuMemGetInfo_v2 - 0x7624fa52b780
dlsym: cuCtxDestroy - 0x7624fa57e1b0
calling cuInit
cuInit err: 3
time=2026-02-18T15:23:39.875Z level=INFO source=gpu.go:631 msg="Unable to load cudart library /usr/lib/x86_64-linux-gnu/libcuda.so.580.126.09: cuda driver library init failure: 3"
time=2026-02-18T15:23:39.875Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcudart.so*
time=2026-02-18T15:23:39.875Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/cuda_v*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers//libcudart.so /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
time=2026-02-18T15:23:39.876Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths="[/usr/lib/ollama/cuda_v12/libcudart.so.12.8.90 /usr/lib/ollama/cuda_v13/libcudart.so.13.0.88]"
cudaSetDevice err: 3
time=2026-02-18T15:23:40.051Z level=DEBUG source=gpu.go:593 msg="Unable to load cudart library /usr/lib/ollama/cuda_v12/libcudart.so.12.8.90: cudart init failure: 3"
cudaSetDevice err: 3
time=2026-02-18T15:23:40.224Z level=DEBUG source=gpu.go:593 msg="Unable to load cudart library /usr/lib/ollama/cuda_v13/libcudart.so.13.0.88: cudart init failure: 3"
time=2026-02-18T15:23:40.224Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2026-02-18T15:23:40.224Z level=INFO source=gpu.go:396 msg="no compatible GPUs were discovered"
time=2026-02-18T15:23:40.224Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="2267.2 GiB" available="2237.1 GiB"
@rick-github commented on GitHub (Feb 18, 2026):
From the CUDA Toolkit documentation:
cudaErrorInitializationError = 3
Which really doesn't shed a lot more light on the situation. What's the output of the following?
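Presumably, judging from the replies below, the requested commands were:
grep -i nvidia /var/log/dmesg
lsmod | grep nv
ls -l /dev/nvidia*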
@akhilec commented on GitHub (Feb 18, 2026):
root@hq-it-ai:/usr/lib# grep -i nvidia /var/log/dmesg
[ 24.873726] kernel: nvidia: loading out-of-tree module taints kernel.
[ 24.873734] kernel: nvidia: module verification failed: signature and/or required key missing - tainting kernel
[ 24.946863] kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 510
[ 24.985371] kernel: nvidia 0000:54:00.0: enabling device (0000 -> 0002)
[ 25.022394] kernel: nvidia 0000:57:00.0: enabling device (0000 -> 0002)
[ 25.063685] kernel: nvidia 0000:5a:00.0: enabling device (0000 -> 0002)
[ 25.130840] kernel: nvidia 0000:5d:00.0: enabling device (0000 -> 0002)
[ 25.154457] kernel: nvidia 0000:5e:00.0: enabling device (0000 -> 0002)
[ 25.225437] kernel: nvidia 0000:d3:00.0: enabling device (0000 -> 0002)
[ 25.262423] kernel: nvidia 0000:d6:00.0: enabling device (0000 -> 0002)
[ 25.297391] kernel: nvidia 0000:d9:00.0: enabling device (0000 -> 0002)
[ 25.332523] kernel: nvidia 0000:dc:00.0: enabling device (0000 -> 0002)
[ 25.355507] kernel: nvidia 0000:dd:00.0: enabling device (0000 -> 0002)
[ 25.366279] kernel: NVRM: loading NVIDIA UNIX Open Kernel Module for x86_64 580.126.09 Release Build (dvs-builder@U22-I3-AM02-24-3) Wed Jan 7 22:51:36 UTC 2026
[ 25.383176] kernel: nvidia-modeset: Loading NVIDIA UNIX Open Kernel Mode Setting Driver for x86_64 580.126.09 Release Build (dvs-builder@U22-I3-AM02-24-3) Wed Jan 7 22:33:56 UTC 2026
[ 25.387260] kernel: [drm] [nvidia-drm] [GPU ID 0x00005400] Loading driver
[ 26.974766] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:54:00.0 on minor 1
[ 26.974797] kernel: nvidia 0000:54:00.0: [drm] No compatible format found
[ 26.974800] kernel: nvidia 0000:54:00.0: [drm] Cannot find any crtc or sizes
[ 26.974835] kernel: [drm] [nvidia-drm] [GPU ID 0x00005700] Loading driver
[ 31.684692] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:57:00.0 on minor 2
[ 31.684736] kernel: nvidia 0000:57:00.0: [drm] No compatible format found
[ 31.684739] kernel: nvidia 0000:57:00.0: [drm] Cannot find any crtc or sizes
[ 31.684779] kernel: [drm] [nvidia-drm] [GPU ID 0x00005a00] Loading driver
[ 31.709490] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:5a:00.0 on minor 3
[ 31.709515] kernel: nvidia 0000:5a:00.0: [drm] No compatible format found
[ 31.709517] kernel: nvidia 0000:5a:00.0: [drm] Cannot find any crtc or sizes
[ 31.709554] kernel: [drm] [nvidia-drm] [GPU ID 0x00005d00] Loading driver
[ 31.714236] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:5d:00.0 on minor 4
[ 31.714261] kernel: nvidia 0000:5d:00.0: [drm] No compatible format found
[ 31.714263] kernel: nvidia 0000:5d:00.0: [drm] Cannot find any crtc or sizes
[ 31.714290] kernel: [drm] [nvidia-drm] [GPU ID 0x00005e00] Loading driver
[ 31.718846] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:5e:00.0 on minor 5
[ 31.718867] kernel: nvidia 0000:5e:00.0: [drm] No compatible format found
[ 31.718868] kernel: nvidia 0000:5e:00.0: [drm] Cannot find any crtc or sizes
[ 31.718921] kernel: [drm] [nvidia-drm] [GPU ID 0x0000d300] Loading driver
[ 31.723516] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:d3:00.0 on minor 6
[ 31.723529] kernel: nvidia 0000:d3:00.0: [drm] No compatible format found
[ 31.723531] kernel: nvidia 0000:d3:00.0: [drm] Cannot find any crtc or sizes
[ 31.723554] kernel: [drm] [nvidia-drm] [GPU ID 0x0000d600] Loading driver
[ 31.728083] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:d6:00.0 on minor 7
[ 31.728093] kernel: nvidia 0000:d6:00.0: [drm] No compatible format found
[ 31.728095] kernel: nvidia 0000:d6:00.0: [drm] Cannot find any crtc or sizes
[ 31.728116] kernel: [drm] [nvidia-drm] [GPU ID 0x0000d900] Loading driver
[ 31.732799] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:d9:00.0 on minor 8
[ 31.732809] kernel: nvidia 0000:d9:00.0: [drm] No compatible format found
[ 31.732811] kernel: nvidia 0000:d9:00.0: [drm] Cannot find any crtc or sizes
[ 31.732835] kernel: [drm] [nvidia-drm] [GPU ID 0x0000dc00] Loading driver
[ 31.737556] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:dc:00.0 on minor 9
[ 31.737568] kernel: nvidia 0000:dc:00.0: [drm] No compatible format found
[ 31.737570] kernel: nvidia 0000:dc:00.0: [drm] Cannot find any crtc or sizes
[ 31.737593] kernel: [drm] [nvidia-drm] [GPU ID 0x0000dd00] Loading driver
[ 31.742228] kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:dd:00.0 on minor 10
[ 31.742238] kernel: nvidia 0000:dd:00.0: [drm] No compatible format found
[ 31.742240] kernel: nvidia 0000:dd:00.0: [drm] Cannot find any crtc or sizes
@akhilec commented on GitHub (Feb 18, 2026):
root@hq-it-ai:/usr/lib# lsmod | grep nv
nvidia_uvm 2158592 4
nvidia_drm 139264 0
nvidia_modeset 1814528 1 nvidia_drm
nvidia 14409728 47 nvidia_uvm,nvidia_modeset
video 77824 1 nvidia_modeset
ecc 45056 1 nvidia
nvme 61440 7
nvme_core 212992 8 nvme
nvme_auth 28672 1 nvme_core
@akhilec commented on GitHub (Feb 18, 2026):
root@hq-it-ai:/usr/lib# ls -l /dev/nvidia*
crw-rw-rw- 1 root root 195, 0 Feb 18 05:05 /dev/nvidia0
crw-rw-rw- 1 root root 195, 1 Feb 18 05:05 /dev/nvidia1
crw-rw-rw- 1 root root 195, 2 Feb 18 05:05 /dev/nvidia2
crw-rw-rw- 1 root root 195, 3 Feb 18 05:05 /dev/nvidia3
crw-rw-rw- 1 root root 195, 4 Feb 18 05:05 /dev/nvidia4
crw-rw-rw- 1 root root 195, 5 Feb 18 05:05 /dev/nvidia5
crw-rw-rw- 1 root root 195, 6 Feb 18 05:05 /dev/nvidia6
crw-rw-rw- 1 root root 195, 7 Feb 18 05:05 /dev/nvidia7
crw-rw-rw- 1 root root 195, 8 Feb 18 05:05 /dev/nvidia8
crw-rw-rw- 1 root root 195, 9 Feb 18 05:05 /dev/nvidia9
crw-rw-rw- 1 root root 195, 255 Feb 18 05:05 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Feb 18 05:05 /dev/nvidia-modeset
crw-rw-rw- 1 root root 508, 0 Feb 18 05:05 /dev/nvidia-uvm
crw-rw-rw- 1 root root 508, 1 Feb 18 05:05 /dev/nvidia-uvm-tools
/dev/nvidia-caps:
total 0
cr-------- 1 root root 511, 1 Feb 18 05:05 nvidia-cap1
cr--r--r-- 1 root root 511, 2 Feb 18 05:05 nvidia-cap2
@rick-github commented on GitHub (Feb 18, 2026):
Try this:
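The suggested commands, echoed verbatim by @akhilec below, disable HMM (heterogeneous memory management) in the nvidia_uvm kernel module and reload it:
echo 'options nvidia_uvm uvm_disable_hmm=1' > /etc/modprobe.d/nvidia-uvm.conf
modprobe -r nvidia_uvm
modprobe nvidia_uvm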
@akhilec commented on GitHub (Feb 18, 2026):
at the host level?
@rick-github commented on GitHub (Feb 18, 2026):
Yes.
@akhilec commented on GitHub (Feb 18, 2026):
root@hq-it-ai:/usr/lib# echo 'options nvidia_uvm uvm_disable_hmm=1' > /etc/modprobe.d/nvidia-uvm.conf
root@hq-it-ai:/usr/lib# modprobe -r nvidia_uvm
root@hq-it-ai:/usr/lib# modprobe nvidia_uvm
root@hq-it-ai:/usr/lib#
I tried all these commands and didn't see any output. Was I expected to see anything?
@rick-github commented on GitHub (Feb 18, 2026):
If there were no errors, now try:
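The step was presumably to start the debug container again and watch its log; a sketch, assuming the same ollama/ollama:0.12.3 image as before:
docker run -d --gpus=all -e OLLAMA_DEBUG=2 ollama/ollama:0.12.3
docker logs -f <container id>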
@akhilec commented on GitHub (Feb 18, 2026):
root@hq-it-ai:~# docker logs -f 22c3299c409d
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICC7kq7tAKGc+OqHhoUYl51S9zqrgwoNyGNQX0Oa3zi3
time=2026-02-18T16:09:54.367Z level=INFO source=routes.go:1475 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-02-18T16:09:54.368Z level=INFO source=images.go:518 msg="total blobs: 0"
time=2026-02-18T16:09:54.368Z level=INFO source=images.go:525 msg="total unused blobs removed: 0"
time=2026-02-18T16:09:54.368Z level=INFO source=routes.go:1528 msg="Listening on [::]:11434 (version 0.12.3)"
time=2026-02-18T16:09:54.368Z level=DEBUG source=sched.go:121 msg="starting llm scheduler"
time=2026-02-18T16:09:54.368Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2026-02-18T16:09:54.376Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2026-02-18T16:09:54.376Z level=DEBUG source=gpu.go:520 msg="Searching for GPU library" name=libcuda.so
time=2026-02-18T16:09:54.376Z level=DEBUG source=gpu.go:544 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets//lib/libcuda.so /usr/lib/-linux-gnu/nvidia/current/libcuda.so /usr/lib/-linux-gnu/libcuda.so /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers//libcuda.so /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2026-02-18T16:09:54.377Z level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.580.126.09]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.580.126.09
dlsym: cuInit - 0x700b0a527850
dlsym: cuDriverGetVersion - 0x700b0a527910
dlsym: cuDeviceGetCount - 0x700b0a527a90
dlsym: cuDeviceGet - 0x700b0a5279d0
dlsym: cuDeviceGetAttribute - 0x700b0a527f10
dlsym: cuDeviceGetUuid - 0x700b0a57eb10
dlsym: cuDeviceGetName - 0x700b0a527b50
dlsym: cuCtxCreate_v3 - 0x700b0a57c2b0
dlsym: cuMemGetInfo_v2 - 0x700b0a52b780
dlsym: cuCtxDestroy - 0x700b0a57e1b0
calling cuInit
calling cuDriverGetVersion
raw version 0x32c8
CUDA driver version: 13.0
calling cuDeviceGetCount
device count 10
time=2026-02-18T16:09:55.615Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=10 library=/usr/lib/x86_64-linux-gnu/libcuda.so.580.126.09
[GPU-dec32acf-20bb-2380-c2a0-291da4e33fba] CUDA totalMem 97249mb
[GPU-dec32acf-20bb-2380-c2a0-291da4e33fba] CUDA freeMem 96687mb
[GPU-dec32acf-20bb-2380-c2a0-291da4e33fba] Compute Capability 12.0
[GPU-39d80ed7-aa0f-4a25-f778-e1f2f2ffd9e9] CUDA totalMem 97249mb
[GPU-39d80ed7-aa0f-4a25-f778-e1f2f2ffd9e9] CUDA freeMem 96687mb
[GPU-39d80ed7-aa0f-4a25-f778-e1f2f2ffd9e9] Compute Capability 12.0
[GPU-2d43c411-a53f-a2fb-0374-5636d1b750b4] CUDA totalMem 97249mb
[GPU-2d43c411-a53f-a2fb-0374-5636d1b750b4] CUDA freeMem 96687mb
[GPU-2d43c411-a53f-a2fb-0374-5636d1b750b4] Compute Capability 12.0
[GPU-eb51bc4d-4f67-c734-23f7-5fab1ef2e885] CUDA totalMem 97249mb
[GPU-eb51bc4d-4f67-c734-23f7-5fab1ef2e885] CUDA freeMem 96687mb
[GPU-eb51bc4d-4f67-c734-23f7-5fab1ef2e885] Compute Capability 12.0
[GPU-f474b2ec-e25b-2723-be82-cac7b99609fa] CUDA totalMem 97249mb
[GPU-f474b2ec-e25b-2723-be82-cac7b99609fa] CUDA freeMem 96687mb
[GPU-f474b2ec-e25b-2723-be82-cac7b99609fa] Compute Capability 12.0
[GPU-fc02a6f3-15da-232f-cd45-503f90a1c4b7] CUDA totalMem 97249mb
[GPU-fc02a6f3-15da-232f-cd45-503f90a1c4b7] CUDA freeMem 96687mb
[GPU-fc02a6f3-15da-232f-cd45-503f90a1c4b7] Compute Capability 12.0
[GPU-720c4c0f-9858-3779-be3e-9718bfd01653] CUDA totalMem 97249mb
[GPU-720c4c0f-9858-3779-be3e-9718bfd01653] CUDA freeMem 96687mb
[GPU-720c4c0f-9858-3779-be3e-9718bfd01653] Compute Capability 12.0
[GPU-ff1d1807-c731-089a-5711-759443a6a60d] CUDA totalMem 97249mb
[GPU-ff1d1807-c731-089a-5711-759443a6a60d] CUDA freeMem 96687mb
[GPU-ff1d1807-c731-089a-5711-759443a6a60d] Compute Capability 12.0
[GPU-e17849d5-ecca-44e0-88f1-482b72195ea4] CUDA totalMem 97249mb
[GPU-e17849d5-ecca-44e0-88f1-482b72195ea4] CUDA freeMem 96687mb
[GPU-e17849d5-ecca-44e0-88f1-482b72195ea4] Compute Capability 12.0
[GPU-5afaa004-f27b-fd5d-e4e4-a974bce8803c] CUDA totalMem 97249mb
[GPU-5afaa004-f27b-fd5d-e4e4-a974bce8803c] CUDA freeMem 96687mb
[GPU-5afaa004-f27b-fd5d-e4e4-a974bce8803c] Compute Capability 12.0
time=2026-02-18T16:09:57.159Z level=DEBUG source=amd_linux.go:423 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-dec32acf-20bb-2380-c2a0-291da4e33fba library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-39d80ed7-aa0f-4a25-f778-e1f2f2ffd9e9 library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-2d43c411-a53f-a2fb-0374-5636d1b750b4 library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-eb51bc4d-4f67-c734-23f7-5fab1ef2e885 library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-f474b2ec-e25b-2723-be82-cac7b99609fa library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-fc02a6f3-15da-232f-cd45-503f90a1c4b7 library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-720c4c0f-9858-3779-be3e-9718bfd01653 library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-ff1d1807-c731-089a-5711-759443a6a60d library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-e17849d5-ecca-44e0-88f1-482b72195ea4 library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
time=2026-02-18T16:09:57.159Z level=INFO source=types.go:131 msg="inference compute" id=GPU-5afaa004-f27b-fd5d-e4e4-a974bce8803c library=cuda variant=v13 compute=12.0 driver=13.0 name="NVIDIA RTX PRO 6000 Blackwell Server Edition" total="95.0 GiB" available="94.4 GiB"
@akhilec commented on GitHub (Feb 18, 2026):
Looks like it was able to detect now
@akhilec commented on GitHub (Feb 18, 2026):
root@hq-it-ai:~# docker exec -it 22c3299c409d ollama run gemma3:27b
pulling manifest
pulling e796792eba26: 100% ▕█████████████████████████████████████████████▏ 17 GB
pulling e0a42594d802: 100% ▕█████████████████████████████████████████████▏ 358 B
pulling dd084c7d92a3: 100% ▕█████████████████████████████████████████████▏ 8.4 KB
pulling 3116c5225075: 100% ▕█████████████████████████████████████████████▏ 77 B
pulling f838f048d368: 100% ▕█████████████████████████████████████████████▏ 490 B
verifying sha256 digest
writing manifest
success
How can I help you today? Just let me know what you're thinking, or if you just wanted to say hi,
that's perfectly fine too! 😊
I can:
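The process listing below presumably comes from running ollama ps inside the same container, e.g.:
docker exec -it 22c3299c409d ollama ps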
NAME ID SIZE PROCESSOR CONTEXT UNTIL
gemma3:27b a418f5838eaf 20 GB 100% GPU 4096 4 minutes from now
@akhilec commented on GitHub (Feb 18, 2026):
Thanks Rick, you are the best, and the troubleshooting approach you took to hash out the problem was just amazing.
@rick-github commented on GitHub (Feb 18, 2026):
No worries, glad we resolved it. Have fun with your 10x6000s, I'm a little bit envious.
@akhilec commented on GitHub (Feb 18, 2026):
Rick, one question: should I continue to be on 0.12.3, or can I try the latest?
@rick-github commented on GitHub (Feb 18, 2026):
Latest should now work.
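Upgrading presumably just means pulling the current image and recreating the container; a minimal sketch using the standard Docker run flags (the container name and volume are assumptions):
docker pull ollama/ollama
docker rm -f ollama
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama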