Mirror of https://github.com/ollama/ollama.git
Closed · opened 2026-05-03 11:02:53 -05:00 by GiteaMirror · 39 comments
Originally created by @Firebrand on GitHub (Dec 21, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1663
Originally assigned to: @dhiltgen on GitHub.
Hi folks,
It appears that Ollama is using CUDA properly, but my resource monitor shows near 0% GPU usage when running a prompt, and responses are extremely slow (15 minutes for a one-line response). Thanks!
Running on Ubuntu 22.04/WSL2/Windows 10 - GeForce GTX 1080 - 32GB RAM
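For anyone hitting the same symptom, a quick way to confirm whether the GPU is doing any work while a prompt runs (this assumes the standard NVIDIA tools are installed; under WSL2, run them inside the Linux environment):

    # refresh the nvidia-smi view every second while a prompt is running
    watch -n 1 nvidia-smi

    # or stream per-second utilization and memory samples
    nvidia-smi dmon -s um

If utilization stays near 0% while VRAM is allocated, that matches the behavior described in this thread.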
@EliMCosta commented on GitHub (Dec 21, 2023):
I already had this issue with the ollama container after some time without use: very slow responses and hallucinations. The fix for me was to remove the container and deploy a new one. There is a bug to investigate; I don't know if it's in ollama or in the surrounding infrastructure.
@donnadulcinea commented on GitHub (Dec 22, 2023):
I confirm what @EliMCosta said.
I have more or less the same configuration as yours, and I want to add that sometimes a "cold bootstrap" is sufficient.
What I mean is: you need to make a query to "wake up" ollama; after that query, responses are faster.
I'm working mainly through the API.
@Firebrand commented on GitHub (Dec 22, 2023):
Thanks @EliMCosta and @donnadulcinea
Not exactly sure what you mean by "remove and deploy a new container". I'm not using Docker or anything; I just installed Ollama in my Ubuntu WSL environment using "curl https://ollama.ai/install.sh | sh".
@jayvhaile commented on GitHub (Dec 23, 2023):
same issue here @Firebrand
@iukea1 commented on GitHub (Dec 27, 2023):
First off, I am just now having this issue also.
I was able to reproduce it running ollama both locally and in a container.
@Firebrand
Looks like you are running a local install, not a dockerized version of it.
@bagstoper commented on GitHub (Dec 29, 2023):
I did a fresh install of Ubuntu today and, after updating, ran the install command "curl https://ollama.ai/install.sh | sh". I can make queries and get responses, but they seem only as fast as on another machine where I had loaded the same model and which didn't have a GTX 4070.
I have the same output as the screenshots in the first post and ~8GB of memory used. Oddly, using nvtop I can see that the GPU spikes to 100% about once every 30 seconds.
@iukea1 commented on GitHub (Dec 29, 2023):
Issue resolved itself once I moved it to a completely separate container on a separate network
@Bizyak13 commented on GitHub (Jan 3, 2024):
I am having the same issue: nothing I do will make Ollama use the GPU. I either get errors that no GPU was detected (CUDA error 100), or only the CPU is ever utilised; no matter where I check, the GPU shows no resource usage.
System info:
Running on Ubuntu 22.04/WSL2/Windows 11 - GeForce RTX 3080 - 64GB RAM
Nvidia driver 546.33
WSL version: 2.0.9.0
Kernel version: 5.15.133.1-1
WSLg version: 1.0.59
MSRDC version: 1.2.4677
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22631.2861
Trying to run the dolphin-mixtral model
Here is everything I have tried, written out in hopes that someone can provide an answer to this issue.
I am also attaching Ollama logs from the working instance (no. 5) and the monitoring of the NVIDIA graphics card's resources.
When I watch the nvidia-smi command, there are no processes listed. When I check gpustat, there is no measurable change.
When I check the Task Manager on the host machine, there is also no change, apart from the CPU spiking.
And here are also the logs from the Ollama service where the GPU is detected and supposedly used.
I have not tried Docker yet, since the instructions are ambiguous and it is not clear where Docker itself should be installed. But I am not hopeful it will solve my issue.
So I guess what I am asking is: is this it? Or can the GPU be utilized more (or at all) in order to gain performance?
@mongolu commented on GitHub (Jan 3, 2024):
It can and it does
@siikdUde commented on GitHub (Jan 3, 2024):
@Bizyak13
Did you uninstall Ubuntu and WSL and then reinstall them before downloading oobabooga? If not, please do so and try my method again. It works perfectly with dolphin-mixtral. Also note that not all models work well with the GPU.
@Bizyak13 commented on GitHub (Jan 3, 2024):
@siikdUde I did, yes. I cleaned everything, then reinstalled everything, then installed oobabooga, and only after that installed Ollama.
I guess I can try whether other models perform differently. But from what I'm seeing, Ollama does initially load something into GPU memory, but then just doesn't use it.
@siikdUde commented on GitHub (Jan 3, 2024):
@Bizyak13
After playing around some more with this issue, it does seem like a hiccup or glitch can happen at random where the GPU stops being used for the currently loaded model and for any subsequent models loaded in the same terminal session. In my case, the GPU stopped being used when I downloaded gpustat, so that may have been a trigger that affected the terminal session. What I have found to fix this, or as a workaround, is to load a different model; the GPU will start working again. Then you can load the original model back and the GPU will still work.
Please try exiting the terminal, opening it again, and loading a different model, and see if that changes anything.
@ltomes commented on GitHub (Jan 4, 2024):
For what it's worth, I'm seeing similar behavior in the latest container release of ollama. Ollama believes it's offloading work to the GPU via CUDA (and I do see high VRAM usage), but GPU usage stays low and CPU usage high.
@draco1544 commented on GitHub (Jan 5, 2024):
I also have this problem; my GPU is only used at 5%.
@quanpinjie commented on GitHub (Jan 5, 2024):
Is this resolved? I have the same problem.
@bagstoper commented on GitHub (Jan 5, 2024):
There are some things to try in this thread, but I am not hopeful that they will solve the issue. Some have resolved it with specific install methods, using oobabooga as the way of getting the NVIDIA drivers installed. I read something about it maybe being CUDA-version related too. I have tried 12.2 and 12.3 with no luck; 12.1 is next on my list, which is what oobabooga installed, but I did that before I had the newest NVIDIA drivers for Ubuntu, and either that or apt upgrade put the newer version of CUDA on there.
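When comparing CUDA versions as above, note that the driver's supported CUDA version and the installed toolkit version are reported by different commands (assuming standard NVIDIA tooling):

    # driver version, plus the highest CUDA version the driver supports (header line)
    nvidia-smi

    # version of the locally installed CUDA toolkit, if one is present
    nvcc --version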
@Bizyak13 commented on GitHub (Jan 10, 2024):
Did some more poking around and also installed LM Studio, to see whether that would pick up the GPU. What I found out is that, apparently, my GPU (an RTX 3080 with 12GB of VRAM) is not enough for the model, as it only offloads 6/7 layers, which is not enough to get any significant use out of the GPU. In LM Studio, however, you can manually specify the layer count, and setting it to something like 30 will get the GPU going, but I think it also spills over into regular memory, which does not make things any faster.
I was not able to do the same with Ollama, as any time changes are made in WSL, GPU support fails and I get only CUDA error 100.
This is just conjecture at this point, but maybe it helps someone out.
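For what it's worth, Ollama does expose a counterpart to LM Studio's manual layer setting: the num_gpu option caps how many layers are offloaded. A minimal sketch against the REST API (the model name and layer count here are just examples):

    curl http://localhost:11434/api/generate -d '{
      "model": "dolphin-mixtral",
      "prompt": "Why is the sky blue?",
      "options": { "num_gpu": 30 }
    }'

The same parameter can be set in a Modelfile with PARAMETER num_gpu 30.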
@dhiltgen commented on GitHub (Jan 27, 2024):
@Bizyak13 we've made quite a few fixes to the CUDA integration over the past few weeks. Please give 0.1.22 a try and if you're still having problems, share the server log so we can see what's going wrong.
@ltomes commented on GitHub (Jan 27, 2024):
I see similar behavior on latest: super high memory usage, but lower power draw and low % usage.
Setup: I updated to the latest container, deleted all models, redownloaded, and ran a query.
I am trying to run mixtral:latest; maybe it's just too large for an A5000.
@dhiltgen commented on GitHub (Jan 27, 2024):
Server logs please.
https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
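On a standard Linux (non-container) install, the server runs under systemd, so the requested logs can usually be pulled with the command described in that troubleshooting doc:

    # follow the Ollama server log on a systemd-managed Linux install
    journalctl -u ollama -f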
@ltomes commented on GitHub (Jan 29, 2024):
When running ollama as a container, I do not see logs being generated in the ~/.ollama/logs/ directory (that is a mounted path; I checked from inside the container and from the host-mounted directory). The image tag being used is ollama/ollama:0.1.22. The original issue Firebrand described is in WSL, and I'm running Slackware; let me know if you would like me to make a new issue, and I will try to provide all the details of my setup so we can get detailed logs that point to a root cause.
@dhiltgen commented on GitHub (Jan 29, 2024):
@ltomes you raise a good point: the troubleshooting doc needs a section on containers. The logs go to stdout/stderr in the container, so you'd run docker logs <container-name>, or the equivalent for your container platform.
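As a concrete sketch of the above (the container name ollama is an assumption; substitute your own):

    # stream the server log from a running container, highlighting offload-related lines
    docker logs -f ollama 2>&1 | grep -iE 'offload|cuda'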
@ltomes commented on GitHub (Jan 29, 2024):
Here are logs around a query that appears to be CPU-bound, but it is a 24.6 GB GGUF model. Maybe I'm just VRAM-limited (24 GB, A5000), and that bottleneck is keeping CUDA core utilization low. I'm open to other models to use for testing to sort out what's going on!
2024:01:29 13-31-38-ollama.log
I can also make an MR tonight for container logging procedures so others like me (who didn't think very hard 🙃) can get logs to you faster.
@easp commented on GitHub (Jan 29, 2024):
@ltomes, it looks like only 2/3rds of the model is on the GPU. I'd expect GPU utilization to be low, because the GPU will spend most of its time waiting for the CPU to process the 1/3rd of the model that doesn't fit in VRAM.
If we assume the GPU can process its 2/3rds of the model in 1/10th the time it takes the CPU to process its 1/3rd, then the GPU will be ~90% idle and speeds will be much closer to CPU-only speeds than to GPU-only speeds.
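Making that arithmetic explicit, with CPU-only time per token normalized to T = 1 (the 1/3:2/3 split and the 10x ratio are the hypothetical numbers from the comment above, not measurements):

    awk 'BEGIN {
      cpu_part = 1/3                 # time the CPU spends on its 1/3 of the layers
      gpu_part = cpu_part / 10       # the GPU finishes its 2/3 in a tenth of that
      hybrid   = cpu_part + gpu_part # layers run in sequence, so times add per token
      gpu_only = gpu_part * 1.5      # the GPU at that rate running all of the layers
      printf "hybrid: %.2fT/token = %.1fx CPU-only (GPU-only would be %.0fx)\n", hybrid, 1/hybrid, 1/gpu_only
      printf "GPU idle: ~%.0f%%\n", 100 * (1 - gpu_part/hybrid)
    }'
    # hybrid: 0.37T/token = 2.7x CPU-only (GPU-only would be 20x)
    # GPU idle: ~91%

So under these assumptions the hybrid setup is only ~2.7x faster than CPU-only even though the GPU alone would be ~20x faster, which is exactly the mostly-idle-GPU picture described above.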
@ltomes commented on GitHub (Jan 29, 2024):
If I set OLLAMA_LLM_LIBRARY=cuda_v11, would you expect using this model to fail fast, or to run only on the GPU when it can manage it? With that set, I still see "CPU has AVX2/AVX" in the feature-detection INFO output, but maybe it won't be used. I will run a few queries to test it out.
@dhiltgen here's an MR for documentation: https://github.com/ollama/ollama/pull/2275
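For anyone else trying this: OLLAMA_LLM_LIBRARY is read by the server process, so it has to be set where ollama serve runs, not just in the shell issuing ollama run (a sketch; cuda_v11 is the value under discussion above):

    # force the CUDA v11 runner library when starting the server by hand
    OLLAMA_LLM_LIBRARY=cuda_v11 ollama serve

For a systemd-managed install, the equivalent is an Environment= line added via systemctl edit ollama.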
@mehdiataei commented on GitHub (Jan 30, 2024):
Same issue when using 2x RTX 6000 Ada gen.
@matjazbo commented on GitHub (Jan 31, 2024):
I also have this issue: GPU memory is allocated, but only the CPU is used for inference.
ollama.log
@easp commented on GitHub (Jan 31, 2024):
@matjazbo What's your system configuration and what models were you using?
It looks like you might be using WSL2. From what I can tell your last 3 models were Dolphin Mixtral, Phi-2 and Mixtral. Phi-2 looks like it ran entirely on GPU. The Mixtral-family models exceed the amount of available VRAM by about 3x. As a result, the majority of the model is running on CPU. In those circumstances the GPU will be mostly idle while the CPU will be using all of the physical cores (typically 1/2 the total thread or core count).
Ollama is behaving as expected.
@mehdiataei What model and quantization are you trying to run? You have plenty of VRAM, unless other software you're running has allocated a lot of CUDA memory. Can you share your ollama log?
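A quick way to check how a loaded model is split across CPU and GPU (note: the ps subcommand was added in Ollama releases that postdate this thread):

    # list loaded models; the PROCESSOR column shows e.g. "100% GPU" or "45%/55% CPU/GPU"
    ollama ps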
@matjazbo commented on GitHub (Feb 1, 2024):
@easp you might be correct, although when running Phi-2 I didn't see any GPU usage, neither in Task Manager nor in nvidia-smi. I'm using a 4070 with 12GB, which seems to be too small for dolphin-mixtral and mixtral, but since ollama allocated GPU VRAM, I was expecting it to use the GPU as well.
I'm upgrading my system with a 3090 soon and will then be able to test the other models.
@ltomes commented on GitHub (Feb 1, 2024):
@easp can you clarify, or point me to documentation or discussion of, the expected behavior if the three of us set OLLAMA_LLM_LIBRARY=cuda_v11? In that case, should we expect GPU-only use, a failure to load the model (in my case, with inadequate VRAM), or something else?
With a single A5000 I am seeing mixtral requests fall back to the CPU, which I was not expecting when explicitly setting the library to CUDA.
@mehdiataei commented on GitHub (Feb 1, 2024):
@easp
I am running fp16. I have two Ada GPUs (totalling 98+ GB of VRAM) and the Codellama model. I am getting less than 1 token/sec, and with my hardware that obviously doesn't make any sense. I am certain that although GPU memory is allocated, it is using the CPU.
Here is the log:
ollama.log
@penouc commented on GitHub (Feb 3, 2024):
This seems to be an issue with the new version. I tried using ollama 0.1.20 and found that the CPU's usage percentage could go over 100% without crashing.
@jakern commented on GitHub (Feb 11, 2024):
I was just troubleshooting this issue myself and found this thread. I'm on Linux, not Windows, but surprisingly, rebooting the system and restarting the container allowed it to use the GPU again.
@8bitaby commented on GitHub (Feb 13, 2024):
I'm having the same issue. While using Ollama with llama2, my GPU is not being used; only the CPU is. Has anyone found what the issue might be?
@dhiltgen commented on GitHub (Feb 15, 2024):
At present there is no mechanism to force exclusive GPU use, so the system will always attempt to load as much of the model as possible into the GPU; if it doesn't fit, it will load the remainder into system memory and partially use the CPU. This will often result in lower performance compared to pure GPU, as the GPU stalls waiting for the CPU to keep up; however, it should still be faster than running on the CPU alone. We don't currently have UX to expose details about this in the CLI, but may add that to verbose output in the future. Until then, you can check the server log for the line reporting how many layers were offloaded to the GPU.
If not all layers are loaded into the GPU, some performance impact will result, as the CPU has to carry part of the load. If there's enough difference in performance between the GPU and CPU in your system, and enough layers are on the CPU, the GPU will spend most of its compute time idle.
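As an illustration of the kind of line to look for (the exact wording and numbers vary by version; these values are made up), the loader's offload report looks something like:

    llm_load_tensors: offloaded 22/33 layers to GPU

Anything less than the full layer count means part of the model is running on the CPU.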
@ltomes commented on GitHub (Feb 15, 2024):
I will try to find some time this weekend to do some testing and post some logs of what I am seeing. I added a 3090 to my server, so I have ~48 GB available, which should keep things GPU-bound. I might try limiting the container to only two isolated cores or something to make the testing easier. What might be happening is that some requests properly use the GPU but the resources are not released, leaving subsequent requests CPU-bound; but it's likely not worth speculating. I will post some results here if I can reproduce what I said above.
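Pinning the container to two cores, as described, can be done with Docker's cpuset flag (the core indices and the rest of the run line are illustrative, based on the standard ollama/ollama container invocation):

    docker run --gpus=all --cpuset-cpus="0,1" -v ollama:/root/.ollama -p 11434:11434 ollama/ollama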
@dhiltgen commented on GitHub (Mar 13, 2024):
I don't believe this issue is tracking anything actionable at this point. If there are still any remaining questions/concerns please let me know.
@icemagno commented on GitHub (Dec 4, 2024):
Why was this thread closed? I have Ollama for Windows and an RTX 4060, and ollama keeps insisting on using the CPU and RAM. It is very disappointing, because I spent a fortune buying this GPU. Many have explained various things about PCIe, buses, RAM performance, etc. So... what is the point of having a GPU, then?
@dhiltgen commented on GitHub (Dec 4, 2024):
@icemagno please open a new issue describing your system and include the server logs so we can assist.