Mirror of https://github.com/ollama/ollama.git (synced 2026-03-09 03:12:11 -05:00)
A lot of CUDA errors, "CUDA error: out of memory" and "SIGSEGV: segmentation violation", even though VRAM is still available #5382
Closed · opened 2025-11-12 12:54:40 -06:00 by GiteaMirror · 1 comment
Originally created by @oussemah on GitHub (Jan 10, 2025).
What is the issue?
Ollama "crashes" with a CUDA segmentation violation, apparently because it over-estimates the amount of memory it can use. This happens almost systematically when I use a 32b q4_k_m or q5_k_l model with 32k context (it happened with qwq and qwen-coder), and it also happened when I wanted to use glm4-9b with a 100K context or more.
It happened both when VRAM was fully used and when I still had around 20% of VRAM free.
I have never seen the issue happen when some part of the model is offloaded to the CPU.
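Given that, one stopgap is to force partial CPU offload by capping the number of GPU-offloaded layers per request. Below is a minimal sketch against the local ollama HTTP API; the model tag and the num_gpu value are placeholders I have not tuned, not values taken from the logs:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Cap GPU offload below the full 65 layers so part of the model
	// stays on the CPU; 60 is an arbitrary placeholder value.
	body, _ := json.Marshal(map[string]any{
		"model":  "qwq", // placeholder model tag
		"prompt": "hello",
		"stream": false,
		"options": map[string]any{
			"num_gpu": 60,
			"num_ctx": 32768, // the 32k context from this report
		},
	})
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}

The OLLAMA_GPU_OVERHEAD setting, which shows up as 0 in the server config below, is another knob that reserves extra VRAM per GPU, though I have not checked whether it avoids this particular crash.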
OS: Ubuntu 24.04.1 LTS
CPU: Intel(R) Core(TM) i5-14500
GPU: RTX 3090 on Pcie16 + RTX 4060 ti 16GB on Pcie8
00:01.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x16 Controller #1 (rev 02)
00:06.0 PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x4 Controller #0 (rev 02)
01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
05:00.0 VGA compatible controller: NVIDIA Corporation AD106 [GeForce RTX 4060 Ti] (rev a1)
Server log (I tried to highlight the interesting parts):
Jan 10 16:30:57 node1 systemd[1]: Started ollama.service - Ollama Service.
Jan 10 16:30:57 node1 ollama[119046]: 2025/01/10 16:30:57 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
........
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=routes.go:1340 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.584+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.585+01:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.585+01:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.585+01:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/local/lib/ollama/libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.592+01:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths="[/usr/lib/i386-linux-gnu/libcuda.so.560.35.05 /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05]"
Jan 10 16:30:57 node1 ollama[119046]: initializing /usr/lib/i386-linux-gnu/libcuda.so.560.35.05
Jan 10 16:30:57 node1 ollama[119046]: library /usr/lib/i386-linux-gnu/libcuda.so.560.35.05 load err: /usr/lib/i386-linux-gnu/libcuda.so.560.35.05: wrong ELF class: ELFCLASS32
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.593+01:00 level=DEBUG source=gpu.go:628 msg="skipping 32bit library" library=/usr/lib/i386-linux-gnu/libcuda.so.560.35.05
Jan 10 16:30:57 node1 ollama[119046]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuInit - 0x72e616060800
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDriverGetVersion - 0x72e616060820
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetCount - 0x72e616060860
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGet - 0x72e616060840
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetAttribute - 0x72e616060940
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetUuid - 0x72e6160608a0
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuDeviceGetName - 0x72e616060880
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuCtxCreate_v3 - 0x72e61606b020
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuMemGetInfo_v2 - 0x72e6160764e0
Jan 10 16:30:57 node1 ollama[119046]: dlsym: cuCtxDestroy - 0x72e6160d11b0
Jan 10 16:30:57 node1 ollama[119046]: calling cuInit
Jan 10 16:30:57 node1 ollama[119046]: calling cuDriverGetVersion
Jan 10 16:30:57 node1 ollama[119046]: raw version 0x2f1c
Jan 10 16:30:57 node1 ollama[119046]: CUDA driver version: 12.6
Jan 10 16:30:57 node1 ollama[119046]: calling cuDeviceGetCount
Jan 10 16:30:57 node1 ollama[119046]: device count 2
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.672+01:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=2 library=/usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
Jan 10 16:30:57 node1 ollama[119046]: [GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30] CUDA totalMem 24154 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30] CUDA freeMem 23877 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30] Compute Capability 8.6
Jan 10 16:30:57 node1 ollama[119046]: [GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f] CUDA totalMem 15978 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f] CUDA freeMem 15837 mb
Jan 10 16:30:57 node1 ollama[119046]: [GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f] Compute Capability 8.9
Jan 10 16:30:57 node1 ollama[119046]: time=2025-01-10T16:30:57.898+01:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
Jan 10 16:30:57 node1 ollama[119046]: releasing cuda driver library
.........
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=INFO source=server.go:104 msg="system memory" total="46.8 GiB" free="40.2 GiB" free_swap="62.0 GiB"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=2 available="[23.3 GiB 15.5 GiB]"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=65 layers.offload=65 layers.split=40,25 memory.available="[23.3 GiB 15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="37.6 GiB" memory.required.partial="37.6 GiB" memory.required.kv="8.0 GiB" memory.required.allocations="[22.3 GiB 15.2 GiB]" memory.weights.total="28.6 GiB" memory.weights.repeating="27.8 GiB" memory.weights.nonrepeating="788.9 MiB" memory.graph.full="3.2 GiB" memory.graph.partial="3.2 GiB"
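Reading the scheduler's estimate above against what it says is available, the headroom per GPU is:

23.3 GiB − 22.3 GiB ≈ 1.0 GiB on GPU 0
15.5 GiB − 15.2 GiB ≈ 0.3 GiB on GPU 1

so any runtime allocation that goes beyond the estimate on the second GPU has only a few hundred MiB of margin, which fits my impression that ollama over-estimates the memory it can use.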
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.528+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a --ctx-size 32768 --batch-size 512 --n-gpu-layers 65 --verbose --threads 6 --parallel 1 --tensor-split 40,25 --port 46519"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/home/ous/.pyenv/shims:/home/ous/.pyenv/bin:/home/ous/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/ous/.local/bin:/home/ous/.local/bin:/home/ous/.local/anaconda3/bin LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama:/usr/local/lib/ollama/runners/cuda_v12_avx CUDA_VISIBLE_DEVICES=GPU-0e3f2440-290b-0b48-4d15-8b43b8638f30,GPU-08cf14dc-3d8a-c656-c643-4ba2eb37b51f]"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.529+01:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.530+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.580+01:00 level=INFO source=runner.go:945 msg="starting go runner"
Jan 10 16:30:58 node1 ollama[119046]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Jan 10 16:30:58 node1 ollama[119046]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 10 16:30:58 node1 ollama[119046]: ggml_cuda_init: found 2 CUDA devices:
Jan 10 16:30:58 node1 ollama[119046]: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Jan 10 16:30:58 node1 ollama[119046]: Device 1: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.659+01:00 level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=6
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.659+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:46519"
Jan 10 16:30:58 node1 ollama[119046]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23877 MiB free
Jan 10 16:30:58 node1 ollama[119046]: llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 4060 Ti) - 15837 MiB free
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: loaded meta data with 38 key-value pairs and 771 tensors from /home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a (version GGUF V3 (latest))
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 0: general.architecture str = qwen2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 1: general.type str = model
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 2: general.name str = QwQ 32B Preview
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 3: general.finetune str = Preview
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 4: general.basename str = QwQ
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 5: general.size_label str = 32B
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 6: general.license str = apache-2.0
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/QwQ-32B-P...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B Instruct
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 22: general.file_type u32 = 17
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
Jan 10 16:30:58 node1 ollama[119046]: time=2025-01-10T16:30:58.780+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/QwQ-32B-Preview-GGUF/QwQ-...
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 448
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type f32: 321 tensors
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q8_0: 2 tensors
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q5_K: 384 tensors
Jan 10 16:30:58 node1 ollama[119046]: llama_model_loader: - type q6_K: 64 tensors
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151648 '<|box_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151646 '<|object_ref_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151649 '<|box_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151647 '<|object_ref_end|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151644 '<|im_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG
Jan 10 16:30:58 node1 ollama[119046]: llm_load_vocab: special tokens cache size = 22
Jan 10 16:30:59 node1 ollama[119046]: llm_load_vocab: token to piece cache size = 0.9310 MB
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: format = GGUF V3 (latest)
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: arch = qwen2
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: vocab type = BPE
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_vocab = 152064
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_merges = 151387
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: vocab_only = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ctx_train = 32768
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd = 5120
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_layer = 64
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_head = 40
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_head_kv = 8
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_rot = 128
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_swa = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_head_k = 128
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_head_v = 128
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_gqa = 5
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_k_gqa = 1024
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_embd_v_gqa = 1024
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_norm_eps = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: f_logit_scale = 0.0e+00
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ff = 27648
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_expert = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_expert_used = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: causal attn = 1
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: pooling type = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope type = 2
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope scaling = linear
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: freq_base_train = 1000000.0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: freq_scale_train = 1
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: n_ctx_orig_yarn = 32768
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: rope_finetuned = unknown
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_conv = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_inner = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_d_state = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_dt_rank = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model type = 32B
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model ftype = Q5_K - Medium
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model params = 32.76 B
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: model size = 22.11 GiB (5.80 BPW)
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: general.name = QwQ 32B Preview
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151645 '<|im_end|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
Jan 10 16:30:59 node1 ollama[119046]: llm_load_print_meta: max token length = 256
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: tensor 'token_embd.weight' (q8_0) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloading 64 repeating layers to GPU
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloading output layer to GPU
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: offloaded 65/65 layers to GPU
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CPU_Mapped model buffer size = 788.91 MiB
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CUDA0 model buffer size = 13124.84 MiB
Jan 10 16:30:59 node1 ollama[119046]: llm_load_tensors: CUDA1 model buffer size = 8723.33 MiB
Jan 10 16:30:59 node1 ollama[119046]: time=2025-01-10T16:30:59.784+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.18"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.034+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.36"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.285+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.54"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.536+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.66"
Jan 10 16:31:00 node1 ollama[119046]: time=2025-01-10T16:31:00.787+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.72"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.038+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.79"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.288+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.86"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.539+01:00 level=DEBUG source=server.go:600 msg="model load progress 0.93"
Jan 10 16:31:01 node1 ollama[119046]: time=2025-01-10T16:31:01.790+01:00 level=DEBUG source=server.go:600 msg="model load progress 1.00"
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_seq_max = 1
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ctx = 32768
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ctx_per_seq = 32768
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_batch = 512
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: n_ubatch = 512
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: flash_attn = 0
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: freq_base = 1000000.0
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: freq_scale = 1
Jan 10 16:31:02 node1 ollama[119046]: llama_kv_cache_init: CUDA0 KV buffer size = 5120.00 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_kv_cache_init: CUDA1 KV buffer size = 3072.00 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: KV self size = 8192.00 MiB, K (f16): 4096.00 MiB, V (f16): 4096.00 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA_Host output buffer size = 0.60 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA0 compute buffer size = 2896.01 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA1 compute buffer size = 2896.02 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: CUDA_Host compute buffer size = 266.02 MiB
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: graph nodes = 2246
Jan 10 16:31:02 node1 ollama[119046]: llama_new_context_with_model: graph splits = 3
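Summing what was actually allocated on the second GPU at this point (CUDA1): 8723.33 MiB model buffer + 3072.00 MiB KV buffer + 2896.02 MiB compute buffer ≈ 14691 MiB out of the 15837 MiB reported free, leaving roughly 1.1 GiB. With pipeline parallelism enabled (n_copies=4) the compute pool can grow further at decode time, and the out-of-memory below indeed fires in cuMemCreate on device 1, so my guess (unverified) is that this runtime growth exceeded the remaining margin.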
Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.292+01:00 level=INFO source=server.go:594 msg="llama runner started in 3.76 seconds"
Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.292+01:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a
Jan 10 16:31:02 node1 ollama[119046]: time=2025-01-10T16:31:02.295+01:00 level=DEBUG source=server.go:967 msg="new runner detected, loading model for cgo tokenization"
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: loaded meta data with 38 key-value pairs and 771 tensors from /home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a (version GGUF V3 (latest))
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 0: general.architecture str = qwen2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 1: general.type str = model
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 2: general.name str = QwQ 32B Preview
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 3: general.finetune str = Preview
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 4: general.basename str = QwQ
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 5: general.size_label str = 32B
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 6: general.license str = apache-2.0
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/QwQ-32B-P...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 8: general.base_model.count u32 = 1
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 32B Instruct
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 14: qwen2.block_count u32 = 64
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 16: qwen2.embedding_length u32 = 5120
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 27648
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 40
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 8
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000010
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 22: general.file_type u32 = 17
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 33: general.quantization_version u32 = 2
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 34: quantize.imatrix.file str = /models_out/QwQ-32B-Preview-GGUF/QwQ-...
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 35: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 36: quantize.imatrix.entries_count i32 = 448
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - kv 37: quantize.imatrix.chunks_count i32 = 128
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type f32: 321 tensors
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q8_0: 2 tensors
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q5_K: 384 tensors
Jan 10 16:31:02 node1 ollama[119046]: llama_model_loader: - type q6_K: 64 tensors
Jan 10 16:31:02 node1 ollama[119046]: llm_load_vocab: special tokens cache size = 22
Jan 10 16:31:02 node1 ollama[119046]: llm_load_vocab: token to piece cache size = 0.9310 MB
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: format = GGUF V3 (latest)
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: arch = qwen2
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: vocab type = BPE
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: n_vocab = 152064
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: n_merges = 151387
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: vocab_only = 1
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model type = ?B
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model ftype = all F32
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model params = 32.76 B
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: model size = 22.11 GiB (5.80 BPW)
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: general.name = QwQ 32B Preview
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOS token = 151645 '<|im_end|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOT token = 151645 '<|im_end|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: LF token = 148848 'ÄĬ'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151645 '<|im_end|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151662 '<|fim_pad|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151663 '<|repo_name|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: EOG token = 151664 '<|file_sep|>'
Jan 10 16:31:02 node1 ollama[119046]: llm_load_print_meta: max token length = 256
Jan 10 16:31:02 node1 ollama[119046]: llama_model_load: vocab only - skipping tensors
Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.286+01:00 level=DEBUG source=prompt.go:77 msg="truncating input messages which exceed context length" truncated=60
Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.286+01:00 level=DEBUG source=routes.go:1542 msg="chat request" images=0 prompt="<|im_start|>system\nYou are Cline, .REMOVED ORIGINAL PROMPT....</environment_details><|im_end|>\n<|im_start|>assistant\n"
Jan 10 16:31:08 node1 ollama[119046]: time=2025-01-10T16:31:08.417+01:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=32752 used=0 remaining=32752
Jan 10 16:31:27 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:27 | 200 | 14.762µs | 127.0.0.1 | HEAD "/"
Jan 10 16:31:27 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:27 | 200 | 72.782µs | 127.0.0.1 | GET "/api/ps"
Jan 10 16:31:55 node1 ollama[119046]: CUDA error: out of memory
Jan 10 16:31:55 node1 ollama[119046]: current device: 1, in function alloc at llama/ggml-cuda/ggml-cuda.cu:370
Jan 10 16:31:55 node1 ollama[119046]: cuMemCreate(&handle, reserve_size, &prop, 0)
Jan 10 16:31:55 node1 ollama[119046]: llama/ggml-cuda/ggml-cuda.cu:96: CUDA error
Jan 10 16:31:55 node1 ollama[119046]: SIGSEGV: segmentation violation
Jan 10 16:31:55 node1 ollama[119046]: PC=0x7cfdb17efe57 m=0 sigcode=1 addr=0x20b203fd0
Jan 10 16:31:55 node1 ollama[119046]: signal arrived during cgo execution
Jan 10 16:31:55 node1 ollama[119046]: goroutine 19 gp=0xc0001b01c0 m=0 mp=0x5cfc8ff8e1a0 [syscall]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.cgocall(0x5cfc8f9a87d0, 0xc000081b90)
Jan 10 16:31:55 node1 ollama[119046]: runtime/cgocall.go:167 +0x4b fp=0xc000081b68 sp=0xc000081b30 pc=0x5cfc8f75cb2b
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7cfd30c5b730, {0x200, 0x7cfd30e0aa50, 0x0, 0x0, 0x7cfd30e0b260, 0x7cfd30e0ba70, 0x7cfd30c7f6c0, 0x7cfd30c5f9e0})
Jan 10 16:31:55 node1 ollama[119046]: _cgo_gotypes.go:556 +0x4f fp=0xc000081b90 sp=0xc000081b68 pc=0x5cfc8f806baf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5cfc8f9a3f0b?, 0x7cfd30c5b730?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc000081c80 sp=0xc000081b90 pc=0x5cfc8f809475
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode(0xc000081d70?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc000081cc8 sp=0xc000081c80 pc=0x5cfc8f8092f3
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019a000, 0xc0000a0360, 0xc000081f20)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc000081ee0 sp=0xc000081cc8 pc=0x5cfc8f9a2bdf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019a000, {0x5cfc8fda1de0, 0xc0001980a0})
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc000081fb8 sp=0xc000081ee0 pc=0x5cfc8f9a2615
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute.gowrap2()
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc000081fe0 sp=0xc000081fb8 pc=0x5cfc8f9a7628
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc000081fe8 sp=0xc000081fe0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0xde5
Jan 10 16:31:55 node1 ollama[119046]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc0000e77b0 sp=0xc0000e7790 pc=0x5cfc8f76292e
Jan 10 16:31:55 node1 ollama[119046]: runtime.netpollblock(0xc000029800?, 0x8f6fb186?, 0xfc?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/netpoll.go:575 +0xf7 fp=0xc0000e77e8 sp=0xc0000e77b0 pc=0x5cfc8f727697
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.runtime_pollWait(0x7cfdb0cc6650, 0x72)
Jan 10 16:31:55 node1 ollama[119046]: runtime/netpoll.go:351 +0x85 fp=0xc0000e7808 sp=0xc0000e77e8 pc=0x5cfc8f761c25
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*pollDesc).wait(0xc000194180?, 0x900000036?, 0x0)
Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0000e7830 sp=0xc0000e7808 pc=0x5cfc8f7b7a67
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*pollDesc).waitRead(...)
Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_poll_runtime.go:89
Jan 10 16:31:55 node1 ollama[119046]: internal/poll.(*FD).Accept(0xc000194180)
Jan 10 16:31:55 node1 ollama[119046]: internal/poll/fd_unix.go:620 +0x295 fp=0xc0000e78d8 sp=0xc0000e7830 pc=0x5cfc8f7b8fd5
Jan 10 16:31:55 node1 ollama[119046]: net.(*netFD).accept(0xc000194180)
Jan 10 16:31:55 node1 ollama[119046]: net/fd_unix.go:172 +0x29 fp=0xc0000e7990 sp=0xc0000e78d8 pc=0x5cfc8f831969
Jan 10 16:31:55 node1 ollama[119046]: net.(*TCPListener).accept(0xc0001b2040)
Jan 10 16:31:55 node1 ollama[119046]: net/tcpsock_posix.go:159 +0x1e fp=0xc0000e79e0 sp=0xc0000e7990 pc=0x5cfc8f841fbe
Jan 10 16:31:55 node1 ollama[119046]: net.(*TCPListener).Accept(0xc0001b2040)
Jan 10 16:31:55 node1 ollama[119046]: net/tcpsock.go:372 +0x30 fp=0xc0000e7a10 sp=0xc0000e79e0 pc=0x5cfc8f8412f0
Jan 10 16:31:55 node1 ollama[119046]: net/http.(*onceCloseListener).Accept(0xc00019a3f0?)
Jan 10 16:31:55 node1 ollama[119046]: :1 +0x24 fp=0xc0000e7a28 sp=0xc0000e7a10 pc=0x5cfc8f97fec4
Jan 10 16:31:55 node1 ollama[119046]: net/http.(*Server).Serve(0xc0001904b0, {0x5cfc8fda17f8, 0xc0001b2040})
Jan 10 16:31:55 node1 ollama[119046]: net/http/server.go:3330 +0x30c fp=0xc0000e7b58 sp=0xc0000e7a28 pc=0x5cfc8f971c0c
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute({0xc000016150?, 0x5cfc8f76a1bc?, 0x0?})
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:1005 +0x11a9 fp=0xc0000e7ef8 sp=0xc0000e7b58 pc=0x5cfc8f9a7309
Jan 10 16:31:55 node1 ollama[119046]: main.main()
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/cmd/runner/main.go:11 +0x54 fp=0xc0000e7f50 sp=0xc0000e7ef8 pc=0x5cfc8f9a8294
Jan 10 16:31:55 node1 ollama[119046]: runtime.main()
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:272 +0x29d fp=0xc0000e7fe0 sp=0xc0000e7f50 pc=0x5cfc8f72ec7d
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0000e7fe8 sp=0xc0000e7fe0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc00006efa8 sp=0xc00006ef88 pc=0x5cfc8f76292e
Jan 10 16:31:55 node1 ollama[119046]: runtime.goparkunlock(...)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:430
Jan 10 16:31:55 node1 ollama[119046]: runtime.forcegchelper()
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:337 +0xb8 fp=0xc00006efe0 sp=0xc00006efa8 pc=0x5cfc8f72efb8
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
------ removed some duplicate traces -----------
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a17e8 sp=0xc0004a17e0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: created by runtime.gcBgMarkStartWorkers in goroutine 26
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x105
Jan 10 16:31:55 node1 ollama[119046]: goroutine 67 gp=0xc000278540 m=nil [GC worker (idle)]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.gopark(0x97e81a03181?, 0x1?, 0xfc?, 0x38?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: runtime/proc.go:424 +0xce fp=0xc0004a1f38 sp=0xc0004a1f18 pc=0x5cfc8f76292e
Jan 10 16:31:55 node1 ollama[119046]: runtime.gcBgMarkWorker(0xc000022700)
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1412 +0xe9 fp=0xc0004a1fc8 sp=0xc0004a1f38 pc=0x5cfc8f710209
Jan 10 16:31:55 node1 ollama[119046]: runtime.gcBgMarkStartWorkers.gowrap1()
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x25 fp=0xc0004a1fe0 sp=0xc0004a1fc8 pc=0x5cfc8f7100e5
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
Jan 10 16:31:55 node1 ollama[119046]: runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a1fe8 sp=0xc0004a1fe0 pc=0x5cfc8f76a561
Jan 10 16:31:55 node1 ollama[119046]: created by runtime.gcBgMarkStartWorkers in goroutine 26
Jan 10 16:31:55 node1 ollama[119046]: runtime/mgc.go:1328 +0x105
Jan 10 16:31:55 node1 ollama[119046]: rax 0x20b203fd0
Jan 10 16:31:55 node1 ollama[119046]: rbx 0x7cfd303b4130
Jan 10 16:31:55 node1 ollama[119046]: rcx 0xff4
Jan 10 16:31:55 node1 ollama[119046]: rdx 0x7cfd30006a50
Jan 10 16:31:55 node1 ollama[119046]: rdi 0x7cfd30006a60
Jan 10 16:31:55 node1 ollama[119046]: rsi 0x0
Jan 10 16:31:55 node1 ollama[119046]: rbp 0x7fff595f5fd0
Jan 10 16:31:55 node1 ollama[119046]: rsp 0x7fff595f5fb0
Jan 10 16:31:55 node1 ollama[119046]: r8 0x0
Jan 10 16:31:55 node1 ollama[119046]: r9 0x0
Jan 10 16:31:55 node1 ollama[119046]: r10 0x0
Jan 10 16:31:55 node1 ollama[119046]: r11 0x246
Jan 10 16:31:55 node1 ollama[119046]: r12 0x7cf944007060
Jan 10 16:31:55 node1 ollama[119046]: r13 0x7cfd30006a60
Jan 10 16:31:55 node1 ollama[119046]: r14 0x0
Jan 10 16:31:55 node1 ollama[119046]: r15 0x7cfdfd063d50
Jan 10 16:31:55 node1 ollama[119046]: rip 0x7cfdb17efe57
Jan 10 16:31:55 node1 ollama[119046]: rflags 0x10297
Jan 10 16:31:55 node1 ollama[119046]: cs 0x33
Jan 10 16:31:55 node1 ollama[119046]: fs 0x0
Jan 10 16:31:55 node1 ollama[119046]: gs 0x0
Jan 10 16:31:55 node1 ollama[119046]: SIGABRT: abort
Jan 10 16:31:55 node1 ollama[119046]: PC=0x7cfd8b69eb1c m=0 sigcode=18446744073709551610
Jan 10 16:31:55 node1 ollama[119046]: signal arrived during cgo execution
Jan 10 16:31:55 node1 ollama[119046]: goroutine 19 gp=0xc0001b01c0 m=0 mp=0x5cfc8ff8e1a0 [syscall]:
Jan 10 16:31:55 node1 ollama[119046]: runtime.cgocall(0x5cfc8f9a87d0, 0xc000081b90)
Jan 10 16:31:55 node1 ollama[119046]: runtime/cgocall.go:167 +0x4b fp=0xc000081b68 sp=0xc000081b30 pc=0x5cfc8f75cb2b
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7cfd30c5b730, {0x200, 0x7cfd30e0aa50, 0x0, 0x0, 0x7cfd30e0b260, 0x7cfd30e0ba70, 0x7cfd30c7f6c0, 0x7cfd30c5f9e0})
Jan 10 16:31:55 node1 ollama[119046]: _cgo_gotypes.go:556 +0x4f fp=0xc000081b90 sp=0xc000081b68 pc=0x5cfc8f806baf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5cfc8f9a3f0b?, 0x7cfd30c5b730?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc000081c80 sp=0xc000081b90 pc=0x5cfc8f809475
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama.(*Context).Decode(0xc000081d70?, 0x0?)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc000081cc8 sp=0xc000081c80 pc=0x5cfc8f8092f3
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019a000, 0xc0000a0360, 0xc000081f20)
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc000081ee0 sp=0xc000081cc8 pc=0x5cfc8f9a2bdf
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019a000, {0x5cfc8fda1de0, 0xc0001980a0})
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc000081fb8 sp=0xc000081ee0 pc=0x5cfc8f9a2615
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner.Execute.gowrap2()
Jan 10 16:31:55 node1 ollama[119046]: github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc000081fe0 sp=0xc000081fb8 pc=0x5cfc8f9a7628
Jan 10 16:31:55 node1 ollama[119046]: runtime.goexit({})
------ removed duplicate traces ------------
Jan 10 16:31:55 node1 ollama[119046]: [GIN] 2025/01/10 - 16:31:55 | 200 | 57.936292517s | 127.0.0.1 | POST "/v1/chat/completions"
Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:466 msg="context for request finished"
Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a duration=5m0s
Jan 10 16:31:55 node1 ollama[119046]: time=2025-01-10T16:31:55.983+01:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/ollama/.ollama/models/blobs/sha256-5c5975fb16bebb4e77f71a0ac616f60b680412611f6503e59f76de4393fc2e6a refCount=0
Jan 10 16:31:56 node1 ollama[119046]: time=2025-01-10T16:31:56.096+01:00 level=DEBUG source=server.go:416 msg="llama runner terminated" error="exit status 2"
OS: Linux
GPU: Nvidia
CPU: Intel
Ollama version: 0.5.4
@rick-github commented on GitHub (Jan 10, 2025):
Device 1 had 15.5G free and ollama wants to use 15.2G. Some temporary allocation during inference exhausted VRAM and the runner OOMed. The following mitigations are possible:
- Set OLLAMA_GPU_OVERHEAD to give llama.cpp a buffer to grow into (eg, OLLAMA_GPU_OVERHEAD=536870912 to reserve 512M).
- Set OLLAMA_FLASH_ATTENTION=1 in the server environment. Flash attention is a more efficient use of memory and may reduce memory pressure.
- Set num_gpu to 60 to offload fewer layers to the GPU.
- Set GGML_CUDA_ENABLE_UNIFIED_MEMORY=1. This will allow the GPU to offload to CPU memory if VRAM is exhausted. This is only useful for small amounts of memory as there is a performance penalty. However, in the case where the goal is to reduce OOMs, the amount offloaded will be small and the impact minimal.
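As a minimal sketch, assuming a standard systemd-managed install where the service is named ollama.service (adjust the unit name to your setup), the three server-side environment variables above could be applied like this:

```sh
# Assumes the stock ollama.service systemd unit on Linux.
# Open a drop-in override for the service:
sudo systemctl edit ollama.service

# In the editor, add the following, then save and exit:
#   [Service]
#   Environment="OLLAMA_GPU_OVERHEAD=536870912"
#   Environment="OLLAMA_FLASH_ATTENTION=1"
#   Environment="GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"

# Restart so the runner is relaunched with the new environment:
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```

num_gpu, by contrast, is a per-request model option rather than a server environment variable. One way to set it is in the options of a native API request (the model name below is a placeholder, not the model from this issue):

```sh
# "llama3.1" is a placeholder model name for illustration.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "hello",
  "options": { "num_gpu": 60 }
}'
```

It can also be set interactively with /set parameter num_gpu 60 inside an ollama run session.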