[GH-ISSUE #7896] Installing bolt.new and qwen2.5-coder:7b locally (error cudaMalloc failed: out of memory) #67109

Closed
opened 2026-05-04 09:29:35 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @LieLust on GitHub (Nov 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7896

What is the issue?

Title: Issue with installing bolt.new and qwen2.5-coder:7b locally (error cudaMalloc failed: out of memory)

Description:

I am trying to install bolt.new and qwen2.5-coder:7b locally, but I get the following error:
{"error":"llama runner process has terminated: cudaMalloc failed: out of memory"}.

The installation of qwen2.5-coder:7b fails with this memory error during execution.

Environment:

  • Operating System: Windows
  • Git version: 2.47.1.windows.1
  • Node version: v22.11.0
  • pnpm version: 9.14.4
  • Ollama version: 0.4.6
  • CPU: AMD Ryzen 7 7800X3D
  • GPU: AMD Radeon RX 7900 XTX (24GB VRAM)
  • RAM: 32 GB

Context and steps followed:

  1. I followed the provided installation guide: Google Document Guide (https://docs.google.com/document/d/19UNRP1c6ulDS_X7Ig7mRTI_EaT0xcvgimnfOJDKm7ig/edit?tab=t.0).
  2. I attempted to install qwen2.5-coder:7b by following the steps, but the following error occurred: {"error":"llama runner process has terminated: cudaMalloc failed: out of memory"} (a minimal repro sketch follows this list)
  3. I suspect this error is related to GPU memory management, but since my AMD Radeon RX 7900 XTX has 24GB of VRAM, it doesn't seem to be a capacity issue.
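For reference, step 2 can be reproduced outside bolt.new with the standard Ollama CLI (a minimal sketch; the prompt string is only an example):

    import subprocess

    # Pull the model, then send one prompt; the OOM happens when the model loads.
    subprocess.run(["ollama", "pull", "qwen2.5-coder:7b"], check=True)
    subprocess.run(["ollama", "run", "qwen2.5-coder:7b", "Say hello"], check=True)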

Error details:

  • The cudaMalloc failed: out of memory error seems to indicate an issue with memory allocation on the GPU.
  • Despite having sufficient VRAM resources, the error persists during execution (a back-of-envelope sketch of the allocation arithmetic follows this list).
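To see why 24 GB can still run out, the KV-cache size can be cross-checked against the server log posted later in this thread (a back-of-envelope sketch in Python; n_layer, n_ctx, and n_embd_k_gqa are taken from that log, and an f16 cache is assumed):

    # KV cache bytes = 2 (K and V) * n_layer * n_ctx * n_embd_k_gqa * 2 bytes (f16)
    n_layer, n_ctx, n_embd_kv = 28, 131072, 512
    kv_mib = 2 * n_layer * n_ctx * n_embd_kv * 2 / 2**20
    print(kv_mib)  # 7168.0 -> matches "KV self size = 7168.00 MiB" in the log

On top of the ~4.1 GiB of weights, that 7 GiB cache plus a ~7.3 GiB compute graph buffer is what a 131072-token context costs, even for a 4.36 GiB model file.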

Request:

  • Are there any specific steps or configurations to resolve this memory issue with qwen2.5-coder:7b? (A hedged configuration sketch follows this list.)
  • Is this a known issue for AMD graphics cards, and are there any recommended solutions or workarounds?
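One configuration worth trying (a sketch under stated assumptions, not official guidance): start the server with a single parallel slot so the effective context is not multiplied by four. OLLAMA_NUM_PARALLEL is the variable visible in the server-config dump later in this thread; launching via "ollama serve" assumes no other Ollama instance is already running.

    import os, subprocess

    # Start the Ollama server with one parallel slot instead of the default four.
    env = dict(os.environ, OLLAMA_NUM_PARALLEL="1")
    subprocess.run(["ollama", "serve"], env=env)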

Thank you for your help and suggestions!

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.4.6

GiteaMirror added the bug label 2026-05-04 09:29:35 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 30, 2024):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.
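On Windows the linked guide points at server.log under %LOCALAPPDATA%\Ollama; a small sketch to dump the tail of it (the path is the documented default and may differ on custom installs):

    import os

    # Print the last 50 lines of the Ollama server log (default Windows location).
    log_path = os.path.expandvars(r"%LOCALAPPDATA%\Ollama\server.log")
    with open(log_path, encoding="utf-8", errors="replace") as f:
        print("".join(f.readlines()[-50:]))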

Author
Owner

@LieLust commented on GitHub (Nov 30, 2024):

2024/11/30 20:05:36 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\Max\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-30T20:05:36.277+01:00 level=INFO source=images.go:753 msg="total blobs: 8"
time=2024-11-30T20:05:36.277+01:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-11-30T20:05:36.278+01:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6)"
time=2024-11-30T20:05:36.278+01:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm]"
time=2024-11-30T20:05:36.278+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-30T20:05:36.278+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-11-30T20:05:36.278+01:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2024-11-30T20:05:36.891+01:00 level=INFO source=amd_windows.go:127 msg="unsupported Radeon iGPU detected skipping" id=1 total="11.9 GiB"
time=2024-11-30T20:05:36.892+01:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=rocm variant="" compute=gfx1100 driver=6.2 name="AMD Radeon RX 7900 XTX" total="24.0 GiB" available="23.8 GiB"
[GIN] 2024/11/30 - 20:08:03 | 200 | 591.8µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/30 - 20:08:04 | 200 | 803.8µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/30 - 20:08:04 | 200 | 513µs | 127.0.0.1 | GET "/api/tags"
time=2024-11-30T20:08:30.247+01:00 level=INFO source=sched.go:185 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"
time=2024-11-30T20:08:30.268+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 gpu=0 parallel=4 available=25439502336 required="19.2 GiB"
time=2024-11-30T20:08:30.554+01:00 level=INFO source=server.go:105 msg="system memory" total="31.0 GiB" free="23.6 GiB" free_swap="22.2 GiB"
time=2024-11-30T20:08:30.555+01:00 level=INFO source=memory.go:343 msg="offload to rocm" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[23.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.2 GiB" memory.required.partial="19.2 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[19.2 GiB]" memory.weights.total="10.7 GiB" memory.weights.repeating="10.2 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="7.3 GiB" memory.graph.partial="9.0 GiB"
time=2024-11-30T20:08:30.560+01:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\Max\AppData\Local\Programs\Ollama\lib\ollama\runners\rocm\ollama_llama_server.exe --model C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 --ctx-size 131072 --batch-size 512 --n-gpu-layers 29 --threads 8 --parallel 4 --port 61337"
time=2024-11-30T20:08:30.562+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-30T20:08:30.562+01:00 level=INFO source=server.go:559 msg="waiting for llama runner to start responding"
time=2024-11-30T20:08:30.563+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error"
time=2024-11-30T20:08:30.580+01:00 level=INFO source=runner.go:939 msg="starting go runner"
time=2024-11-30T20:08:30.590+01:00 level=INFO source=runner.go:940 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=8
time=2024-11-30T20:08:30.591+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:61337"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 Coder 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5-Coder
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 Coder 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv 12: general.tags arr[str,6] = ["code", "codeqwen", "chat", "qwen", ...
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
llm_load_print_meta: general.name = Qwen2.5 Coder 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
time=2024-11-30T20:08:30.820+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: ROCm0 buffer size = 4168.09 MiB
llm_load_tensors: CPU buffer size = 292.36 MiB
llama_new_context_with_model: n_ctx = 131072
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 7168.00 MiB
llama_new_context_with_model: KV self size = 7168.00 MiB, K (f16): 3584.00 MiB, V (f16): 3584.00 MiB
llama_new_context_with_model: ROCm_Host output buffer size = 2.38 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 7452.00 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate ROCm0 buffer of size 7813992448
llama_new_context_with_model: failed to allocate compute buffers
panic: unable to create llama context

goroutine 6 [running]:
main.(*Server).loadModel(0xc000130120, {0x1d, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc0000241c0, 0x0}, ...)
github.com/ollama/ollama/llama/runner/runner.go:868 +0x39c
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:973 +0xbf1
time=2024-11-30T20:08:50.148+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error"
time=2024-11-30T20:08:50.666+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory"
[GIN] 2024/11/30 - 20:08:50 | 500 | 20.725802s | 127.0.0.1 | POST "/api/chat"
time=2024-11-30T20:08:53.005+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 gpu=0 parallel=4 available=25125978112 required="19.2 GiB"
time=2024-11-30T20:08:53.288+01:00 level=INFO source=server.go:105 msg="system memory" total="31.0 GiB" free="23.9 GiB" free_swap="22.0 GiB"
time=2024-11-30T20:08:53.289+01:00 level=INFO source=memory.go:343 msg="offload to rocm" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[23.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.2 GiB" memory.required.partial="19.2 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[19.2 GiB]" memory.weights.total="10.7 GiB" memory.weights.repeating="10.2 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="7.3 GiB" memory.graph.partial="9.0 GiB"
time=2024-11-30T20:08:53.292+01:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\Max\AppData\Local\Programs\Ollama\lib\ollama\runners\rocm\ollama_llama_server.exe --model C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 --ctx-size 131072 --batch-size 512 --n-gpu-layers 29 --threads 8 --parallel 4 --port 61365"
time=2024-11-30T20:08:53.294+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-30T20:08:53.294+01:00 level=INFO source=server.go:559 msg="waiting for llama runner to start responding"
time=2024-11-30T20:08:53.294+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error"
time=2024-11-30T20:08:53.311+01:00 level=INFO source=runner.go:939 msg="starting go runner"
time=2024-11-30T20:08:53.322+01:00 level=INFO source=runner.go:940 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=8
time=2024-11-30T20:08:53.322+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:61365"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 Coder 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5-Coder
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 Coder 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv 12: general.tags arr[str,6] = ["code", "codeqwen", "chat", "qwen", ...
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
llm_load_print_meta: general.name = Qwen2.5 Coder 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
time=2024-11-30T20:08:53.555+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: ROCm0 buffer size = 4168.09 MiB
llm_load_tensors: CPU buffer size = 292.36 MiB
llama_new_context_with_model: n_ctx = 131072
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 7168.00 MiB
llama_new_context_with_model: KV self size = 7168.00 MiB, K (f16): 3584.00 MiB, V (f16): 3584.00 MiB
llama_new_context_with_model: ROCm_Host output buffer size = 2.38 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 7452.00 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate ROCm0 buffer of size 7813992448
llama_new_context_with_model: failed to allocate compute buffers
panic: unable to create llama context

goroutine 6 [running]:
main.(*Server).loadModel(0xc000130120, {0x1d, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc0000241c0, 0x0}, ...)
github.com/ollama/ollama/llama/runner/runner.go:868 +0x39c
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:973 +0xbf1
time=2024-11-30T20:09:12.333+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error"
time=2024-11-30T20:09:12.839+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory"
[GIN] 2024/11/30 - 20:09:12 | 500 | 20.1556848s | 127.0.0.1 | POST "/api/chat"
time=2024-11-30T20:09:17.166+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 gpu=0 parallel=4 available=24812453888 required="19.2 GiB"
time=2024-11-30T20:09:17.443+01:00 level=INFO source=server.go:105 msg="system memory" total="31.0 GiB" free="23.8 GiB" free_swap="21.7 GiB"
time=2024-11-30T20:09:17.443+01:00 level=INFO source=memory.go:343 msg="offload to rocm" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[23.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.2 GiB" memory.required.partial="19.2 GiB" memory.required.kv="7.0 GiB" memory.required.allocations="[19.2 GiB]" memory.weights.total="10.7 GiB" memory.weights.repeating="10.2 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="7.3 GiB" memory.graph.partial="9.0 GiB"
time=2024-11-30T20:09:17.446+01:00 level=INFO source=server.go:380 msg="starting llama server" cmd="C:\Users\Max\AppData\Local\Programs\Ollama\lib\ollama\runners\rocm\ollama_llama_server.exe --model C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 --ctx-size 131072 --batch-size 512 --n-gpu-layers 29 --threads 8 --parallel 4 --port 61394"
time=2024-11-30T20:09:17.448+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-30T20:09:17.448+01:00 level=INFO source=server.go:559 msg="waiting for llama runner to start responding"
time=2024-11-30T20:09:17.448+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error"
time=2024-11-30T20:09:17.465+01:00 level=INFO source=runner.go:939 msg="starting go runner"
time=2024-11-30T20:09:17.475+01:00 level=INFO source=runner.go:940 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=8
time=2024-11-30T20:09:17.475+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:61394"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from C:\Users\Max\.ollama\models\blobs\sha256-60e05f2100071479f596b964f89f510f057ce397ea22f2833a0cfe029bfc2463 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 Coder 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5-Coder
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 Coder 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv 12: general.tags arr[str,6] = ["code", "codeqwen", "chat", "qwen", ...
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
llm_load_print_meta: general.name = Qwen2.5 Coder 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
time=2024-11-30T20:09:17.711+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: ROCm0 buffer size = 4168.09 MiB
llm_load_tensors: CPU buffer size = 292.36 MiB
llama_new_context_with_model: n_ctx = 131072
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 7168.00 MiB
llama_new_context_with_model: KV self size = 7168.00 MiB, K (f16): 3584.00 MiB, V (f16): 3584.00 MiB
llama_new_context_with_model: ROCm_Host output buffer size = 2.38 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 7452.00 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate ROCm0 buffer of size 7813992448
llama_new_context_with_model: failed to allocate compute buffers
panic: unable to create llama context

goroutine 19 [running]:
main.(*Server).loadModel(0xc0000dc120, {0x1d, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc00008a180, 0x0}, ...)
github.com/ollama/ollama/llama/runner/runner.go:868 +0x39c
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:973 +0xbf1
time=2024-11-30T20:09:36.263+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error"
time=2024-11-30T20:09:36.791+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory"
[GIN] 2024/11/30 - 20:09:36 | 500 | 19.9398993s | 127.0.0.1 | POST "/api/chat"
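In all three attempts the allocation that fails is the 7452 MiB compute buffer for a 131072-token context (ctx-size 131072 = 4 parallel slots x 32768 tokens each), not the 4.36 GiB of weights. A hedged workaround sketch: request a smaller num_ctx through the standard REST API so the KV cache and compute buffers shrink accordingly (8192 below is an arbitrary example value, not a tuned setting):

    import json, urllib.request

    # Send one generation request with a reduced context window.
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=json.dumps({
            "model": "qwen2.5-coder:7b",
            "prompt": "Write hello world in Python.",
            "stream": False,
            "options": {"num_ctx": 8192},  # example value only
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    print(json.loads(urllib.request.urlopen(req).read())["response"])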

llama_new_context_with_model: ROCm_Host output buffer size = 2.38 MiB ggml_backend_cuda_buffer_type_alloc_buffer: allocating 7452.00 MiB on device 0: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate ROCm0 buffer of size 7813992448 llama_new_context_with_model: failed to allocate compute buffers panic: unable to create llama context goroutine 19 [running]: main.(*Server).loadModel(0xc0000dc120, {0x1d, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc00008a180, 0x0}, ...) github.com/ollama/ollama/llama/runner/runner.go:868 +0x39c created by main.main in goroutine 1 github.com/ollama/ollama/llama/runner/runner.go:973 +0xbf1 time=2024-11-30T20:09:36.263+01:00 level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error" time=2024-11-30T20:09:36.791+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory" [GIN] 2024/11/30 - 20:09:36 | 500 | 19.9398993s | 127.0.0.1 | POST "/api/chat"

@rick-github commented on GitHub (Nov 30, 2024):

`OLLAMA_NUM_PARALLEL` is unset, so ollama is using its default of 4. Your client is requesting a context size of 32768, so ollama allocates a total context buffer of (4 * 32768) or 131072 tokens, which takes 7 GiB. When llama.cpp tries to allocate this buffer, it finds there is not enough VRAM available: ollama made a calculated guess at the memory requirements and got it wrong. There are several mitigation techniques (the sketch after this list works through the arithmetic):

  1. Set `OLLAMA_NUM_PARALLEL=1` in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server).
  2. Configure bolt.new to request a smaller context window.
  3. Set [`OLLAMA_GPU_OVERHEAD`](https://github.com/ollama/ollama/blob/5f8051180e3b9aeafc153f6b5056e7358a939c88/envconfig/config.go#L237) to give llama.cpp a buffer to grow into (e.g., `OLLAMA_GPU_OVERHEAD=536870912` to reserve 512 MiB).
  4. Enable flash attention by setting [`OLLAMA_FLASH_ATTENTION=1`](https://github.com/ollama/ollama/blob/5f8051180e3b9aeafc153f6b5056e7358a939c88/envconfig/config.go#L236) in the server environment. Flash attention uses memory more efficiently and may reduce memory pressure.
  5. Reduce the number of layers that ollama thinks it can offload to the GPU, see [here](https://github.com/ollama/ollama/issues/6950#issuecomment-2373663650). Ollama is currently offloading 29 layers; try setting `num_gpu` to 25.
  6. Use a smaller model.
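To make the arithmetic above concrete, here is a back-of-the-envelope sketch (added for illustration, not from the original thread). It uses only values printed in the log (`n_layer`, `n_embd_k_gqa`/`n_embd_v_gqa`, the requested context) and assumes the default f16 KV cache that the log reports:

```python
# Back-of-the-envelope KV-cache sizing from the hyperparameters in the log.
n_layer      = 28       # llm_load_print_meta: n_layer
n_embd_kv    = 512      # llm_load_print_meta: n_embd_k_gqa / n_embd_v_gqa
bytes_f16    = 2        # K and V are stored as f16 (see "KV self size" line)
num_parallel = 4        # ollama default when OLLAMA_NUM_PARALLEL is unset
ctx_per_slot = 32768    # context size requested by the client

n_ctx = num_parallel * ctx_per_slot           # 131072, matches the log
kv_mib = n_layer * 2 * n_embd_kv * n_ctx * bytes_f16 / 2**20  # 2 = K plus V
print(f"n_ctx = {n_ctx}, KV cache = {kv_mib:.0f} MiB")
# -> n_ctx = 131072, KV cache = 7168 MiB, the "KV self size" in the log

# With OLLAMA_NUM_PARALLEL=1, the same request needs a quarter of that:
kv_single = n_layer * 2 * n_embd_kv * ctx_per_slot * bytes_f16 / 2**20
print(f"KV cache with 1 slot = {kv_single:.0f} MiB")          # -> 1792 MiB
```

Note that the allocation that actually fails in the log is the separate 7452 MiB compute-graph buffer (`memory.graph.full="7.3 GiB"`), which also scales with the effective context size, so shrinking the context (options 1 and 2) reduces both allocations at once.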

@LieLust commented on GitHub (Dec 11, 2024):

Hello,
After a thorough investigation, the issue wasn't where I was looking.
It turns out that my graphics card drivers were initially on version 24.8, and everything worked correctly at that point. An update to version 24.10 is what caused the problem; after going through GitHub issues and forums, I found that this driver release had broken everything.
I then updated again. I'm currently on version 24.12.1, and everything is working again (even the qwen2.5-coder:32b model).

However, it seems it isn't using the VRAM. Do you have a solution for this issue?

Here are two screenshots while qwen2.5-coder:32b is running.
![{0E760DD3-D0EF-43E0-B09E-042A34E5AE0A}](https://github.com/user-attachments/assets/41b12cc5-8388-411b-a62e-d7b6a763d488)
![{AE46F84D-413B-4AF4-8DA9-E7F90836455B}](https://github.com/user-attachments/assets/9b988c60-fb7e-4c02-91c9-e3f452c772f2)

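A note added for context: if the screenshots come from Windows Task Manager, keep in mind that its default GPU graphs track the 3D engine and can show near-zero activity for compute workloads even while VRAM is in use. Ollama itself reports where a loaded model lives: `ollama ps` on the command line, or its REST equivalent. A minimal sketch against the documented `/api/ps` endpoint, assuming the default server address:

```python
# Query Ollama's /api/ps endpoint (stdlib only) to see how much of each
# running model is resident in VRAM. Assumes the default bind address.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/ps") as resp:
    running = json.load(resp)

for m in running.get("models", []):
    size, vram = m["size"], m["size_vram"]
    pct = 100 * vram / size if size else 0
    print(f"{m['name']}: {size / 2**30:.1f} GiB total, "
          f"{vram / 2**30:.1f} GiB in VRAM ({pct:.0f}% on GPU)")
```

A model showing well under 100% on GPU here would confirm that ollama fell back to partial CPU offload rather than the GPU going unused.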

@rick-github commented on GitHub (Dec 23, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

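For anyone following along on Windows, a minimal way to grab the tail of that log, assuming the default location documented in the troubleshooting guide linked above (`%LOCALAPPDATA%\Ollama\server.log`):

```python
# Print the last 100 lines of the Ollama server log on Windows.
# Assumes the default log location from the troubleshooting guide.
import os
from pathlib import Path

log_path = Path(os.environ["LOCALAPPDATA"]) / "Ollama" / "server.log"
text = log_path.read_text(encoding="utf-8", errors="replace")
print("\n".join(text.splitlines()[-100:]))
```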
Reference: github-starred/ollama#67109