[GH-ISSUE #10814] Extremely slow running on CPU #69161

Closed
opened 2026-05-04 17:19:34 -05:00 by GiteaMirror · 5 comments

Originally created by @Timmmm on GitHub (May 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10814

What is the issue?

I can start the server fine via `ollama serve`, and then download and load a model fine using `ollama run qwen2.5-coder:3b`. I get to the `>>>` prompt; however, once I put in a message it just spins forever and never produces a response.

This is using CPU only. It's also running on a machine with 128 cores, but in a SLURM job which is restricted to 24 cores (I think using cgroups). I don't know if that would affect it.
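
For reference, one way to check what the job is actually allowed to use from inside the allocation (a sketch; the cgroup path assumes cgroup v2 and may differ on this cluster):

$ nproc                       # CPUs usable by this process (honors the affinity mask SLURM sets)
$ taskset -cp $$              # the explicit CPU affinity list for the current shell
$ cat /sys/fs/cgroup/cpu.max  # cgroup v2 CPU bandwidth quota, if the job sets one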

Relevant log output

When I connect:


[GIN] 2025/05/22 - 14:46:36 | 200 |      37.667µs |       127.0.0.1 | HEAD     "/"
time=2025-05-22T14:46:36.551+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T14:46:36.559+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/22 - 14:46:36 | 200 |   33.624598ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-22T14:46:36.584+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T14:46:36.584+02:00 level=DEBUG source=sched.go:604 msg="evaluating already loaded" model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba
[GIN] 2025/05/22 - 14:46:36 | 200 |   24.133032ms |       127.0.0.1 | POST     "/api/generate"
time=2025-05-22T14:46:36.584+02:00 level=DEBUG source=sched.go:423 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen2.5-coder:3b runner.inference=cpu runner.devices=1 runner.size="2.6 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=2575499 runner.model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba runner.num_ctx=8192
time=2025-05-22T14:46:36.584+02:00 level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen2.5-coder:3b runner.inference=cpu runner.devices=1 runner.size="2.6 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=2575499 runner.model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba runner.num_ctx=8192 duration=5m0s
time=2025-05-22T14:46:36.584+02:00 level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen2.5-coder:3b runner.inference=cpu runner.devices=1 runner.size="2.6 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=2575499 runner.model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba runner.num_ctx=8192 refCount=0


When I submit a query:


time=2025-05-22T14:46:40.948+02:00 level=DEBUG source=ggml.go:154 msg="key not found" key=general.alignment default=32
time=2025-05-22T14:46:40.948+02:00 level=DEBUG source=sched.go:604 msg="evaluating already loaded" model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba
time=2025-05-22T14:46:40.951+02:00 level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=153 format=""
time=2025-05-22T14:46:40.953+02:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=30 prompt=30 used=24 remaining=6


When I Ctrl-C the query:


time=2025-05-22T14:47:28.503+02:00 level=DEBUG source=sched.go:423 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen2.5-coder:3b runner.inference=cpu runner.devices=1 runner.size="2.6 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=2575499 runner.model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba runner.num_ctx=8192
time=2025-05-22T14:47:28.503+02:00 level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/qwen2.5-coder:3b runner.inference=cpu runner.devices=1 runner.size="2.6 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=2575499 runner.model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba runner.num_ctx=8192 duration=5m0s
time=2025-05-22T14:47:28.503+02:00 level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen2.5-coder:3b runner.inference=cpu runner.devices=1 runner.size="2.6 GiB" runner.vram="0 B" runner.parallel=2 runner.pid=2575499 runner.model=/home/me/.ollama/models/blobs/sha256-4a188102020e9c9530b687fd6400f775c45e90a0d7baafe65bd0a36963fbb7ba runner.num_ctx=8192 refCount=0
[GIN] 2025/05/22 - 14:47:28 | 200 |  47.57177757s |       127.0.0.1 | POST     "/api/chat"

OS

Linux

GPU

No response

CPU

AMD

Ollama version

0.7.0

GiteaMirror added the bug label 2026-05-04 17:19:34 -05:00

@Timmmm commented on GitHub (May 22, 2025):

Oh, actually the problem might just be that it is astonishingly slow. I tried again with `ollama run qwen2.5-coder:0.5b` and 32 cores, and it gets about 0.1 tokens per second. That seems... wrong. On my 16-core Ryzen desktop it's a couple of orders of magnitude faster.

Could it be because Ollama and the models are on very slow NFS? I would have thought everything would be loaded into RAM.
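
One quick way to check whether the weights really are resident in RAM rather than being paged back from NFS (a sketch, using the runner PID visible in the log above):

$ grep VmRSS /proc/2575499/status   # resident set size of the runner; should be at least the model size
$ ps -o rss=,vsz= -p 2575499        # same information via ps, in KiB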


@rick-github commented on GitHub (May 22, 2025):

The model should be fully loaded in RAM. If you can provide a complete log, it may be easier to diagnose.
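
For anyone following along, a fuller log can be captured by enabling debug output via the OLLAMA_DEBUG environment variable (a sketch):

$ OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama.log   # debug-level server log, also saved to a file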


@Timmmm commented on GitHub (May 22, 2025):

The first part of the log (might have been for a different run, but I didn't change anything):

time=2025-05-22T14:59:35.975+02:00 level=INFO source=routes.go:1205 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/me/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-05-22T14:59:35.988+02:00 level=INFO source=images.go:463 msg="total blobs: 8"
time=2025-05-22T14:59:35.989+02:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-22T14:59:35.991+02:00 level=INFO source=routes.go:1258 msg="Listening on 127.0.0.1:11434 (version 0.7.0)"
time=2025-05-22T14:59:35.991+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-05-22T14:59:36.011+02:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-05-22T14:59:36.011+02:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="755.1 GiB" available="739.2 GiB"
[GIN] 2025/05/22 - 14:59:56 | 200 |      60.021µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/05/22 - 14:59:56 | 404 |    2.222739ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-22T14:59:57.743+02:00 level=INFO source=download.go:177 msg="downloading 828125e28bf4 in 6 100 MB part(s)"
time=2025-05-22T15:00:04.664+02:00 level=INFO source=download.go:177 msg="downloading 30167f507fe3 in 1 488 B part(s)"
[GIN] 2025/05/22 - 15:00:06 | 200 | 10.031304053s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/05/22 - 15:00:06 | 200 |   24.105549ms |       127.0.0.1 | POST     "/api/show"
time=2025-05-22T15:00:06.997+02:00 level=INFO source=server.go:135 msg="system memory" total="755.1 GiB" free="738.6 GiB" free_swap="0 B"
time=2025-05-22T15:00:06.998+02:00 level=INFO source=server.go:168 msg=offload library=cpu layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[738.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.0 GiB" memory.required.partial="0 B" memory.required.kv="96.0 MiB" memory.required.allocations="[1.0 GiB]" memory.weights.total="500.8 MiB" memory.weights.repeating="362.8 MiB" memory.weights.nonrepeating="137.9 MiB" memory.graph.full="298.5 MiB" memory.graph.partial="405.0 MiB"
llama_model_loader: loaded meta data with 34 key-value pairs and 290 tensors from /home/me/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 Coder 0.5B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Coder
llama_model_loader: - kv   5:                         general.size_label str              = 0.5B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 Coder 0.5B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv  12:                               general.tags arr[str,6]       = ["code", "codeqwen", "chat", "qwen", ...
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 24
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 7
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q8_0:  169 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 500.79 MiB (8.50 BPW) 
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 494.03 M
print_info: general.name     = Qwen2.5 Coder 0.5B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151643 '<|endoftext|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-22T15:00:07.111+02:00 level=INFO source=server.go:431 msg="starting llama server" cmd="/home/me/local/ollama/bin/ollama runner --model /home/me/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef --ctx-size 8192 --batch-size 512 --threads 64 --no-mmap --parallel 2 --port 35741"
time=2025-05-22T15:00:07.113+02:00 level=INFO source=sched.go:472 msg="loaded runners" count=1
time=2025-05-22T15:00:07.113+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2025-05-22T15:00:07.114+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-22T15:00:07.120+02:00 level=INFO source=runner.go:815 msg="starting go runner"
load_backend: loaded CPU backend from /home/me/local/ollama/lib/ollama/libggml-cpu-icelake.so
time=2025-05-22T15:00:07.153+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-05-22T15:00:07.153+02:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:35741"
llama_model_loader: loaded meta data with 34 key-value pairs and 290 tensors from /home/me/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 Coder 0.5B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5-Coder
llama_model_loader: - kv   5:                         general.size_label str              = 0.5B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 Coder 0.5B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-C...
llama_model_loader: - kv  12:                               general.tags arr[str,6]       = ["code", "codeqwen", "chat", "qwen", ...
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 24
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 7
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q8_0:  169 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 500.79 MiB (8.50 BPW) 
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 32768
print_info: n_embd           = 896
print_info: n_layer          = 24
print_info: n_head           = 14
print_info: n_head_kv        = 2
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 7
print_info: n_embd_k_gqa     = 128
print_info: n_embd_v_gqa     = 128
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 4864
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 1B
print_info: model params     = 494.03 M
print_info: general.name     = Qwen2.5 Coder 0.5B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151643 '<|endoftext|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors:          CPU model buffer size =   500.79 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     1.17 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1, padding = 32
llama_kv_cache_unified:        CPU KV buffer size =    96.00 MiB
llama_kv_cache_unified: KV self size  =   96.00 MiB, K (f16):   48.00 MiB, V (f16):   48.00 MiB
llama_context:        CPU compute buffer size =   300.25 MiB
llama_context: graph nodes  = 894
llama_context: graph splits = 1
time=2025-05-22T15:00:07.365+02:00 level=INFO source=server.go:630 msg="llama runner started in 0.25 seconds"
[GIN] 2025/05/22 - 15:00:07 | 200 |  398.311495ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/05/22 - 15:02:31 | 200 |         2m22s |       127.0.0.1 | POST     "/api/chat"

@rick-github commented on GitHub (May 22, 2025):

time=2025-05-22T15:00:07.111+02:00 level=INFO source=server.go:431 msg="starting llama server"
 cmd="/home/me/local/ollama/bin/ollama runner --model /home/me/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef
 --ctx-size 8192 --batch-size 512 --threads 64 --no-mmap --parallel 2 --port 35741"

The runner is being started with 64 threads, but the SLURM job is limited to 24 cores. Try setting `num_thread` in the API call or Modelfile to a lower value; see #10022 for context. For example, in the REPL (an API sketch follows the output below):

$ ollama run qwen2.5-coder:3b --verbose
>>> /set parameter num_thread 8
Set parameter 'num_thread' to '8'
>>> hello
Hello! How can I assist you today? Is there something specific you'd like to talk about or ask me about?

total duration:       6.855628667s
load duration:        6.636584394s
prompt eval count:    30 token(s)
prompt eval duration: 36.059901ms
prompt eval rate:     831.95 tokens/s
eval count:           25 token(s)
eval duration:        174.475783ms
eval rate:            143.29 tokens/s
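
The same parameter can also be set per-request over the HTTP API, or baked into a Modelfile (a sketch; the derived model name passed to `ollama create` is just an example):

$ curl http://127.0.0.1:11434/api/generate -d '{
    "model": "qwen2.5-coder:3b",
    "prompt": "hello",
    "options": {"num_thread": 8}
  }'
$ # Or persist it in a derived model:
$ cat > Modelfile <<'EOF'
FROM qwen2.5-coder:3b
PARAMETER num_thread 8
EOF
$ ollama create qwen2.5-coder-8t -f Modelfile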

@Timmmm commented on GitHub (May 22, 2025):

Ah OK, I wondered if it was something like that. Ninja also had a similar bug where it tried to start 128 jobs (the number of physical cores) and then fell over because cgroups only allowed access to about 4. They fixed it at some point though, so it's definitely possible. Thanks for the very quick help anyway!
