[GH-ISSUE #4636] 0.1.39 pre-release - timed out waiting for llama runner to start #49424

Closed
opened 2026-04-28 11:45:52 -05:00 by GiteaMirror · 6 comments

Originally created by @skrew on GitHub (May 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4636

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Cannot launch models; every model tested times out.
0.1.39-rc2 worked.

```
ollama run <model>
Error: timed out waiting for llama runner to start - progress 0.00 -
```

I see nothing special in the server logs, but I will post them if you need them.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.39 Pre-release

GiteaMirror added the bug label 2026-04-28 11:45:52 -05:00

@jmorganca commented on GitHub (May 26, 2024):

Hi @skrew - do you know what version of the Nvidia driver you are running? I suppose it may be this: https://github.com/ollama/ollama/issues/4563
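A quick way to check, assuming the `nvidia-smi` utility that ships with the driver is available:

```
# The report header prints both versions, e.g.
#   Driver Version: 535.161.08   CUDA Version: 12.2
nvidia-smi

# Or query the driver version alone:
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```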


@skrew commented on GitHub (May 26, 2024):

Driver Version: 535.161.08
CUDA Version: 12.2


@jmorganca commented on GitHub (May 27, 2024):

Thanks! May I ask which model you're running and which GPU?


@jmorganca commented on GitHub (May 27, 2024):

In terms of the logs, you can view them with `journalctl -fu ollama`.
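A couple of invocations that may help, assuming the default systemd unit name `ollama`:

```
# Follow the service logs live:
journalctl -fu ollama

# Or review a recent window without following:
journalctl -u ollama --since "15 minutes ago" --no-pager
```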

Would love to know if you see the model loading in the logs – it should look something like:

```
May 27 06:31:25 tater16 ollama[3539]: llm_load_print_meta: general.name     = Meta-Llama-3-70B-Instruct
May 27 06:31:25 tater16 ollama[3539]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
May 27 06:31:25 tater16 ollama[3539]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
May 27 06:31:25 tater16 ollama[3539]: llm_load_print_meta: LF token         = 128 'Ä'
May 27 06:31:25 tater16 ollama[3539]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
May 27 06:31:25 tater16 ollama[3539]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
May 27 06:31:25 tater16 ollama[3539]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
May 27 06:31:25 tater16 ollama[3539]: ggml_cuda_init: found 1 CUDA devices:
May 27 06:31:25 tater16 ollama[3539]:   Device 0: NVIDIA RTX 6000 Ada Generation, compute capability 8.9, VMM: yes
May 27 06:31:25 tater16 ollama[3539]: llm_load_tensors: ggml ctx size =    0.74 MiB
May 27 06:31:25 tater16 ollama[3539]: llm_load_tensors: offloading 80 repeating layers to GPU
May 27 06:31:25 tater16 ollama[3539]: llm_load_tensors: offloading non-repeating layers to GPU
May 27 06:31:25 tater16 ollama[3539]: llm_load_tensors: offloaded 81/81 layers to GPU
May 27 06:31:25 tater16 ollama[3539]: llm_load_tensors:        CPU buffer size =   563.62 MiB
May 27 06:31:25 tater16 ollama[3539]: llm_load_tensors:      CUDA0 buffer size = 37546.98 MiB
May 27 06:31:25 tater16 ollama[3539]: time=2024-05-27T06:31:25.987Z level=DEBUG source=server.go:573 msg="model load progress 0.07"
May 27 06:31:26 tater16 ollama[3539]: time=2024-05-27T06:31:26.238Z level=DEBUG source=server.go:573 msg="model load progress 0.18"
May 27 06:31:26 tater16 ollama[3539]: time=2024-05-27T06:31:26.489Z level=DEBUG source=server.go:573 msg="model load progress 0.29"
May 27 06:31:26 tater16 ollama[3539]: time=2024-05-27T06:31:26.739Z level=DEBUG source=server.go:573 msg="model load progress 0.39"
May 27 06:31:26 tater16 ollama[3539]: time=2024-05-27T06:31:26.990Z level=DEBUG source=server.go:573 msg="model load progress 0.50"
May 27 06:31:27 tater16 ollama[3539]: time=2024-05-27T06:31:27.241Z level=DEBUG source=server.go:573 msg="model load progress 0.60"
May 27 06:31:27 tater16 ollama[3539]: time=2024-05-27T06:31:27.492Z level=DEBUG source=server.go:573 msg="model load progress 0.71"
May 27 06:31:27 tater16 ollama[3539]: time=2024-05-27T06:31:27.743Z level=DEBUG source=server.go:573 msg="model load progress 0.81"
May 27 06:31:27 tater16 ollama[3539]: time=2024-05-27T06:31:27.994Z level=DEBUG source=server.go:573 msg="model load progress 0.92"
May 27 06:31:28 tater16 ollama[3539]: time=2024-05-27T06:31:28.245Z level=DEBUG source=server.go:573 msg="model load progress 1.00"
```
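Note that the "model load progress" lines above are logged at DEBUG level, so they only appear when debug logging is turned on. A minimal sketch for a foreground run (a systemd install would set the variable in a drop-in instead):

```
# OLLAMA_DEBUG enables verbose logging, including load-progress messages:
OLLAMA_DEBUG=1 ollama serve
```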

@skrew commented on GitHub (May 27, 2024):

Hi, you can see the hardware in the logs. The model is `aya:35b-23-q8_0`, but I get this error with all models tested:

```
May 27 12:32:31 test-server ollama[233346]: 2024/05/27 12:32:31 routes.go:1028: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
May 27 12:32:31 test-server ollama[233346]: time=2024-05-27T12:32:31.486Z level=INFO source=images.go:729 msg="total blobs: 11"
May 27 12:32:31 test-server ollama[233346]: time=2024-05-27T12:32:31.493Z level=INFO source=images.go:736 msg="total unused blobs removed: 0"
May 27 12:32:31 test-server ollama[233346]: time=2024-05-27T12:32:31.493Z level=INFO source=routes.go:1074 msg="Listening on 127.0.0.1:11434 (version 0.1.39)"
May 27 12:32:31 test-server ollama[233346]: time=2024-05-27T12:32:31.493Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1428713392/runners
May 27 12:32:33 test-server ollama[233346]: time=2024-05-27T12:32:33.998Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-4effbf41-2d2b-123d-3e00-87b020bf6f2c library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-d340a9a6-dd55-c0cc-2a5c-6400aa90cbef library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-397461d5-3b02-d592-1bdf-c4dc32ba2725 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-087805de-e9c0-a3d4-995a-f201a059d00b library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-59ca4876-6e35-3a7d-215c-c1a340d19051 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-1911ca72-a8d3-b501-c8c9-2406c2745071 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-bf11f3e7-f8c5-efec-19c2-566babe5c95c library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-9fd416b3-99e6-028e-f64c-d33496518d72 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-6cfe6c07-edfe-b2d5-28cc-025aa26569c6 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:35 test-server ollama[233346]: time=2024-05-27T12:32:35.418Z level=INFO source=types.go:71 msg="inference compute" id=GPU-1f8c3d9c-27b8-f80e-1646-99ba58a5586c library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:32:56 test-server systemd[1]: /etc/systemd/system/ollama.service.d/environment.conf:1: Assignment outside of section. Ignoring.
May 27 12:32:56 test-server systemd[1]: ollama.service: Deactivated successfully.
May 27 12:32:56 test-server systemd[1]: ollama.service: Consumed 6.784s CPU time.
May 27 12:32:56 test-server ollama[233489]: 2024/05/27 12:32:56 routes.go:1028: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_HOST: OLLAMA_KEEP_ALIVE:-1 OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
May 27 12:32:56 test-server ollama[233489]: time=2024-05-27T12:32:56.957Z level=INFO source=images.go:729 msg="total blobs: 17"
May 27 12:32:56 test-server ollama[233489]: time=2024-05-27T12:32:56.957Z level=INFO source=images.go:736 msg="total unused blobs removed: 0"
May 27 12:32:56 test-server ollama[233489]: time=2024-05-27T12:32:56.957Z level=INFO source=routes.go:1074 msg="Listening on [::]:11434 (version 0.1.39)"
May 27 12:32:56 test-server ollama[233489]: time=2024-05-27T12:32:56.957Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1444333578/runners
May 27 12:32:59 test-server ollama[233489]: time=2024-05-27T12:32:59.476Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 rocm_v60002 cpu cpu_avx cpu_avx2]"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-4effbf41-2d2b-123d-3e00-87b020bf6f2c library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-d340a9a6-dd55-c0cc-2a5c-6400aa90cbef library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-397461d5-3b02-d592-1bdf-c4dc32ba2725 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-087805de-e9c0-a3d4-995a-f201a059d00b library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-59ca4876-6e35-3a7d-215c-c1a340d19051 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-1911ca72-a8d3-b501-c8c9-2406c2745071 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-bf11f3e7-f8c5-efec-19c2-566babe5c95c library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-9fd416b3-99e6-028e-f64c-d33496518d72 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-6cfe6c07-edfe-b2d5-28cc-025aa26569c6 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:01 test-server ollama[233489]: time=2024-05-27T12:33:01.069Z level=INFO source=types.go:71 msg="inference compute" id=GPU-1f8c3d9c-27b8-f80e-1646-99ba58a5586c library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="7.8 GiB"
May 27 12:33:09 test-server ollama[233489]: [GIN] 2024/05/27 - 12:33:09 | 200 |      28.061µs |       127.0.0.1 | HEAD     "/"
May 27 12:33:09 test-server ollama[233489]: [GIN] 2024/05/27 - 12:33:09 | 200 |     685.753µs |       127.0.0.1 | POST     "/api/show"
May 27 12:33:09 test-server ollama[233489]: [GIN] 2024/05/27 - 12:33:09 | 200 |     428.956µs |       127.0.0.1 | POST     "/api/show"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.564Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.565Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.566Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"
May 27 12:33:12 test-server ollama[233489]: message repeated 2 times: [ time=2024-05-27T12:33:12.566Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"]
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.567Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"
May 27 12:33:12 test-server ollama[233489]: message repeated 2 times: [ time=2024-05-27T12:33:12.567Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"]
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.568Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.568Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=5 memory.available="7.8 GiB" memory.required.full="38.9 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="516.0 MiB" memory.graph.partial="2.1 GiB"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.569Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=41 memory.available="78.4 GiB" memory.required.full="43.4 GiB" memory.required.partial="43.4 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="5.0 GiB" memory.graph.partial="21.1 GiB"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.569Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=41 memory.available="78.4 GiB" memory.required.full="43.4 GiB" memory.required.partial="43.4 GiB" memory.required.kv="2.5 GiB" memory.weights.total="34.6 GiB" memory.weights.repeating="32.5 GiB" memory.weights.nonrepeating="2.1 GiB" memory.graph.full="5.0 GiB" memory.graph.partial="21.1 GiB"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.569Z level=INFO source=server.go:338 msg="starting llama server" cmd="/tmp/ollama1444333578/runners/cuda_v11/ollama_llama_server --model /mnt/media/ollama/blobs/sha256-bfb1905e866d86bed09ac7687492bd9c7056bca5bc8558aade76e03c3172a581 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 41 --flash-attn --parallel 1 --port 40247"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.570Z level=INFO source=sched.go:338 msg="loaded runners" count=1
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.570Z level=INFO source=server.go:525 msg="waiting for llama runner to start responding"
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.570Z level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server error"
May 27 12:33:12 test-server ollama[233602]: INFO [main] build info | build=1 commit="74f33ad" tid="139960085577728" timestamp=1716813192
May 27 12:33:12 test-server ollama[233602]: INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139960085577728" timestamp=1716813192 total_threads=8
May 27 12:33:12 test-server ollama[233602]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="40247" tid="139960085577728" timestamp=1716813192
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: loaded meta data with 27 key-value pairs and 322 tensors from /mnt/media/ollama/blobs/sha256-bfb1905e866d86bed09ac7687492bd9c7056bca5bc8558aade76e03c3172a581 (version GGUF V3 (latest))
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   0:                       general.architecture str              = command-r
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   1:                               general.name str              = aya-23-35B
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   2:                      command-r.block_count u32              = 40
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   3:                   command-r.context_length u32              = 8192
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   4:                 command-r.embedding_length u32              = 8192
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   5:              command-r.feed_forward_length u32              = 22528
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   6:             command-r.attention.head_count u32              = 64
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   7:          command-r.attention.head_count_kv u32              = 64
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   8:                   command-r.rope.freq_base f32              = 8000000.000000
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv   9:     command-r.attention.layer_norm_epsilon f32              = 0.000010
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  10:                          general.file_type u32              = 7
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  11:                      command-r.logit_scale f32              = 0.062500
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  12:                command-r.rope.scaling.type str              = none
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,256000]  = ["<PAD>", "<UNK>", "<CLS>", "<SEP>", ...
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, ...
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,253333]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ a...
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 5
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 255001
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  22:           tokenizer.chat_template.tool_use str              = {{ bos_token }}{% if messages[0]['rol...
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  23:                tokenizer.chat_template.rag str              = {{ bos_token }}{% if messages[0]['rol...
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  24:                   tokenizer.chat_templates arr[str,2]       = ["rag", "tool_use"]
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - kv  26:               general.quantization_version u32              = 2
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - type  f32:   41 tensors
May 27 12:33:12 test-server ollama[233489]: llama_model_loader: - type q8_0:  281 tensors
May 27 12:33:12 test-server ollama[233489]: time=2024-05-27T12:33:12.821Z level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server loading model"
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab: missing pre-tokenizer type, using: 'default'
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab:
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab: ************************************
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab: CONSIDER REGENERATING THE MODEL
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab: ************************************
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab:
May 27 12:33:13 test-server ollama[233489]: llm_load_vocab: special tokens definition check successful ( 1008/256000 ).
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: format           = GGUF V3 (latest)
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: arch             = command-r
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: vocab type       = BPE
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_vocab          = 256000
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_merges         = 253333
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_ctx_train      = 8192
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_embd           = 8192
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_head           = 64
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_head_kv        = 64
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_layer          = 40
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_rot            = 128
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_embd_head_k    = 128
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_embd_head_v    = 128
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_gqa            = 1
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_embd_k_gqa     = 8192
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_embd_v_gqa     = 8192
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: f_norm_eps       = 1.0e-05
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: f_logit_scale    = 6.2e-02
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_ff             = 22528
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_expert         = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_expert_used    = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: causal attn      = 1
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: pooling type     = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: rope type        = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: rope scaling     = none
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: freq_base_train  = 8000000.0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: freq_scale_train = 1
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: n_yarn_orig_ctx  = 8192
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: rope_finetuned   = unknown
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: ssm_d_conv       = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: ssm_d_inner      = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: ssm_d_state      = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: ssm_dt_rank      = 0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: model type       = 35B
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: model ftype      = Q8_0
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: model params     = 34.98 B
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: model size       = 34.62 GiB (8.50 BPW)
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: general.name     = aya-23-35B
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: BOS token        = 5 '<BOS_TOKEN>'
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: EOS token        = 255001 '<|END_OF_TURN_TOKEN|>'
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: PAD token        = 0 '<PAD>'
May 27 12:33:13 test-server ollama[233489]: llm_load_print_meta: LF token         = 136 'Ä'
May 27 12:33:13 test-server ollama[233489]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
May 27 12:33:13 test-server ollama[233489]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
May 27 12:33:13 test-server ollama[233489]: ggml_cuda_init: found 10 CUDA devices:
May 27 12:33:13 test-server ollama[233489]:   Device 0: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 1: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 2: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 3: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 4: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 5: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 6: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 7: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 8: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:13 test-server ollama[233489]:   Device 9: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
May 27 12:33:14 test-server ollama[233489]: llm_load_tensors: ggml ctx size =    1.86 MiB
May 27 12:34:12 test-server ollama[233489]: time=2024-05-27T12:34:12.557Z level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server not responding"
May 27 12:34:12 test-server ollama[233489]: time=2024-05-27T12:34:12.862Z level=ERROR source=sched.go:344 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
```
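One incidental problem visible in this log: systemd warns `/etc/systemd/system/ollama.service.d/environment.conf:1: Assignment outside of section. Ignoring.`, so the first assignment in that drop-in is being silently discarded. Drop-in assignments must sit under a section header. A sketch of the expected shape (the variables are placeholders inferred from the config dump above, since the file's actual contents aren't shown):

```
# /etc/systemd/system/ollama.service.d/environment.conf
# Every Environment= line must appear under [Service], otherwise
# systemd ignores it with "Assignment outside of section".
[Service]
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_FLASH_ATTENTION=true"
```

After editing, apply it with `systemctl daemon-reload && systemctl restart ollama`.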

@skrew commented on GitHub (May 28, 2024):

Thanks, I tested the RC from ad89708 and it works with all models tested: `aya:35b-23-q8_0`, `mixtral:8x7b-instruct-v0.1-q8_0` and `command-r-plus:104b-q4_0` 👍🏻
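For anyone double-checking which build is installed:

```
# Prints the installed version, e.g. "ollama version is 0.1.39"
ollama -v
```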
