[GH-ISSUE #8258] Error: an error was encountered while running the model: unexpected EOF #51790

Closed
opened 2026-04-28 20:57:13 -05:00 by GiteaMirror · 19 comments
Owner

Originally created by @Yuchen-Labnote on GitHub (Dec 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8258

What is the issue?

I used Ollama to pull llama3.1:8b. When I run the model, the following error occurs:

ollama run llama3.1
>>> hello
Hello! HowError: an error was encountered while running the model: unexpected EOF
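
For reference, the same failure should be reproducible directly against the HTTP API (an illustrative request, assuming the default host and port shown in the log below; the debug log confirms the CLI goes through the same /api/chat route):

curl http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{ "role": "user", "content": "hello" }]
}'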

(I am using an Ubuntu 20.04 image, and since I don't have the necessary permissions, systemctl is unavailable in my environment. I'm not sure whether this is related.)
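
Since systemctl is unavailable, I start the server manually in the foreground with debug logging enabled, roughly like this (the exact shell session is illustrative):

export OLLAMA_DEBUG=1
ollama serve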

With OLLAMA_DEBUG=1 set, the server log shows the following:

2024/12/28 00:12:37 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-28T00:12:37.377+08:00 level=INFO source=images.go:757 msg="total blobs: 5"
time=2024-12-28T00:12:37.377+08:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-12-28T00:12:37.378+08:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2024-12-28T00:12:37.378+08:00 level=DEBUG source=common.go:80 msg="runners located" dir=/usr/local/lib/ollama/runners
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu]"
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=routes.go:1340 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-12-28T00:12:37.379+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2024-12-28T00:12:37.386+08:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-12-28T00:12:37.386+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so
time=2024-12-28T00:12:37.386+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/local/lib/ollama/libcuda.so* /opt/orion/orion_runtime/gpu/cuda/libcuda.so* /opt/orion/orion_runtime/gpu/cuda/libcuda.so* /opt/orion/orion_runtime/lib/libcuda.so* /usr/lib64/libcuda.so* /usr/lib/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-12-28T00:12:37.398+08:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[/opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so]
initializing /opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
dlsym: cuInit - 0x7efd7438fccb
dlsym: cuDriverGetVersion - 0x7efd7438fd6e
dlsym: cuDeviceGetCount - 0x7efd7438fec2
dlsym: cuDeviceGet - 0x7efd7438fe14
dlsym: cuDeviceGetAttribute - 0x7efd74390230
dlsym: cuDeviceGetUuid - 0x7efd74390020
dlsym: cuDeviceGetName - 0x7efd7438ff6e
dlsym: cuCtxCreate_v3 - 0x7efd743a8d85
dlsym: cuMemGetInfo_v2 - 0x7efd743a1670
dlsym: cuCtxDestroy - 0x7efd743908fa
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-28T00:12:37.555+08:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=1 library=/opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
[GPU-00000000-0000-000a-02aa-6b26e8000000] CUDA totalMem 22889 mb
[GPU-00000000-0000-000a-02aa-6b26e8000000] CUDA freeMem 22889 mb
[GPU-00000000-0000-000a-02aa-6b26e8000000] Compute Capability 8.6
time=2024-12-28T00:12:38.012+08:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-12-28T00:12:38.013+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-00000000-0000-000a-02aa-6b26e8000000 library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA A40" total="22.4 GiB" available="22.4 GiB"
[GIN] 2024/12/28 - 00:12:41 | 200 | 157.861µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/28 - 00:12:41 | 200 | 28.317212ms | 127.0.0.1 | POST "/api/show"
time=2024-12-28T00:12:41.916+08:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="881.5 GiB" before.free="832.5 GiB" before.free_swap="0 B" now.total="881.5 GiB" now.free="832.5 GiB" now.free_swap="0 B"
initializing /opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
dlsym: cuInit - 0x7efd6c3e8ccb
dlsym: cuDriverGetVersion - 0x7efd6c3e8d6e
dlsym: cuDeviceGetCount - 0x7efd6c3e8ec2
dlsym: cuDeviceGet - 0x7efd6c3e8e14
dlsym: cuDeviceGetAttribute - 0x7efd6c3e9230
dlsym: cuDeviceGetUuid - 0x7efd6c3e9020
dlsym: cuDeviceGetName - 0x7efd6c3e8f6e
dlsym: cuCtxCreate_v3 - 0x7efd6c401d85
dlsym: cuMemGetInfo_v2 - 0x7efd6c3fa670
dlsym: cuCtxDestroy - 0x7efd6c3e98fa
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-28T00:12:41.941+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-00000000-0000-000a-02aa-6b26e8000000 name="NVIDIA A40" overhead="0 B" before.total="22.4 GiB" before.free="22.4 GiB" now.total="22.4 GiB" now.free="22.4 GiB" now.used="0 B"
releasing cuda driver library
time=2024-12-28T00:12:41.942+08:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x556684a41780 gpu_count=1
time=2024-12-28T00:12:41.986+08:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2024-12-28T00:12:41.986+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2024-12-28T00:12:41.987+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 gpu=GPU-00000000-0000-000a-02aa-6b26e8000000 parallel=4 available=24000856064 required="6.5 GiB"
time=2024-12-28T00:12:41.987+08:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="881.5 GiB" before.free="832.5 GiB" before.free_swap="0 B" now.total="881.5 GiB" now.free="832.4 GiB" now.free_swap="0 B"
initializing /opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
dlsym: cuInit - 0x7efd6c3e8ccb
dlsym: cuDriverGetVersion - 0x7efd6c3e8d6e
dlsym: cuDeviceGetCount - 0x7efd6c3e8ec2
dlsym: cuDeviceGet - 0x7efd6c3e8e14
dlsym: cuDeviceGetAttribute - 0x7efd6c3e9230
dlsym: cuDeviceGetUuid - 0x7efd6c3e9020
dlsym: cuDeviceGetName - 0x7efd6c3e8f6e
dlsym: cuCtxCreate_v3 - 0x7efd6c401d85
dlsym: cuMemGetInfo_v2 - 0x7efd6c3fa670
dlsym: cuCtxDestroy - 0x7efd6c3e98fa
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-28T00:12:41.999+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-00000000-0000-000a-02aa-6b26e8000000 name="NVIDIA A40" overhead="0 B" before.total="22.4 GiB" before.free="22.4 GiB" now.total="22.4 GiB" now.free="22.4 GiB" now.used="0 B"
releasing cuda driver library
time=2024-12-28T00:12:41.999+08:00 level=INFO source=server.go:104 msg="system memory" total="881.5 GiB" free="832.4 GiB" free_swap="0 B"
time=2024-12-28T00:12:41.999+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2024-12-28T00:12:42.000+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="6.5 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.5 GiB]" memory.weights.total="4.9 GiB" memory.weights.repeating="4.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2024-12-28T00:12:42.001+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --verbose --threads 48 --parallel 4 --port 34605"
time=2024-12-28T00:12:42.001+08:00 level=DEBUG source=server.go:393 msg=subprocess environment="[CUDA_VERSION=12.1.0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama:/usr/local/lib/ollama/runners/cuda_v12_avx:/opt/orion/orion_runtime/gpu/cuda:/opt/orion/orion_runtime/gpu/cuda:/opt/orion/orion_runtime/lib:/usr/lib64:/usr/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 PATH=/root/miniconda3/bin:/root/miniconda3/condabin:/root/miniconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin CUDA_VISIBLE_DEVICES=GPU-00000000-0000-000a-02aa-6b26e8000000]"
time=2024-12-28T00:12:42.003+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-28T00:12:42.005+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2024-12-28T00:12:42.006+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-28T00:12:45.320+08:00 level=INFO source=runner.go:945 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A40, compute capability 8.6, VMM: yes
time=2024-12-28T00:12:45.764+08:00 level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=48
time=2024-12-28T00:12:45.764+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:34605"
llama_load_model_from_file: using device CUDA0 (NVIDIA A40) - 22889 MiB free
time=2024-12-28T00:12:45.780+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 8B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 32
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 4096
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 15
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 66 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: control token: 128254 '<|reserved_special_token_246|>' is not marked as EOG
llm_load_vocab: control token: 128249 '<|reserved_special_token_241|>' is not marked as EOG
llm_load_vocab: control token: 128246 '<|reserved_special_token_238|>' is not marked as EOG
llm_load_vocab: control token: 128243 '<|reserved_special_token_235|>' is not marked as EOG
llm_load_vocab: control token: 128242 '<|reserved_special_token_234|>' is not marked as EOG
llm_load_vocab: control token: 128241 '<|reserved_special_token_233|>' is not marked as EOG
llm_load_vocab: control token: 128240 '<|reserved_special_token_232|>' is not marked as EOG
llm_load_vocab: control token: 128235 '<|reserved_special_token_227|>' is not marked as EOG
llm_load_vocab: control token: 128231 '<|reserved_special_token_223|>' is not marked as EOG
llm_load_vocab: control token: 128230 '<|reserved_special_token_222|>' is not marked as EOG
llm_load_vocab: control token: 128228 '<|reserved_special_token_220|>' is not marked as EOG
llm_load_vocab: control token: 128225 '<|reserved_special_token_217|>' is not marked as EOG
llm_load_vocab: control token: 128218 '<|reserved_special_token_210|>' is not marked as EOG
llm_load_vocab: control token: 128214 '<|reserved_special_token_206|>' is not marked as EOG
llm_load_vocab: control token: 128213 '<|reserved_special_token_205|>' is not marked as EOG
llm_load_vocab: control token: 128207 '<|reserved_special_token_199|>' is not marked as EOG
llm_load_vocab: control token: 128206 '<|reserved_special_token_198|>' is not marked as EOG
llm_load_vocab: control token: 128204 '<|reserved_special_token_196|>' is not marked as EOG
llm_load_vocab: control token: 128200 '<|reserved_special_token_192|>' is not marked as EOG
llm_load_vocab: control token: 128199 '<|reserved_special_token_191|>' is not marked as EOG
llm_load_vocab: control token: 128198 '<|reserved_special_token_190|>' is not marked as EOG
llm_load_vocab: control token: 128196 '<|reserved_special_token_188|>' is not marked as EOG
llm_load_vocab: control token: 128194 '<|reserved_special_token_186|>' is not marked as EOG
llm_load_vocab: control token: 128193 '<|reserved_special_token_185|>' is not marked as EOG
llm_load_vocab: control token: 128188 '<|reserved_special_token_180|>' is not marked as EOG
llm_load_vocab: control token: 128187 '<|reserved_special_token_179|>' is not marked as EOG
llm_load_vocab: control token: 128185 '<|reserved_special_token_177|>' is not marked as EOG
llm_load_vocab: control token: 128184 '<|reserved_special_token_176|>' is not marked as EOG
llm_load_vocab: control token: 128180 '<|reserved_special_token_172|>' is not marked as EOG
llm_load_vocab: control token: 128179 '<|reserved_special_token_171|>' is not marked as EOG
llm_load_vocab: control token: 128178 '<|reserved_special_token_170|>' is not marked as EOG
llm_load_vocab: control token: 128177 '<|reserved_special_token_169|>' is not marked as EOG
llm_load_vocab: control token: 128176 '<|reserved_special_token_168|>' is not marked as EOG
llm_load_vocab: control token: 128175 '<|reserved_special_token_167|>' is not marked as EOG
llm_load_vocab: control token: 128171 '<|reserved_special_token_163|>' is not marked as EOG
llm_load_vocab: control token: 128170 '<|reserved_special_token_162|>' is not marked as EOG
llm_load_vocab: control token: 128169 '<|reserved_special_token_161|>' is not marked as EOG
llm_load_vocab: control token: 128168 '<|reserved_special_token_160|>' is not marked as EOG
llm_load_vocab: control token: 128165 '<|reserved_special_token_157|>' is not marked as EOG
llm_load_vocab: control token: 128162 '<|reserved_special_token_154|>' is not marked as EOG
llm_load_vocab: control token: 128158 '<|reserved_special_token_150|>' is not marked as EOG
llm_load_vocab: control token: 128156 '<|reserved_special_token_148|>' is not marked as EOG
llm_load_vocab: control token: 128155 '<|reserved_special_token_147|>' is not marked as EOG
llm_load_vocab: control token: 128154 '<|reserved_special_token_146|>' is not marked as EOG
llm_load_vocab: control token: 128151 '<|reserved_special_token_143|>' is not marked as EOG
llm_load_vocab: control token: 128149 '<|reserved_special_token_141|>' is not marked as EOG
llm_load_vocab: control token: 128147 '<|reserved_special_token_139|>' is not marked as EOG
llm_load_vocab: control token: 128146 '<|reserved_special_token_138|>' is not marked as EOG
llm_load_vocab: control token: 128144 '<|reserved_special_token_136|>' is not marked as EOG
llm_load_vocab: control token: 128142 '<|reserved_special_token_134|>' is not marked as EOG
llm_load_vocab: control token: 128141 '<|reserved_special_token_133|>' is not marked as EOG
llm_load_vocab: control token: 128138 '<|reserved_special_token_130|>' is not marked as EOG
llm_load_vocab: control token: 128136 '<|reserved_special_token_128|>' is not marked as EOG
llm_load_vocab: control token: 128135 '<|reserved_special_token_127|>' is not marked as EOG
llm_load_vocab: control token: 128134 '<|reserved_special_token_126|>' is not marked as EOG
llm_load_vocab: control token: 128133 '<|reserved_special_token_125|>' is not marked as EOG
llm_load_vocab: control token: 128131 '<|reserved_special_token_123|>' is not marked as EOG
llm_load_vocab: control token: 128128 '<|reserved_special_token_120|>' is not marked as EOG
llm_load_vocab: control token: 128124 '<|reserved_special_token_116|>' is not marked as EOG
llm_load_vocab: control token: 128123 '<|reserved_special_token_115|>' is not marked as EOG
llm_load_vocab: control token: 128122 '<|reserved_special_token_114|>' is not marked as EOG
llm_load_vocab: control token: 128119 '<|reserved_special_token_111|>' is not marked as EOG
llm_load_vocab: control token: 128115 '<|reserved_special_token_107|>' is not marked as EOG
llm_load_vocab: control token: 128112 '<|reserved_special_token_104|>' is not marked as EOG
llm_load_vocab: control token: 128110 '<|reserved_special_token_102|>' is not marked as EOG
llm_load_vocab: control token: 128109 '<|reserved_special_token_101|>' is not marked as EOG
llm_load_vocab: control token: 128108 '<|reserved_special_token_100|>' is not marked as EOG
llm_load_vocab: control token: 128106 '<|reserved_special_token_98|>' is not marked as EOG
llm_load_vocab: control token: 128103 '<|reserved_special_token_95|>' is not marked as EOG
llm_load_vocab: control token: 128102 '<|reserved_special_token_94|>' is not marked as EOG
llm_load_vocab: control token: 128101 '<|reserved_special_token_93|>' is not marked as EOG
llm_load_vocab: control token: 128097 '<|reserved_special_token_89|>' is not marked as EOG
llm_load_vocab: control token: 128091 '<|reserved_special_token_83|>' is not marked as EOG
llm_load_vocab: control token: 128090 '<|reserved_special_token_82|>' is not marked as EOG
llm_load_vocab: control token: 128089 '<|reserved_special_token_81|>' is not marked as EOG
llm_load_vocab: control token: 128087 '<|reserved_special_token_79|>' is not marked as EOG
llm_load_vocab: control token: 128085 '<|reserved_special_token_77|>' is not marked as EOG
llm_load_vocab: control token: 128081 '<|reserved_special_token_73|>' is not marked as EOG
llm_load_vocab: control token: 128078 '<|reserved_special_token_70|>' is not marked as EOG
llm_load_vocab: control token: 128076 '<|reserved_special_token_68|>' is not marked as EOG
llm_load_vocab: control token: 128075 '<|reserved_special_token_67|>' is not marked as EOG
llm_load_vocab: control token: 128073 '<|reserved_special_token_65|>' is not marked as EOG
llm_load_vocab: control token: 128068 '<|reserved_special_token_60|>' is not marked as EOG
llm_load_vocab: control token: 128067 '<|reserved_special_token_59|>' is not marked as EOG
llm_load_vocab: control token: 128065 '<|reserved_special_token_57|>' is not marked as EOG
llm_load_vocab: control token: 128063 '<|reserved_special_token_55|>' is not marked as EOG
llm_load_vocab: control token: 128062 '<|reserved_special_token_54|>' is not marked as EOG
llm_load_vocab: control token: 128060 '<|reserved_special_token_52|>' is not marked as EOG
llm_load_vocab: control token: 128059 '<|reserved_special_token_51|>' is not marked as EOG
llm_load_vocab: control token: 128057 '<|reserved_special_token_49|>' is not marked as EOG
llm_load_vocab: control token: 128054 '<|reserved_special_token_46|>' is not marked as EOG
llm_load_vocab: control token: 128046 '<|reserved_special_token_38|>' is not marked as EOG
llm_load_vocab: control token: 128045 '<|reserved_special_token_37|>' is not marked as EOG
llm_load_vocab: control token: 128044 '<|reserved_special_token_36|>' is not marked as EOG
llm_load_vocab: control token: 128043 '<|reserved_special_token_35|>' is not marked as EOG
llm_load_vocab: control token: 128038 '<|reserved_special_token_30|>' is not marked as EOG
llm_load_vocab: control token: 128036 '<|reserved_special_token_28|>' is not marked as EOG
llm_load_vocab: control token: 128035 '<|reserved_special_token_27|>' is not marked as EOG
llm_load_vocab: control token: 128032 '<|reserved_special_token_24|>' is not marked as EOG
llm_load_vocab: control token: 128028 '<|reserved_special_token_20|>' is not marked as EOG
llm_load_vocab: control token: 128027 '<|reserved_special_token_19|>' is not marked as EOG
llm_load_vocab: control token: 128024 '<|reserved_special_token_16|>' is not marked as EOG
llm_load_vocab: control token: 128023 '<|reserved_special_token_15|>' is not marked as EOG
llm_load_vocab: control token: 128022 '<|reserved_special_token_14|>' is not marked as EOG
llm_load_vocab: control token: 128021 '<|reserved_special_token_13|>' is not marked as EOG
llm_load_vocab: control token: 128018 '<|reserved_special_token_10|>' is not marked as EOG
llm_load_vocab: control token: 128016 '<|reserved_special_token_8|>' is not marked as EOG
llm_load_vocab: control token: 128015 '<|reserved_special_token_7|>' is not marked as EOG
llm_load_vocab: control token: 128013 '<|reserved_special_token_5|>' is not marked as EOG
llm_load_vocab: control token: 128011 '<|reserved_special_token_3|>' is not marked as EOG
llm_load_vocab: control token: 128005 '<|reserved_special_token_2|>' is not marked as EOG
llm_load_vocab: control token: 128004 '<|finetune_right_pad_id|>' is not marked as EOG
llm_load_vocab: control token: 128002 '<|reserved_special_token_0|>' is not marked as EOG
llm_load_vocab: control token: 128252 '<|reserved_special_token_244|>' is not marked as EOG
llm_load_vocab: control token: 128190 '<|reserved_special_token_182|>' is not marked as EOG
llm_load_vocab: control token: 128183 '<|reserved_special_token_175|>' is not marked as EOG
llm_load_vocab: control token: 128137 '<|reserved_special_token_129|>' is not marked as EOG
llm_load_vocab: control token: 128182 '<|reserved_special_token_174|>' is not marked as EOG
llm_load_vocab: control token: 128040 '<|reserved_special_token_32|>' is not marked as EOG
llm_load_vocab: control token: 128048 '<|reserved_special_token_40|>' is not marked as EOG
llm_load_vocab: control token: 128092 '<|reserved_special_token_84|>' is not marked as EOG
llm_load_vocab: control token: 128215 '<|reserved_special_token_207|>' is not marked as EOG
llm_load_vocab: control token: 128107 '<|reserved_special_token_99|>' is not marked as EOG
llm_load_vocab: control token: 128208 '<|reserved_special_token_200|>' is not marked as EOG
llm_load_vocab: control token: 128145 '<|reserved_special_token_137|>' is not marked as EOG
llm_load_vocab: control token: 128031 '<|reserved_special_token_23|>' is not marked as EOG
llm_load_vocab: control token: 128129 '<|reserved_special_token_121|>' is not marked as EOG
llm_load_vocab: control token: 128201 '<|reserved_special_token_193|>' is not marked as EOG
llm_load_vocab: control token: 128074 '<|reserved_special_token_66|>' is not marked as EOG
llm_load_vocab: control token: 128095 '<|reserved_special_token_87|>' is not marked as EOG
llm_load_vocab: control token: 128186 '<|reserved_special_token_178|>' is not marked as EOG
llm_load_vocab: control token: 128143 '<|reserved_special_token_135|>' is not marked as EOG
llm_load_vocab: control token: 128229 '<|reserved_special_token_221|>' is not marked as EOG
llm_load_vocab: control token: 128007 '<|end_header_id|>' is not marked as EOG
llm_load_vocab: control token: 128055 '<|reserved_special_token_47|>' is not marked as EOG
llm_load_vocab: control token: 128056 '<|reserved_special_token_48|>' is not marked as EOG
llm_load_vocab: control token: 128061 '<|reserved_special_token_53|>' is not marked as EOG
llm_load_vocab: control token: 128153 '<|reserved_special_token_145|>' is not marked as EOG
llm_load_vocab: control token: 128152 '<|reserved_special_token_144|>' is not marked as EOG
llm_load_vocab: control token: 128212 '<|reserved_special_token_204|>' is not marked as EOG
llm_load_vocab: control token: 128172 '<|reserved_special_token_164|>' is not marked as EOG
llm_load_vocab: control token: 128160 '<|reserved_special_token_152|>' is not marked as EOG
llm_load_vocab: control token: 128041 '<|reserved_special_token_33|>' is not marked as EOG
llm_load_vocab: control token: 128181 '<|reserved_special_token_173|>' is not marked as EOG
llm_load_vocab: control token: 128094 '<|reserved_special_token_86|>' is not marked as EOG
llm_load_vocab: control token: 128118 '<|reserved_special_token_110|>' is not marked as EOG
llm_load_vocab: control token: 128236 '<|reserved_special_token_228|>' is not marked as EOG
llm_load_vocab: control token: 128148 '<|reserved_special_token_140|>' is not marked as EOG
llm_load_vocab: control token: 128042 '<|reserved_special_token_34|>' is not marked as EOG
llm_load_vocab: control token: 128139 '<|reserved_special_token_131|>' is not marked as EOG
llm_load_vocab: control token: 128173 '<|reserved_special_token_165|>' is not marked as EOG
llm_load_vocab: control token: 128239 '<|reserved_special_token_231|>' is not marked as EOG
llm_load_vocab: control token: 128157 '<|reserved_special_token_149|>' is not marked as EOG
llm_load_vocab: control token: 128052 '<|reserved_special_token_44|>' is not marked as EOG
llm_load_vocab: control token: 128026 '<|reserved_special_token_18|>' is not marked as EOG
llm_load_vocab: control token: 128003 '<|reserved_special_token_1|>' is not marked as EOG
llm_load_vocab: control token: 128019 '<|reserved_special_token_11|>' is not marked as EOG
llm_load_vocab: control token: 128116 '<|reserved_special_token_108|>' is not marked as EOG
llm_load_vocab: control token: 128161 '<|reserved_special_token_153|>' is not marked as EOG
llm_load_vocab: control token: 128226 '<|reserved_special_token_218|>' is not marked as EOG
llm_load_vocab: control token: 128159 '<|reserved_special_token_151|>' is not marked as EOG
llm_load_vocab: control token: 128012 '<|reserved_special_token_4|>' is not marked as EOG
llm_load_vocab: control token: 128088 '<|reserved_special_token_80|>' is not marked as EOG
llm_load_vocab: control token: 128163 '<|reserved_special_token_155|>' is not marked as EOG
llm_load_vocab: control token: 128001 '<|end_of_text|>' is not marked as EOG
llm_load_vocab: control token: 128113 '<|reserved_special_token_105|>' is not marked as EOG
llm_load_vocab: control token: 128250 '<|reserved_special_token_242|>' is not marked as EOG
llm_load_vocab: control token: 128125 '<|reserved_special_token_117|>' is not marked as EOG
llm_load_vocab: control token: 128053 '<|reserved_special_token_45|>' is not marked as EOG
llm_load_vocab: control token: 128224 '<|reserved_special_token_216|>' is not marked as EOG
llm_load_vocab: control token: 128247 '<|reserved_special_token_239|>' is not marked as EOG
llm_load_vocab: control token: 128251 '<|reserved_special_token_243|>' is not marked as EOG
llm_load_vocab: control token: 128216 '<|reserved_special_token_208|>' is not marked as EOG
llm_load_vocab: control token: 128006 '<|start_header_id|>' is not marked as EOG
llm_load_vocab: control token: 128211 '<|reserved_special_token_203|>' is not marked as EOG
llm_load_vocab: control token: 128077 '<|reserved_special_token_69|>' is not marked as EOG
llm_load_vocab: control token: 128237 '<|reserved_special_token_229|>' is not marked as EOG
llm_load_vocab: control token: 128086 '<|reserved_special_token_78|>' is not marked as EOG
llm_load_vocab: control token: 128227 '<|reserved_special_token_219|>' is not marked as EOG
llm_load_vocab: control token: 128058 '<|reserved_special_token_50|>' is not marked as EOG
llm_load_vocab: control token: 128100 '<|reserved_special_token_92|>' is not marked as EOG
llm_load_vocab: control token: 128209 '<|reserved_special_token_201|>' is not marked as EOG
llm_load_vocab: control token: 128084 '<|reserved_special_token_76|>' is not marked as EOG
llm_load_vocab: control token: 128071 '<|reserved_special_token_63|>' is not marked as EOG
llm_load_vocab: control token: 128070 '<|reserved_special_token_62|>' is not marked as EOG
llm_load_vocab: control token: 128049 '<|reserved_special_token_41|>' is not marked as EOG
llm_load_vocab: control token: 128197 '<|reserved_special_token_189|>' is not marked as EOG
llm_load_vocab: control token: 128072 '<|reserved_special_token_64|>' is not marked as EOG
llm_load_vocab: control token: 128000 '<|begin_of_text|>' is not marked as EOG
llm_load_vocab: control token: 128223 '<|reserved_special_token_215|>' is not marked as EOG
llm_load_vocab: control token: 128217 '<|reserved_special_token_209|>' is not marked as EOG
llm_load_vocab: control token: 128111 '<|reserved_special_token_103|>' is not marked as EOG
llm_load_vocab: control token: 128203 '<|reserved_special_token_195|>' is not marked as EOG
llm_load_vocab: control token: 128051 '<|reserved_special_token_43|>' is not marked as EOG
llm_load_vocab: control token: 128030 '<|reserved_special_token_22|>' is not marked as EOG
llm_load_vocab: control token: 128117 '<|reserved_special_token_109|>' is not marked as EOG
llm_load_vocab: control token: 128010 '<|python_tag|>' is not marked as EOG
llm_load_vocab: control token: 128238 '<|reserved_special_token_230|>' is not marked as EOG
llm_load_vocab: control token: 128255 '<|reserved_special_token_247|>' is not marked as EOG
llm_load_vocab: control token: 128202 '<|reserved_special_token_194|>' is not marked as EOG
llm_load_vocab: control token: 128132 '<|reserved_special_token_124|>' is not marked as EOG
llm_load_vocab: control token: 128248 '<|reserved_special_token_240|>' is not marked as EOG
llm_load_vocab: control token: 128167 '<|reserved_special_token_159|>' is not marked as EOG
llm_load_vocab: control token: 128127 '<|reserved_special_token_119|>' is not marked as EOG
llm_load_vocab: control token: 128105 '<|reserved_special_token_97|>' is not marked as EOG
llm_load_vocab: control token: 128039 '<|reserved_special_token_31|>' is not marked as EOG
llm_load_vocab: control token: 128232 '<|reserved_special_token_224|>' is not marked as EOG
llm_load_vocab: control token: 128166 '<|reserved_special_token_158|>' is not marked as EOG
llm_load_vocab: control token: 128130 '<|reserved_special_token_122|>' is not marked as EOG
llm_load_vocab: control token: 128114 '<|reserved_special_token_106|>' is not marked as EOG
llm_load_vocab: control token: 128234 '<|reserved_special_token_226|>' is not marked as EOG
llm_load_vocab: control token: 128191 '<|reserved_special_token_183|>' is not marked as EOG
llm_load_vocab: control token: 128064 '<|reserved_special_token_56|>' is not marked as EOG
llm_load_vocab: control token: 128140 '<|reserved_special_token_132|>' is not marked as EOG
llm_load_vocab: control token: 128096 '<|reserved_special_token_88|>' is not marked as EOG
llm_load_vocab: control token: 128098 '<|reserved_special_token_90|>' is not marked as EOG
llm_load_vocab: control token: 128192 '<|reserved_special_token_184|>' is not marked as EOG
llm_load_vocab: control token: 128093 '<|reserved_special_token_85|>' is not marked as EOG
llm_load_vocab: control token: 128150 '<|reserved_special_token_142|>' is not marked as EOG
llm_load_vocab: control token: 128222 '<|reserved_special_token_214|>' is not marked as EOG
llm_load_vocab: control token: 128233 '<|reserved_special_token_225|>' is not marked as EOG
llm_load_vocab: control token: 128220 '<|reserved_special_token_212|>' is not marked as EOG
llm_load_vocab: control token: 128034 '<|reserved_special_token_26|>' is not marked as EOG
llm_load_vocab: control token: 128033 '<|reserved_special_token_25|>' is not marked as EOG
llm_load_vocab: control token: 128253 '<|reserved_special_token_245|>' is not marked as EOG
llm_load_vocab: control token: 128195 '<|reserved_special_token_187|>' is not marked as EOG
llm_load_vocab: control token: 128099 '<|reserved_special_token_91|>' is not marked as EOG
llm_load_vocab: control token: 128189 '<|reserved_special_token_181|>' is not marked as EOG
llm_load_vocab: control token: 128210 '<|reserved_special_token_202|>' is not marked as EOG
llm_load_vocab: control token: 128174 '<|reserved_special_token_166|>' is not marked as EOG
llm_load_vocab: control token: 128083 '<|reserved_special_token_75|>' is not marked as EOG
llm_load_vocab: control token: 128080 '<|reserved_special_token_72|>' is not marked as EOG
llm_load_vocab: control token: 128104 '<|reserved_special_token_96|>' is not marked as EOG
llm_load_vocab: control token: 128082 '<|reserved_special_token_74|>' is not marked as EOG
llm_load_vocab: control token: 128219 '<|reserved_special_token_211|>' is not marked as EOG
llm_load_vocab: control token: 128017 '<|reserved_special_token_9|>' is not marked as EOG
llm_load_vocab: control token: 128050 '<|reserved_special_token_42|>' is not marked as EOG
llm_load_vocab: control token: 128205 '<|reserved_special_token_197|>' is not marked as EOG
llm_load_vocab: control token: 128047 '<|reserved_special_token_39|>' is not marked as EOG
llm_load_vocab: control token: 128164 '<|reserved_special_token_156|>' is not marked as EOG
llm_load_vocab: control token: 128020 '<|reserved_special_token_12|>' is not marked as EOG
llm_load_vocab: control token: 128069 '<|reserved_special_token_61|>' is not marked as EOG
llm_load_vocab: control token: 128245 '<|reserved_special_token_237|>' is not marked as EOG
llm_load_vocab: control token: 128121 '<|reserved_special_token_113|>' is not marked as EOG
llm_load_vocab: control token: 128079 '<|reserved_special_token_71|>' is not marked as EOG
llm_load_vocab: control token: 128037 '<|reserved_special_token_29|>' is not marked as EOG
llm_load_vocab: control token: 128244 '<|reserved_special_token_236|>' is not marked as EOG
llm_load_vocab: control token: 128029 '<|reserved_special_token_21|>' is not marked as EOG
llm_load_vocab: control token: 128221 '<|reserved_special_token_213|>' is not marked as EOG
llm_load_vocab: control token: 128066 '<|reserved_special_token_58|>' is not marked as EOG
llm_load_vocab: control token: 128120 '<|reserved_special_token_112|>' is not marked as EOG
llm_load_vocab: control token: 128014 '<|reserved_special_token_6|>' is not marked as EOG
llm_load_vocab: control token: 128025 '<|reserved_special_token_17|>' is not marked as EOG
llm_load_vocab: control token: 128126 '<|reserved_special_token_118|>' is not marked as EOG
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 281.81 MiB
llm_load_tensors: CUDA0 model buffer size = 4403.49 MiB
time=2024-12-28T00:12:47.291+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.06"
time=2024-12-28T00:12:47.794+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.17"
time=2024-12-28T00:12:48.045+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.21"
time=2024-12-28T00:12:48.298+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.24"
time=2024-12-28T00:12:48.549+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.29"
time=2024-12-28T00:12:48.801+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.32"
time=2024-12-28T00:12:49.053+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.36"
time=2024-12-28T00:12:49.304+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.39"
time=2024-12-28T00:12:49.556+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.44"
time=2024-12-28T00:12:49.808+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.47"
time=2024-12-28T00:12:50.060+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.50"
time=2024-12-28T00:12:50.312+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.55"
time=2024-12-28T00:12:50.564+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.58"
time=2024-12-28T00:12:50.816+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.61"
time=2024-12-28T00:12:51.067+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.65"
time=2024-12-28T00:12:51.319+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.68"
time=2024-12-28T00:12:51.571+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.71"
time=2024-12-28T00:12:51.822+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.76"
time=2024-12-28T00:12:52.074+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.79"
time=2024-12-28T00:12:52.326+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.83"
time=2024-12-28T00:12:52.578+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.87"
time=2024-12-28T00:12:52.830+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.91"
time=2024-12-28T00:12:53.081+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.94"
time=2024-12-28T00:12:53.333+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.98"
time=2024-12-28T00:12:53.585+08:00 level=DEBUG source=server.go:600 msg="model load progress 1.00"
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
time=2024-12-28T00:12:53.837+08:00 level=INFO source=server.go:594 msg="llama runner started in 11.83 seconds"
time=2024-12-28T00:12:53.837+08:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
[GIN] 2024/12/28 - 00:12:53 | 200 | 11.979894799s | 127.0.0.1 | POST "/api/generate"
time=2024-12-28T00:12:53.838+08:00 level=DEBUG source=sched.go:466 msg="context for request finished"
time=2024-12-28T00:12:53.838+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s
time=2024-12-28T00:12:53.838+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0
time=2024-12-28T00:12:57.763+08:00 level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2024-12-28T00:12:57.764+08:00 level=DEBUG source=routes.go:1542 msg="chat request" images=0 prompt="<|start_header_id|>user<|end_header_id|>\n\nhello<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2024-12-28T00:12:57.766+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=11 used=0 remaining=11
SIGSEGV: segmentation violation
PC=0x7f399000ac23 m=4 sigcode=1 addr=0x1c
signal arrived during cgo execution

goroutine 8 gp=0xc0001fc1c0 m=4 mp=0xc0000cd508 [syscall]:
runtime.cgocall(0x558fdc4657d0, 0xc0000ddb90)
runtime/cgocall.go:167 +0x4b fp=0xc0000ddb68 sp=0xc0000ddb30 pc=0x558fdc219b2b
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7f391d961b80, {0x1, 0x7f393039c270, 0x0, 0x0, 0x7f3930410520, 0x7f3930412530, 0x7f39303a9ff0, 0x7f391d970b80})
_cgo_gotypes.go:556 +0x4f fp=0xc0000ddb90 sp=0xc0000ddb68 pc=0x558fdc2c3baf
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x558fdc460f0b?, 0x7f391d961b80?)
github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc0000ddc80 sp=0xc0000ddb90 pc=0x558fdc2c6475
github.com/ollama/ollama/llama.(*Context).Decode(0xc00011e170?, 0x0?)
github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc0000ddcc8 sp=0xc0000ddc80 pc=0x558fdc2c62f3
github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019e1b0, 0xc0001121e0, 0xc0000ddf20)
github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc0000ddee0 sp=0xc0000ddcc8 pc=0x558fdc45fbdf
github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019e1b0, {0x558fdc85ede0, 0xc0001fa050})
github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc0000ddfb8 sp=0xc0000ddee0 pc=0x558fdc45f615
github.com/ollama/ollama/llama/runner.Execute.gowrap2()
github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc0000ddfe0 sp=0xc0000ddfb8 pc=0x558fdc464628
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ddfe8 sp=0xc0000ddfe0 pc=0x558fdc227561
created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:984 +0xde5

goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc00004b7b0 sp=0xc00004b790 pc=0x558fdc21f92e
runtime.netpollblock(0xc000217f80?, 0xdc1b8186?, 0x8f?)
runtime/netpoll.go:575 +0xf7 fp=0xc00004b7e8 sp=0xc00004b7b0 pc=0x558fdc1e4697
internal/poll.runtime_pollWait(0x7f3948ebafd0, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc00004b808 sp=0xc00004b7e8 pc=0x558fdc21ec25
internal/poll.(*pollDesc).wait(0xc0001f6100?, 0x2c?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00004b830 sp=0xc00004b808 pc=0x558fdc274a67
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001f6100)
internal/poll/fd_unix.go:620 +0x295 fp=0xc00004b8d8 sp=0xc00004b830 pc=0x558fdc275fd5
net.(*netFD).accept(0xc0001f6100)
net/fd_unix.go:172 +0x29 fp=0xc00004b990 sp=0xc00004b8d8 pc=0x558fdc2ee969
net.(*TCPListener).accept(0xc0000f4700)
net/tcpsock_posix.go:159 +0x1e fp=0xc00004b9e0 sp=0xc00004b990 pc=0x558fdc2fefbe
net.(*TCPListener).Accept(0xc0000f4700)
net/tcpsock.go:372 +0x30 fp=0xc00004ba10 sp=0xc00004b9e0 pc=0x558fdc2fe2f0
net/http.(*onceCloseListener).Accept(0xc000212000?)
<autogenerated>:1 +0x24 fp=0xc00004ba28 sp=0xc00004ba10 pc=0x558fdc43cec4
net/http.(*Server).Serve(0xc0001f44b0, {0x558fdc85e7f8, 0xc0000f4700})
net/http/server.go:3330 +0x30c fp=0xc00004bb58 sp=0xc00004ba28 pc=0x558fdc42ec0c
github.com/ollama/ollama/llama/runner.Execute({0xc000016130?, 0x558fdc2271bc?, 0x0?})
github.com/ollama/ollama/llama/runner/runner.go:1005 +0x11a9 fp=0xc00004bef8 sp=0xc00004bb58 pc=0x558fdc464309
main.main()
github.com/ollama/ollama/cmd/runner/main.go:11 +0x54 fp=0xc00004bf50 sp=0xc00004bef8 pc=0x558fdc465294
runtime.main()
runtime/proc.go:272 +0x29d fp=0xc00004bfe0 sp=0xc00004bf50 pc=0x558fdc1ebc7d
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00004bfe8 sp=0xc00004bfe0 pc=0x558fdc227561

goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc0000c6fa8 sp=0xc0000c6f88 pc=0x558fdc21f92e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.forcegchelper()
runtime/proc.go:337 +0xb8 fp=0xc0000c6fe0 sp=0xc0000c6fa8 pc=0x558fdc1ebfb8
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c6fe8 sp=0xc0000c6fe0 pc=0x558fdc227561
created by runtime.init.7 in goroutine 1
runtime/proc.go:325 +0x1a

goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc0000c7780 sp=0xc0000c7760 pc=0x558fdc21f92e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.bgsweep(0xc000030100)
runtime/mgcsweep.go:277 +0x94 fp=0xc0000c77c8 sp=0xc0000c7780 pc=0x558fdc1d67f4
runtime.gcenable.gowrap1()
runtime/mgc.go:204 +0x25 fp=0xc0000c77e0 sp=0xc0000c77c8 pc=0x558fdc1cb0a5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c77e8 sp=0xc0000c77e0 pc=0x558fdc227561
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
runtime.gopark(0xc000030100?, 0x558fdc73fe60?, 0x1?, 0x0?, 0xc000007340?)
runtime/proc.go:424 +0xce fp=0xc0000c7f78 sp=0xc0000c7f58 pc=0x558fdc21f92e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.(*scavengerState).park(0x558fdca4a060)
runtime/mgcscavenge.go:425 +0x49 fp=0xc0000c7fa8 sp=0xc0000c7f78 pc=0x558fdc1d4229
runtime.bgscavenge(0xc000030100)
runtime/mgcscavenge.go:653 +0x3c fp=0xc0000c7fc8 sp=0xc0000c7fa8 pc=0x558fdc1d479c
runtime.gcenable.gowrap2()
runtime/mgc.go:205 +0x25 fp=0xc0000c7fe0 sp=0xc0000c7fc8 pc=0x558fdc1cb045
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c7fe8 sp=0xc0000c7fe0 pc=0x558fdc227561
created by runtime.gcenable in goroutine 1
runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
runtime.gopark(0xc0000c6648?, 0x558fdc1c15a5?, 0xb0?, 0x1?, 0xc0000061c0?)
runtime/proc.go:424 +0xce fp=0xc0000c6620 sp=0xc0000c6600 pc=0x558fdc21f92e
runtime.runfinq()
runtime/mfinal.go:193 +0x107 fp=0xc0000c67e0 sp=0xc0000c6620 pc=0x558fdc1ca127
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c67e8 sp=0xc0000c67e0 pc=0x558fdc227561
created by runtime.createfing in goroutine 1
runtime/mfinal.go:163 +0x3d

goroutine 6 gp=0xc000007dc0 m=nil [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc0000c8718 sp=0xc0000c86f8 pc=0x558fdc21f92e
runtime.chanrecv(0xc0001800e0, 0x0, 0x1)
runtime/chan.go:639 +0x41c fp=0xc0000c8790 sp=0xc0000c8718 pc=0x558fdc1bad7c
runtime.chanrecv1(0x0?, 0x0?)
runtime/chan.go:489 +0x12 fp=0xc0000c87b8 sp=0xc0000c8790 pc=0x558fdc1ba952
runtime.unique_runtime_registerUniqueMapCleanup.func1(...)
runtime/mgc.go:1781
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
runtime/mgc.go:1784 +0x2f fp=0xc0000c87e0 sp=0xc0000c87b8 pc=0x558fdc1cdf0f
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c87e8 sp=0xc0000c87e0 pc=0x558fdc227561
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
runtime/mgc.go:1779 +0x96

goroutine 18 gp=0xc000218000 m=nil [select]:
runtime.gopark(0xc00031da68?, 0x2?, 0xd?, 0xfe?, 0xc00031d834?)
runtime/proc.go:424 +0xce fp=0xc00031d6a0 sp=0xc00031d680 pc=0x558fdc21f92e
runtime.selectgo(0xc00031da68, 0xc00031d830, 0xc000146000?, 0x0, 0x1?, 0x1)
runtime/select.go:335 +0x7a5 fp=0xc00031d7c8 sp=0xc00031d6a0 pc=0x558fdc1fdb85
github.com/ollama/ollama/llama/runner.(*Server).completion(0xc00019e1b0, {0x558fdc85e978, 0xc0001d6b60}, 0xc0001def00)
github.com/ollama/ollama/llama/runner/runner.go:696 +0xa86 fp=0xc00031dac0 sp=0xc00031d7c8 pc=0x558fdc461a26
github.com/ollama/ollama/llama/runner.(*Server).completion-fm({0x558fdc85e978?, 0xc0001d6b60?}, 0x558fdc432f07?)
<autogenerated>:1 +0x36 fp=0xc00031daf0 sp=0xc00031dac0 pc=0x558fdc464ed6
net/http.HandlerFunc.ServeHTTP(0xc0001d60e0?, {0x558fdc85e978?, 0xc0001d6b60?}, 0x0?)
net/http/server.go:2220 +0x29 fp=0xc00031db18 sp=0xc00031daf0 pc=0x558fdc42bac9
net/http.(*ServeMux).ServeHTTP(0x558fdc1c15a5?, {0x558fdc85e978, 0xc0001d6b60}, 0xc0001def00)
net/http/server.go:2747 +0x1ca fp=0xc00031db68 sp=0xc00031db18 pc=0x558fdc42d96a
net/http.serverHandler.ServeHTTP({0x558fdc85da30?}, {0x558fdc85e978?, 0xc0001d6b60?}, 0x6?)
net/http/server.go:3210 +0x8e fp=0xc00031db98 sp=0xc00031db68 pc=0x558fdc43486e
net/http.(*conn).serve(0xc000212000, {0x558fdc85eda8, 0xc00018ef60})
net/http/server.go:2092 +0x5d0 fp=0xc00031dfb8 sp=0xc00031db98 pc=0x558fdc42a6f0
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3360 +0x28 fp=0xc00031dfe0 sp=0xc00031dfb8 pc=0x558fdc42f008
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00031dfe8 sp=0xc00031dfe0 pc=0x558fdc227561
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3360 +0x485

goroutine 69 gp=0xc0002c8a80 m=nil [IO wait]:
runtime.gopark(0x558fdc1c5a85?, 0x0?, 0x0?, 0x0?, 0xb?)
runtime/proc.go:424 +0xce fp=0xc00012f5a8 sp=0xc00012f588 pc=0x558fdc21f92e
runtime.netpollblock(0x558fdc25b158?, 0xdc1b8186?, 0x8f?)
runtime/netpoll.go:575 +0xf7 fp=0xc00012f5e0 sp=0xc00012f5a8 pc=0x558fdc1e4697
internal/poll.runtime_pollWait(0x7f3948ebaeb8, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc00012f600 sp=0xc00012f5e0 pc=0x558fdc21ec25
internal/poll.(*pollDesc).wait(0xc000210000?, 0xc000204101?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00012f628 sp=0xc00012f600 pc=0x558fdc274a67
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000210000, {0xc000204101, 0x1, 0x1})
internal/poll/fd_unix.go:165 +0x27a fp=0xc00012f6c0 sp=0xc00012f628 pc=0x558fdc2755ba
net.(*netFD).Read(0xc000210000, {0xc000204101?, 0xc00012f748?, 0x558fdc220fd0?})
net/fd_posix.go:55 +0x25 fp=0xc00012f708 sp=0xc00012f6c0 pc=0x558fdc2ed885
net.(*conn).Read(0xc000206008, {0xc000204101?, 0x0?, 0xc0002040f8?})
net/net.go:189 +0x45 fp=0xc00012f750 sp=0xc00012f708 pc=0x558fdc2f7285
net.(*TCPConn).Read(0x558fdca0ad80?, {0xc000204101?, 0x0?, 0x0?})
<autogenerated>:1 +0x25 fp=0xc00012f780 sp=0xc00012f750 pc=0x558fdc304325
net/http.(*connReader).backgroundRead(0xc0002040f0)
net/http/server.go:690 +0x37 fp=0xc00012f7c8 sp=0xc00012f780 pc=0x558fdc425077
net/http.(*connReader).startBackgroundRead.gowrap2()
net/http/server.go:686 +0x25 fp=0xc00012f7e0 sp=0xc00012f7c8 pc=0x558fdc424fa5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00012f7e8 sp=0xc00012f7e0 pc=0x558fdc227561
created by net/http.(*connReader).startBackgroundRead in goroutine 18
net/http/server.go:686 +0xb6

rax 0x7f393c0fcad4
rbx 0x7f3947e78ed0
rcx 0x7f393c0fcad4
rdx 0x4
rdi 0x7f393c0fcad4
rsi 0x1c
rbp 0x7f3947e78e20
rsp 0x7f3947e78dc8
r8 0x7f393c0fcac0
r9 0x1
r10 0x0
r11 0x7f393a0fbbd0
r12 0x7f393827eca0
r13 0x7f3938288680
r14 0x7f393827ed70
r15 0x7f3938287fb0
rip 0x7f399000ac23
rflags 0x10246
cs 0x33
fs 0x0
gs 0x0
time=2024-12-28T00:13:01.403+08:00 level=DEBUG source=server.go:1080 msg="stopping llama server"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=server.go:1086 msg="waiting for llama server to exit"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=server.go:1090 msg="llama server stopped"
[GIN] 2024/12/28 - 00:13:01 | 200 | 3.682007471s | 127.0.0.1 | POST "/api/chat"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=sched.go:407 msg="context for request finished"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.5.4
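
For reference, the request that preceded the SIGSEGV can be replayed directly against the API, without the CLI. Below is a minimal sketch (not part of the original report) that posts the same chat payload shown in the debug log; it assumes the default listen address 127.0.0.1:11434 from the server config above and should exercise the same llama_decode path.

```go
// Minimal repro sketch (not from the original report): replays the
// /api/chat request seen in the log against a running Ollama server.
// Assumes the default listen address from the server config above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Same payload the CLI sent: one user message, "hello".
	payload, err := json.Marshal(map[string]any{
		"model": "llama3.1",
		"messages": []map[string]string{
			{"role": "user", "content": "hello"},
		},
		// Ask for a single JSON response instead of a stream;
		// assumption: the crash does not depend on streaming.
		"stream": false,
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://127.0.0.1:11434/api/chat",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err) // e.g. "unexpected EOF" if the runner dies mid-response
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```

If the runner segfaults the same way while this runs, the client should see the truncated response reported above as "unexpected EOF".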

modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0 ### OS Linux ### GPU Nvidia ### CPU AMD ### Ollama version 0.5.4
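
The context figures at the top of this log are worth spelling out: the runner allocates n_ctx = 8192 across n_seq_max = 4 parallel slots, so each sequence gets n_ctx_per_seq = 2048 tokens, hence the warning that the model's 131072-token training context will not be used. A minimal sketch of that arithmetic, assuming Ollama's defaults of num_ctx = 2048 per request and four parallel slots:

```python
# Illustrative arithmetic only -- the values are read from the log above,
# not from Ollama's actual scheduler code.

num_ctx = 2048       # default per-request context window
num_parallel = 4     # OLLAMA_NUM_PARALLEL; the log shows n_seq_max = 4

n_ctx = num_ctx * num_parallel         # total KV cache the runner sizes
n_ctx_per_seq = n_ctx // num_parallel  # context available to one sequence

print(n_ctx)          # 8192, matches "n_ctx = 8192" in the log
print(n_ctx_per_seq)  # 2048, matches "n_ctx_per_seq = 2048"
```

Any prompt longer than the per-sequence window has to be truncated to fit a single slot, which is relevant to the decode failures reported in the comments below.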
GiteaMirror added the bug label 2026-04-28 20:57:13 -05:00

@RutaTang commented on GitHub (Dec 28, 2024):

I got similar issues on both the ollama Docker image and my M2 Mac. The issue has occurred several times while running models.

@Yuchen-Labnote commented on GitHub (Dec 28, 2024):

> I got similar issues on both the ollama Docker image and my M2 Mac. The issue has occurred several times while running models.

Have you resolved this issue? It has been troubling me for several days (T^T).

@RutaTang commented on GitHub (Dec 28, 2024):

> > I got similar issues on both the ollama Docker image and my M2 Mac.
>
> Have you resolved this issue? It has been troubling me for several days (T^T).

Me as well. I did not find a way to work around it. I had not encountered this kind of issue before, so I am trying a previous version of the ollama Docker image.

@rick-github commented on GitHub (Dec 28, 2024):

It's quite likely that the issues are different. One is a SEGV on a Linux system with an NVIDIA A40; the other is an unspecified issue on an M2 Mac. [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) from the Mac will determine how similar they are.

Have you tried an older version of ollama to see if the errors persist?
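
For anyone comparing reports across machines here, a small sketch for snapshotting the server state next to each failure; `/api/version` and `/api/ps` are documented Ollama endpoints, and the host below is just the default local one:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:11434"

# Record the server version and the currently loaded runners so failures on
# different setups (Linux/A40 vs. M2 Mac) can be compared like-for-like.
def snapshot() -> dict:
    out = {}
    for path in ("/api/version", "/api/ps"):
        with urllib.request.urlopen(BASE + path) as resp:
            out[path] = json.load(resp)
    return out

if __name__ == "__main__":
    print(json.dumps(snapshot(), indent=2))
```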

@RutaTang commented on GitHub (Dec 28, 2024):

I tried 0.5.1: it works and I did not get this error. However, with this version (0.5.1) I sometimes get **ResponseError: POST predict: Post "http://127.0.0.1:40599/completion": EOF** with models such as codestral:22b-v0.1-q4_0.
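
Since the EOF is intermittent, a client-side retry is a common stopgap while the crash itself is tracked down. A sketch under those assumptions (the model name is the one from this comment; the retry policy is illustrative, and this papers over the bug rather than fixing it):

```python
import http.client
import json
import time
import urllib.error
import urllib.request

# Stopgap: retry /api/generate when the runner dies mid-request and the
# connection drops with EOF. This does not fix the underlying crash.
def generate_with_retry(prompt: str, retries: int = 3) -> str:
    body = json.dumps({
        "model": "codestral:22b-v0.1-q4_0",
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    for attempt in range(1, retries + 1):
        try:
            req = urllib.request.Request(
                "http://127.0.0.1:11434/api/generate",
                data=body,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["response"]
        except (urllib.error.URLError, http.client.RemoteDisconnected) as err:
            print(f"attempt {attempt} failed: {err}")
            time.sleep(2 * attempt)  # give the server time to reload the model
    raise RuntimeError("model kept crashing; check the server logs")
```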

@rick-github commented on GitHub (Dec 28, 2024):

Server logs.

@RutaTang commented on GitHub (Dec 28, 2024):

> OLLAMA_DEBUG=1

I got this error after running the model for a while.

time=2024-12-28T23:22:45.435+08:00 level=WARN source=runner.go:129 msg="truncating input prompt" limit=2048 prompt=2669 keep=5 new=2048
llama_decode: failed to decode, ret = 1
llama_decode: failed to decode, ret = 1
panic: failed to decode batch: could not find a kv cache slot

goroutine 67 [running]:
github.com/ollama/ollama/llama/runner.(*Server).run(0x140001b9560, {0x105ec20f0, 0x14000624640})
	github.com/ollama/ollama/llama/runner/runner.go:344 +0x1d8
created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
	github.com/ollama/ollama/llama/runner/runner.go:984 +0xba8
[GIN] 2024/12/28 - 23:22:49 | 500 |  4.315560083s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/12/28 - 23:22:49 | 500 |         2m25s |       127.0.0.1 | POST     "/api/chat"

OS
macOS Sequoia

Chip
Apple M2 Ultra

Ollama version
0.5.4

Model:
codestral:22b-v0.1-q4_0

Error:
ResponseError: an error was encountered while running the model: unexpected EOF
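
The sequence in this log ("truncating input prompt" cutting 2669 tokens down to the 2048-token slot, then `failed to decode batch: could not find a kv cache slot`) lines up with the per-slot context limit noted under the original report. Until a fix ships, one hedged workaround is to request a larger context window per call; `options.num_ctx` on `/api/chat` is Ollama's documented knob, while the model and the value 8192 below are only illustrative:

```python
import json
import urllib.request

# Hedged workaround sketch: ask for a larger per-request context window so a
# long prompt fits in one KV cache slot. num_ctx=8192 is illustrative and
# raises VRAM use, since the runner multiplies it by the parallel slot count.
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",
    data=json.dumps({
        "model": "codestral:22b-v0.1-q4_0",
        "messages": [{"role": "user", "content": "hello"}],
        "options": {"num_ctx": 8192},
        "stream": False,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["message"]["content"])
```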

@rick-github commented on GitHub (Dec 28, 2024):

This might be https://github.com/ollama/ollama/issues/7949, in which case it will be fixed in the next release.

@Yuchen-Labnote commented on GitHub (Dec 29, 2024):

> It's quite likely that the issues are different. One is a SEGV on a Linux system with an NVIDIA A40; the other is an unspecified issue on an M2 Mac. [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) from the Mac will determine how similar they are.
>
> Have you tried an older version of ollama to see if the errors persist?

I tried version 0.4.7, and it had the same error as 0.5.4.
I also tried version 0.3.10, but it still reported an error. However, the error displayed was different.

0.3.10:

> hello
> Hello! HowError: an unknown error was encountered while running the model

With OLLAMA_DEBUG=1, I got this:

2024/12/29 10:47:17 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-12-29T10:47:17.775+08:00 level=INFO source=images.go:753 msg="total blobs: 5"
time=2024-12-29T10:47:17.775+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-29T10:47:17.775+08:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.10)"
time=2024-12-29T10:47:17.776+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama641188210/runners
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/libggml.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/libllama.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/libggml.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/libllama.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/libggml.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/libllama.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libggml.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libllama.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v12 file=build/linux/x86_64/cuda_v12/bin/libggml.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v12 file=build/linux/x86_64/cuda_v12/bin/libllama.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v12 file=build/linux/x86_64/cuda_v12/bin/ollama_llama_server.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/libggml.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/libllama.so.gz
time=2024-12-29T10:47:17.777+08:00 level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/ollama_llama_server.gz
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu/ollama_llama_server
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu_avx/ollama_llama_server
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu_avx2/ollama_llama_server
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cuda_v11/ollama_llama_server
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cuda_v12/ollama_llama_server
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/rocm_v60102/ollama_llama_server
time=2024-12-29T10:47:31.520+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx cpu_avx2]"
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-12-29T10:47:31.520+08:00 level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
time=2024-12-29T10:47:31.520+08:00 level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /opt/orion/orion_runtime/gpu/cuda/libcuda.so* /opt/orion/orion_runtime/gpu/cuda/libcuda.so* /opt/orion/orion_runtime/lib/libcuda.so* /usr/lib64/libcuda.so* /usr/lib/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-12-29T10:47:31.523+08:00 level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[/opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so]
CUDA driver version: 12.4
time=2024-12-29T10:47:31.684+08:00 level=DEBUG source=gpu.go:119 msg="detected GPUs" count=1 library=/opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
[GPU-00000000-0000-000a-02aa-6726e8000000] CUDA totalMem 22889 mb
[GPU-00000000-0000-000a-02aa-6726e8000000] CUDA freeMem 22889 mb
[GPU-00000000-0000-000a-02aa-6726e8000000] Compute Capability 8.6
time=2024-12-29T10:47:35.744+08:00 level=DEBUG source=amd_linux.go:371 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-12-29T10:47:35.744+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-00000000-0000-000a-02aa-6726e8000000 library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA A40" total="22.4 GiB" available="22.4 GiB"
[GIN] 2024/12/29 - 10:48:14 | 200 | 67.6µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/29 - 10:48:14 | 200 | 1.414158ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/12/29 - 10:48:39 | 200 | 80.131µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/29 - 10:48:39 | 200 | 2.648481ms | 127.0.0.1 | DELETE "/api/delete"
[GIN] 2024/12/29 - 10:48:50 | 200 | 63.655µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/29 - 10:48:50 | 200 | 43.98012ms | 127.0.0.1 | POST "/api/show"
time=2024-12-29T10:48:50.317+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="881.5 GiB" before.free="832.6 GiB" before.free_swap="0 B" now.total="881.5 GiB" now.free="830.6 GiB" now.free_swap="0 B"
CUDA driver version: 12.4
time=2024-12-29T10:48:50.378+08:00 level=DEBUG source=gpu.go:407 msg="updating cuda memory data" gpu=GPU-00000000-0000-000a-02aa-6726e8000000 name="NVIDIA A40" overhead="0 B" before.total="22.4 GiB" before.free="22.4 GiB" now.total="22.4 GiB" now.free="22.4 GiB" now.used="0 B"
releasing cuda driver library
time=2024-12-29T10:48:50.378+08:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x8195c0 gpu_count=1
time=2024-12-29T10:48:50.452+08:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2024-12-29T10:48:50.452+08:00 level=DEBUG source=memory.go:103 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2024-12-29T10:48:50.453+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 gpu=GPU-00000000-0000-000a-02aa-6726e8000000 parallel=4 available=24000856064 required="6.5 GiB"
time=2024-12-29T10:48:50.453+08:00 level=INFO source=server.go:101 msg="system memory" total="881.5 GiB" free="830.6 GiB" free_swap="0 B"
time=2024-12-29T10:48:50.453+08:00 level=DEBUG source=memory.go:103 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2024-12-29T10:48:50.453+08:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="6.5 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.5 GiB]" memory.weights.total="4.9 GiB" memory.weights.repeating="4.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-12-29T10:48:50.453+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu/ollama_llama_server
time=2024-12-29T10:48:50.453+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu_avx/ollama_llama_server
time=2024-12-29T10:48:50.453+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu_avx2/ollama_llama_server
time=2024-12-29T10:48:50.453+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cuda_v11/ollama_llama_server
time=2024-12-29T10:48:50.453+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cuda_v12/ollama_llama_server
time=2024-12-29T10:48:50.453+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/rocm_v60102/ollama_llama_server
time=2024-12-29T10:48:50.454+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu/ollama_llama_server
time=2024-12-29T10:48:50.454+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu_avx/ollama_llama_server
time=2024-12-29T10:48:50.454+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cpu_avx2/ollama_llama_server
time=2024-12-29T10:48:50.454+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cuda_v11/ollama_llama_server
time=2024-12-29T10:48:50.454+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/cuda_v12/ollama_llama_server
time=2024-12-29T10:48:50.454+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama641188210/runners/rocm_v60102/ollama_llama_server
time=2024-12-29T10:48:50.457+08:00 level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama641188210/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --verbose --parallel 4 --port 45885"
time=2024-12-29T10:48:50.457+08:00 level=DEBUG source=server.go:408 msg=subprocess environment="[CUDA_VERSION=12.1.0 LD_LIBRARY_PATH=/usr/lib/ollama:/tmp/ollama641188210/runners/cuda_v12:/opt/orion/orion_runtime/gpu/cuda:/opt/orion/orion_runtime/gpu/cuda:/opt/orion/orion_runtime/lib:/usr/lib64:/usr/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 PATH=/root/miniconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin CUDA_VISIBLE_DEVICES=GPU-00000000-0000-000a-02aa-6726e8000000]"
time=2024-12-29T10:48:50.459+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-12-29T10:48:50.459+08:00 level=INFO source=server.go:590 msg="waiting for llama runner to start responding"
time=2024-12-29T10:48:50.459+08:00 level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="8962422" tid="139921346494464" timestamp=1735440533
INFO [main] system info | n_threads=48 n_threads_batch=48 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139921346494464" timestamp=1735440533 total_threads=48
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="45885" tid="139921346494464" timestamp=1735440533
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 8B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 32
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 4096
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 15
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2024-12-29T10:48:53.728+08:00 level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 66 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A40, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 281.81 MiB
llm_load_tensors: CUDA0 buffer size = 4403.50 MiB
time=2024-12-29T10:49:07.817+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.06"
time=2024-12-29T10:49:19.653+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.15"
time=2024-12-29T10:49:21.417+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.16"
time=2024-12-29T10:49:22.675+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.17"
time=2024-12-29T10:49:23.682+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.18"
time=2024-12-29T10:49:25.696+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.19"
time=2024-12-29T10:49:26.953+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.20"
time=2024-12-29T10:49:27.708+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.21"
time=2024-12-29T10:49:28.463+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.22"
time=2024-12-29T10:49:29.721+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.23"
time=2024-12-29T10:49:30.224+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.24"
time=2024-12-29T10:49:33.243+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.25"
time=2024-12-29T10:49:33.746+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.26"
time=2024-12-29T10:49:34.500+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.27"
time=2024-12-29T10:49:35.256+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.28"
time=2024-12-29T10:49:36.010+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.29"
time=2024-12-29T10:49:37.520+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.30"
time=2024-12-29T10:49:38.785+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.31"
time=2024-12-29T10:49:39.540+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.32"
time=2024-12-29T10:49:43.575+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.33"
time=2024-12-29T10:49:44.582+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.34"
time=2024-12-29T10:49:46.093+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.35"
time=2024-12-29T10:49:47.100+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.36"
time=2024-12-29T10:49:48.610+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.37"
time=2024-12-29T10:49:50.623+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.38"
time=2024-12-29T10:49:53.642+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.39"
time=2024-12-29T10:49:56.666+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.40"
time=2024-12-29T10:50:00.188+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.41"
time=2024-12-29T10:50:03.711+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.42"
time=2024-12-29T10:50:04.969+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.43"
time=2024-12-29T10:50:06.479+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.44"
time=2024-12-29T10:50:07.736+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.45"
time=2024-12-29T10:50:10.000+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.46"
time=2024-12-29T10:50:10.755+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.47"
time=2024-12-29T10:50:12.015+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.48"
time=2024-12-29T10:50:12.769+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.49"
time=2024-12-29T10:50:14.279+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.50"
time=2024-12-29T10:50:18.559+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.51"
time=2024-12-29T10:50:20.825+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.52"
time=2024-12-29T10:50:22.584+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.53"
time=2024-12-29T10:50:25.601+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.54"
time=2024-12-29T10:50:27.613+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.55"
time=2024-12-29T10:50:30.130+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.56"
time=2024-12-29T10:50:32.646+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.57"
time=2024-12-29T10:50:33.904+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.58"
time=2024-12-29T10:50:37.426+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.59"
time=2024-12-29T10:50:38.683+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.60"
time=2024-12-29T10:50:40.697+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.61"
time=2024-12-29T10:50:41.702+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.62"
time=2024-12-29T10:50:43.212+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.63"
time=2024-12-29T10:50:44.720+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.64"
time=2024-12-29T10:50:46.481+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.65"
time=2024-12-29T10:50:48.998+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.66"
time=2024-12-29T10:50:53.025+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.67"
time=2024-12-29T10:50:55.039+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.68"
time=2024-12-29T10:50:59.316+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.69"
time=2024-12-29T10:51:00.322+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.70"
time=2024-12-29T10:51:02.335+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.71"
time=2024-12-29T10:51:05.606+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.72"
time=2024-12-29T10:51:06.112+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.73"
time=2024-12-29T10:51:06.867+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.74"
time=2024-12-29T10:51:08.128+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.75"
time=2024-12-29T10:51:08.632+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.76"
time=2024-12-29T10:51:09.891+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.77"
time=2024-12-29T10:51:11.149+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.78"
time=2024-12-29T10:51:11.652+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.79"
time=2024-12-29T10:51:14.167+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.80"
time=2024-12-29T10:51:14.922+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.81"
time=2024-12-29T10:51:15.928+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.82"
time=2024-12-29T10:51:16.682+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.83"
time=2024-12-29T10:51:17.436+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.84"
time=2024-12-29T10:51:18.442+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.85"
time=2024-12-29T10:51:19.700+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.86"
time=2024-12-29T10:51:20.958+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.87"
time=2024-12-29T10:51:22.216+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.88"
time=2024-12-29T10:51:23.222+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.89"
time=2024-12-29T10:51:24.731+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.90"
time=2024-12-29T10:51:26.240+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.91"
time=2024-12-29T10:51:27.245+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.92"
time=2024-12-29T10:51:28.755+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.93"
time=2024-12-29T10:51:30.517+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.94"
time=2024-12-29T10:51:31.524+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.95"
time=2024-12-29T10:51:32.781+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.96"
time=2024-12-29T10:51:34.792+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.97"
time=2024-12-29T10:51:35.295+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.98"
time=2024-12-29T10:51:37.560+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.99"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB
time=2024-12-29T10:51:38.818+08:00 level=DEBUG source=server.go:635 msg="model load progress 1.00"
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
time=2024-12-29T10:51:39.070+08:00 level=DEBUG source=server.go:638 msg="model load completed, waiting for server to become available" status="llm server loading model"
DEBUG [initialize] initializing slots | n_slots=4 tid="139921346494464" timestamp=1735440710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="139921346494464" timestamp=1735440710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=1 tid="139921346494464" timestamp=1735440710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=2 tid="139921346494464" timestamp=1735440710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=3 tid="139921346494464" timestamp=1735440710
INFO [main] model loaded | tid="139921346494464" timestamp=1735440710
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="139921346494464" timestamp=1735440710
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=0 tid="139921346494464" timestamp=1735440710
time=2024-12-29T10:51:50.149+08:00 level=INFO source=server.go:629 msg="llama runner started in 179.69 seconds"
time=2024-12-29T10:51:50.150+08:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
[GIN] 2024/12/29 - 10:51:50 | 200 | 2m59s | 127.0.0.1 | POST "/api/chat"
time=2024-12-29T10:51:50.150+08:00 level=DEBUG source=sched.go:467 msg="context for request finished"
time=2024-12-29T10:51:50.150+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s
time=2024-12-29T10:51:50.151+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0
time=2024-12-29T10:52:10.268+08:00 level=DEBUG source=sched.go:576 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1 tid="139921346494464" timestamp=1735440730
time=2024-12-29T10:52:10.270+08:00 level=DEBUG source=routes.go:1363 msg="chat request" images=0 prompt="<|start_header_id|>user<|end_header_id|>\n\nhekko<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=2 tid="139921346494464" timestamp=1735440730
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=3 tid="139921346494464" timestamp=1735440730
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=12 slot_id=0 task_id=3 tid="139921346494464" timestamp=1735440730
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=3 tid="139921346494464" timestamp=1735440730
time=2024-12-29T10:52:39.524+08:00 level=DEBUG source=server.go:1047 msg="stopping llama server"
time=2024-12-29T10:52:39.524+08:00 level=DEBUG source=server.go:1053 msg="waiting for llama server to exit"
time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=server.go:1057 msg="llama server stopped"
[GIN] 2024/12/29 - 10:52:39 | 200 | 29.283581139s | 127.0.0.1 | POST "/api/chat"
time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=sched.go:408 msg="context for request finished"
time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s
time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0

llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q4_K: 193 tensors llama_model_loader: - type q6_K: 33 tensors llm_load_vocab: special tokens cache size = 256 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 8B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 8.03 B llm_load_print_meta: model size = 4.58 GiB (4.89 BPW) llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA A40, compute capability 8.6, VMM: yes llm_load_tensors: ggml ctx size = 0.27 MiB llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: CPU buffer size = 281.81 MiB llm_load_tensors: CUDA0 buffer size = 4403.50 MiB time=2024-12-29T10:49:07.817+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.06" time=2024-12-29T10:49:19.653+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.15" time=2024-12-29T10:49:21.417+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.16" time=2024-12-29T10:49:22.675+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.17" time=2024-12-29T10:49:23.682+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.18" time=2024-12-29T10:49:25.696+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.19" time=2024-12-29T10:49:26.953+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.20" time=2024-12-29T10:49:27.708+08:00 level=DEBUG 
source=server.go:635 msg="model load progress 0.21" time=2024-12-29T10:49:28.463+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.22" time=2024-12-29T10:49:29.721+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.23" time=2024-12-29T10:49:30.224+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.24" time=2024-12-29T10:49:33.243+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.25" time=2024-12-29T10:49:33.746+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.26" time=2024-12-29T10:49:34.500+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.27" time=2024-12-29T10:49:35.256+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.28" time=2024-12-29T10:49:36.010+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.29" time=2024-12-29T10:49:37.520+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.30" time=2024-12-29T10:49:38.785+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.31" time=2024-12-29T10:49:39.540+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.32" time=2024-12-29T10:49:43.575+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.33" time=2024-12-29T10:49:44.582+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.34" time=2024-12-29T10:49:46.093+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.35" time=2024-12-29T10:49:47.100+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.36" time=2024-12-29T10:49:48.610+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.37" time=2024-12-29T10:49:50.623+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.38" time=2024-12-29T10:49:53.642+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.39" time=2024-12-29T10:49:56.666+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.40" time=2024-12-29T10:50:00.188+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.41" time=2024-12-29T10:50:03.711+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.42" time=2024-12-29T10:50:04.969+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.43" time=2024-12-29T10:50:06.479+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.44" time=2024-12-29T10:50:07.736+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.45" time=2024-12-29T10:50:10.000+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.46" time=2024-12-29T10:50:10.755+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.47" time=2024-12-29T10:50:12.015+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.48" time=2024-12-29T10:50:12.769+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.49" time=2024-12-29T10:50:14.279+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.50" time=2024-12-29T10:50:18.559+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.51" time=2024-12-29T10:50:20.825+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.52" time=2024-12-29T10:50:22.584+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.53" time=2024-12-29T10:50:25.601+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.54" time=2024-12-29T10:50:27.613+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.55" time=2024-12-29T10:50:30.130+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.56" time=2024-12-29T10:50:32.646+08:00 
level=DEBUG source=server.go:635 msg="model load progress 0.57" time=2024-12-29T10:50:33.904+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.58" time=2024-12-29T10:50:37.426+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.59" time=2024-12-29T10:50:38.683+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.60" time=2024-12-29T10:50:40.697+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.61" time=2024-12-29T10:50:41.702+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.62" time=2024-12-29T10:50:43.212+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.63" time=2024-12-29T10:50:44.720+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.64" time=2024-12-29T10:50:46.481+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.65" time=2024-12-29T10:50:48.998+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.66" time=2024-12-29T10:50:53.025+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.67" time=2024-12-29T10:50:55.039+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.68" time=2024-12-29T10:50:59.316+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.69" time=2024-12-29T10:51:00.322+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.70" time=2024-12-29T10:51:02.335+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.71" time=2024-12-29T10:51:05.606+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.72" time=2024-12-29T10:51:06.112+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.73" time=2024-12-29T10:51:06.867+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.74" time=2024-12-29T10:51:08.128+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.75" time=2024-12-29T10:51:08.632+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.76" time=2024-12-29T10:51:09.891+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.77" time=2024-12-29T10:51:11.149+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.78" time=2024-12-29T10:51:11.652+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.79" time=2024-12-29T10:51:14.167+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.80" time=2024-12-29T10:51:14.922+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.81" time=2024-12-29T10:51:15.928+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.82" time=2024-12-29T10:51:16.682+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.83" time=2024-12-29T10:51:17.436+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.84" time=2024-12-29T10:51:18.442+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.85" time=2024-12-29T10:51:19.700+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.86" time=2024-12-29T10:51:20.958+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.87" time=2024-12-29T10:51:22.216+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.88" time=2024-12-29T10:51:23.222+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.89" time=2024-12-29T10:51:24.731+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.90" time=2024-12-29T10:51:26.240+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.91" time=2024-12-29T10:51:27.245+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.92" 
time=2024-12-29T10:51:28.755+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.93" time=2024-12-29T10:51:30.517+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.94" time=2024-12-29T10:51:31.524+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.95" time=2024-12-29T10:51:32.781+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.96" time=2024-12-29T10:51:34.792+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.97" time=2024-12-29T10:51:35.295+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.98" time=2024-12-29T10:51:37.560+08:00 level=DEBUG source=server.go:635 msg="model load progress 0.99" llama_new_context_with_model: n_ctx = 8192 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB time=2024-12-29T10:51:38.818+08:00 level=DEBUG source=server.go:635 msg="model load progress 1.00" llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 2 time=2024-12-29T10:51:39.070+08:00 level=DEBUG source=server.go:638 msg="model load completed, waiting for server to become available" status="llm server loading model" DEBUG [initialize] initializing slots | n_slots=4 tid="139921346494464" timestamp=1735440710 DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="139921346494464" timestamp=1735440710 DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=1 tid="139921346494464" timestamp=1735440710 DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=2 tid="139921346494464" timestamp=1735440710 DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=3 tid="139921346494464" timestamp=1735440710 INFO [main] model loaded | tid="139921346494464" timestamp=1735440710 DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="139921346494464" timestamp=1735440710 DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=0 tid="139921346494464" timestamp=1735440710 time=2024-12-29T10:51:50.149+08:00 level=INFO source=server.go:629 msg="llama runner started in 179.69 seconds" time=2024-12-29T10:51:50.150+08:00 level=DEBUG source=sched.go:463 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 [GIN] 2024/12/29 - 10:51:50 | 200 | 2m59s | 127.0.0.1 | POST "/api/chat" time=2024-12-29T10:51:50.150+08:00 level=DEBUG source=sched.go:467 msg="context for request finished" time=2024-12-29T10:51:50.150+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s time=2024-12-29T10:51:50.151+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0 time=2024-12-29T10:52:10.268+08:00 level=DEBUG 
source=sched.go:576 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1 tid="139921346494464" timestamp=1735440730 time=2024-12-29T10:52:10.270+08:00 level=DEBUG source=routes.go:1363 msg="chat request" images=0 prompt="<|start_header_id|>user<|end_header_id|>\n\nhekko<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=2 tid="139921346494464" timestamp=1735440730 DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=3 tid="139921346494464" timestamp=1735440730 DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=12 slot_id=0 task_id=3 tid="139921346494464" timestamp=1735440730 DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=3 tid="139921346494464" timestamp=1735440730 time=2024-12-29T10:52:39.524+08:00 level=DEBUG source=server.go:1047 msg="stopping llama server" time=2024-12-29T10:52:39.524+08:00 level=DEBUG source=server.go:1053 msg="waiting for llama server to exit" time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=server.go:1057 msg="llama server stopped" [GIN] 2024/12/29 - 10:52:39 | 200 | 29.283581139s | 127.0.0.1 | POST "/api/chat" time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=sched.go:408 msg="context for request finished" time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s time=2024-12-29T10:52:39.525+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0

@rick-github commented on GitHub (Dec 29, 2024):

This log shows no errors, and says that it took 29 seconds to successfully reply to your prompt, which was "hekko", not "hello" as in your extract above.


@Yuchen-Labnote commented on GitHub (Dec 29, 2024):

> This log shows no errors, and says that it took 29 seconds to successfully reply to your prompt, which was "hekko", not "hello" as in your extract above.

Oh, sorry... I copied the wrong log.
This is the new one:

![image](https://github.com/user-attachments/assets/fd2bac4f-356a-4187-a1f9-d82de4559347)

[log.txt](https://github.com/user-attachments/files/18269321/log.txt)


@Yuchen-Labnote commented on GitHub (Dec 29, 2024):

I solved it! When I use only the CPU, Ollama works, so the issue seems to be related to CUDA. There may be two versions of CUDA on my system, or the CUDA version may be incompatible; I'm not sure. However, when I run the following command, everything works fine:

OLLAMA_LLM_LIBRARY=cuda_v11 ollama serve
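
For reference, a minimal sketch of the two configurations that work in my environment. The runner names come from the "Dynamic LLM libraries" line in the log above; using `cpu_avx2` to force a CPU runner is my assumption based on that list (any of the `cpu*` variants should presumably behave the same):

```sh
# Force a specific runner instead of the auto-detected cuda_v12.
# Valid names per the log: [cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx cpu_avx2]

# CPU-only serving (works for me; runner name assumed from the list above):
OLLAMA_LLM_LIBRARY=cpu_avx2 ollama serve

# CUDA 11 runner (also works for me):
OLLAMA_LLM_LIBRARY=cuda_v11 ollama serve
```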


@rick-github commented on GitHub (Dec 29, 2024):

What's the output of `nvidia-smi`?


@Yuchen-Labnote commented on GitHub (Dec 29, 2024):

> What's the output of `nvidia-smi`?

![image](https://github.com/user-attachments/assets/57ff2719-f536-46d7-a358-ea45f94b048b)


@rick-github commented on GitHub (Dec 29, 2024):

Could you also add logs from the session where you set `OLLAMA_LLM_LIBRARY=cuda_v11`?
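
For example, something along these lines (just a sketch; `tee` mirrors the server output to a file while it still prints to the terminal):

```sh
# Capture a full debug log for the forced cuda_v11 run:
OLLAMA_DEBUG=1 OLLAMA_LLM_LIBRARY=cuda_v11 ollama serve 2>&1 | tee serve_log.txt
```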


@Yuchen-Labnote commented on GitHub (Dec 29, 2024):

> Could you also add logs from the session where you set `OLLAMA_LLM_LIBRARY=cuda_v11`?

log: [serve_log.txt](https://github.com/user-attachments/files/18269698/serve_log.txt)

And the result of `nvcc -V` was as follows:

> nvcc: NVIDIA (R) Cuda compiler driver
> Copyright (c) 2005-2023 NVIDIA Corporation
> Built on Mon_Apr__3_17:16:06_PDT_2023
> Cuda compilation tools, release 12.1, V12.1.105
> Build cuda_12.1.r12.1/compiler.32688072_0

I think it might be because two versions of CUDA were installed, which caused a conflict or something similar. When I set `OLLAMA_LLM_LIBRARY=cuda_v12`, I get the same error as before.

After switching to a new Ubuntu system image with CUDA 11.7, it works just fine without having to set `OLLAMA_LLM_LIBRARY=cuda_v11`.
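
In case someone else hits the same thing, here is a quick sketch for spotting side-by-side CUDA installs. These are standard tools; the paths are typical defaults and may differ on other systems:

```sh
# Toolkits installed side by side usually show up as multiple directories:
ls -d /usr/local/cuda*

# Which CUDA libraries the dynamic linker actually resolves:
ldconfig -p | grep -E 'libcuda\.so|libcudart'

# Driver-reported CUDA version vs. toolkit version:
nvidia-smi   # driver side
nvcc -V      # toolkit side
```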


@rick-github commented on GitHub (Dec 30, 2024):

Does your A40 work with CUDA 11.7 and ollama 0.5.4?


@Yuchen-Labnote commented on GitHub (Dec 31, 2024):

I haven’t tried this version, but I guess it should work.


@hchasens commented on GitHub (Mar 10, 2025):

Just got this same error on `ollama version is 0.5.7`.

Reference: github-starred/ollama#51790