[GH-ISSUE #8200] Ollama hangs when running llama3.2 and llama3.2:1b #30994

Closed
opened 2026-04-22 11:05:24 -05:00 by GiteaMirror · 3 comments

Originally created by @pr0fsmith on GitHub (Dec 21, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8200

What is the issue?

After a while of using Ollama, the model becomes completely unresponsive, with no CPU or GPU usage during that time. This happens with both llama3.2 and llama3.2:1b. Here are the logs.
```
Dec 21 00:37:03 olivi ollama[627]: [GIN] 2024/12/21 - 00:37:03 | 200 | 816.217µs | 172.17.0.2 | GET "/api/tags"
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: loaded meta data with 30 key-value pairs and 147 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 (version GGUF V3 (latest))
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 0: general.architecture str = llama
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 1: general.type str = model
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 2: general.name str = Llama 3.2 1B Instruct
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 3: general.finetune str = Instruct
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 4: general.basename str = Llama-3.2
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 5: general.size_label str = 1B
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 8: llama.block_count u32 = 16
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 9: llama.context_length u32 = 131072
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 10: llama.embedding_length u32 = 2048
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 12: llama.attention.head_count u32 = 32
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 16: llama.attention.key_length u32 = 64
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 17: llama.attention.value_length u32 = 64
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 18: general.file_type u32 = 7
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 64
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - kv 29: general.quantization_version u32 = 2
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - type f32: 34 tensors
Dec 21 00:37:04 olivi ollama[627]: llama_model_loader: - type q8_0: 113 tensors
Dec 21 00:37:04 olivi ollama[627]: llm_load_vocab: special tokens cache size = 256
Dec 21 00:37:04 olivi ollama[627]: llm_load_vocab: token to piece cache size = 0.7999 MB
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: format = GGUF V3 (latest)
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: arch = llama
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: vocab type = BPE
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: n_vocab = 128256
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: n_merges = 280147
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: vocab_only = 1
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: model type = ?B
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: model ftype = all F32
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: model params = 1.24 B
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: model size = 1.22 GiB (8.50 BPW)
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: general.name = Llama 3.2 1B Instruct
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: LF token = 128 'Ä'
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Dec 21 00:37:04 olivi ollama[627]: llm_load_print_meta: max token length = 256
Dec 21 00:37:04 olivi ollama[627]: llama_model_load: vocab only - skipping tensors
Dec 21 06:08:13 olivi ollama[627]: [GIN] 2024/12/21 - 06:08:13 | 200 | 5h31m9s | 172.17.0.2 | POST "/api/chat"
Dec 21 07:28:02 olivi ollama[627]: [GIN] 2024/12/21 - 07:28:02 | 200 | 10h12m29s | 172.17.0.2 | POST "/api/chat"
Dec 21 11:09:13 olivi systemd[1]: Stopping ollama.service - Ollama Service...
Dec 21 11:09:13 olivi systemd[1]: ollama.service: Deactivated successfully.
Dec 21 11:09:13 olivi systemd[1]: Stopped ollama.service - Ollama Service.
Dec 21 11:09:13 olivi systemd[1]: ollama.service: Consumed 4min 38.830s CPU time.
Dec 21 11:09:13 olivi systemd[1]: Started ollama.service - Ollama Service.
Dec 21 11:09:13 olivi ollama[34220]: 2024/12/21 11:09:13 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Dec 21 11:09:13 olivi ollama[34220]: time=2024-12-21T11:09:13.418-05:00 level=INFO source=images.go:757 msg="total blobs: 14"
Dec 21 11:09:13 olivi ollama[34220]: time=2024-12-21T11:09:13.418-05:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
Dec 21 11:09:13 olivi ollama[34220]: - using env: export GIN_MODE=release
Dec 21 11:09:13 olivi ollama[34220]: - using code: gin.SetMode(gin.ReleaseMode)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Dec 21 11:09:13 olivi ollama[34220]: time=2024-12-21T11:09:13.419-05:00 level=INFO source=routes.go:1310 msg="Listening on [::]:11434 (version 0.5.4)"
Dec 21 11:09:13 olivi ollama[34220]: time=2024-12-21T11:09:13.419-05:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
Dec 21 11:09:13 olivi ollama[34220]: time=2024-12-21T11:09:13.419-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Dec 21 11:09:13 olivi ollama[34220]: time=2024-12-21T11:09:13.799-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-20ba82ab-bf3d-11ef-bf4a-9e2501b9f7cf library=cuda variant=v12 compute=6.1 driver=12.2 name="GRID P4-4Q" total="4.0 GiB" available="2.9 GiB"
Dec 21 11:09:23 olivi ollama[34220]: [GIN] 2024/12/21 - 11:09:23 | 200 | 62.241µs | 127.0.0.1 | HEAD "/"
Dec 21 11:09:23 olivi ollama[34220]: [GIN] 2024/12/21 - 11:09:23 | 200 | 26.245321ms | 127.0.0.1 | POST "/api/show"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.246-05:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 gpu=GPU-20ba82ab-bf3d-11ef-bf4a-9e2501b9f7cf parallel=4 available=3091042304 required="2.5 GiB"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.588-05:00 level=INFO source=server.go:104 msg="system memory" total="15.2 GiB" free="13.6 GiB" free_swap="975.0 MiB"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.588-05:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=17 layers.offload=17 layers.split="" memory.available="[2.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.5 GiB" memory.required.partial="2.5 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[2.5 GiB]" memory.weights.total="1.2 GiB" memory.weights.repeating="976.1 MiB" memory.weights.nonrepeating="266.2 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="554.3 MiB"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.589-05:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /usr/share/ollama/.ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 --ctx-size 8192 --batch-size 512 --n-gpu-layers 17 --threads 16 --parallel 4 --port 45211"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.590-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.590-05:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.590-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.633-05:00 level=INFO source=runner.go:945 msg="starting go runner"
Dec 21 11:09:24 olivi ollama[34220]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Dec 21 11:09:24 olivi ollama[34220]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Dec 21 11:09:24 olivi ollama[34220]: ggml_cuda_init: found 1 CUDA devices:
Dec 21 11:09:24 olivi ollama[34220]: Device 0: GRID P4-4Q, compute capability 6.1, VMM: no
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.644-05:00 level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=16
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.644-05:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:45211"
Dec 21 11:09:24 olivi ollama[34220]: time=2024-12-21T11:09:24.842-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
Dec 21 11:09:24 olivi ollama[34220]: llama_load_model_from_file: using device CUDA0 (GRID P4-4Q) - 2947 MiB free
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: loaded meta data with 30 key-value pairs and 147 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-74701a8c35f6c8d9a4b91f3f3497643001d63e0c7a84e085bed452548fa88d45 (version GGUF V3 (latest))
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 0: general.architecture str = llama
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 1: general.type str = model
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 2: general.name str = Llama 3.2 1B Instruct
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 3: general.finetune str = Instruct
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 4: general.basename str = Llama-3.2
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 5: general.size_label str = 1B
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 8: llama.block_count u32 = 16
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 9: llama.context_length u32 = 131072
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 10: llama.embedding_length u32 = 2048
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 12: llama.attention.head_count u32 = 32
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 16: llama.attention.key_length u32 = 64
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 17: llama.attention.value_length u32 = 64
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 18: general.file_type u32 = 7
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 64
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - kv 29: general.quantization_version u32 = 2
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - type f32: 34 tensors
Dec 21 11:09:25 olivi ollama[34220]: llama_model_loader: - type q8_0: 113 tensors
Dec 21 11:09:25 olivi ollama[34220]: llm_load_vocab: special tokens cache size = 256
Dec 21 11:09:25 olivi ollama[34220]: llm_load_vocab: token to piece cache size = 0.7999 MB
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: format = GGUF V3 (latest)
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: arch = llama
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: vocab type = BPE
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_vocab = 128256
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_merges = 280147
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: vocab_only = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_ctx_train = 131072
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_embd = 2048
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_layer = 16
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_head = 32
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_head_kv = 8
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_rot = 64
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_swa = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_embd_head_k = 64
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_embd_head_v = 64
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_gqa = 4
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_embd_k_gqa = 512
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_embd_v_gqa = 512
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: f_norm_eps = 0.0e+00
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: f_logit_scale = 0.0e+00
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_ff = 8192
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_expert = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_expert_used = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: causal attn = 1
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: pooling type = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: rope type = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: rope scaling = linear
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: freq_base_train = 500000.0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: freq_scale_train = 1
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: rope_finetuned = unknown
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: ssm_d_conv = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: ssm_d_inner = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: ssm_d_state = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: ssm_dt_rank = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: model type = 1B
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: model ftype = Q8_0
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: model params = 1.24 B
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: model size = 1.22 GiB (8.50 BPW)
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: general.name = Llama 3.2 1B Instruct
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: LF token = 128 'Ä'
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Dec 21 11:09:25 olivi ollama[34220]: llm_load_print_meta: max token length = 256
Dec 21 11:09:26 olivi ollama[34220]: llm_load_tensors: offloading 16 repeating layers to GPU
Dec 21 11:09:26 olivi ollama[34220]: llm_load_tensors: offloading output layer to GPU
Dec 21 11:09:26 olivi ollama[34220]: llm_load_tensors: offloaded 17/17 layers to GPU
Dec 21 11:09:26 olivi ollama[34220]: llm_load_tensors: CPU_Mapped model buffer size = 266.16 MiB
Dec 21 11:09:26 olivi ollama[34220]: llm_load_tensors: CUDA0 model buffer size = 1252.41 MiB
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: n_seq_max = 4
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: n_ctx = 8192
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: n_ctx_per_seq = 2048
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: n_batch = 2048
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: n_ubatch = 512
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: flash_attn = 0
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: freq_base = 500000.0
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: freq_scale = 1
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
Dec 21 11:09:44 olivi ollama[34220]: llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: CUDA_Host output buffer size = 1.99 MiB
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: CUDA0 compute buffer size = 544.00 MiB
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: CUDA_Host compute buffer size = 20.01 MiB
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: graph nodes = 518
Dec 21 11:09:44 olivi ollama[34220]: llama_new_context_with_model: graph splits = 2
Dec 21 11:09:44 olivi ollama[34220]: time=2024-12-21T11:09:44.687-05:00 level=INFO source=server.go:594 msg="llama runner started in 20.10 seconds"
Dec 21 11:09:44 olivi ollama[34220]: [GIN] 2024/12/21 - 11:09:44 | 200 | 20.862331914s | 127.0.0.1 | POST "/api/generate"
```
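Note the durations in the GIN access log above: the two `POST "/api/chat"` requests returned 200 only after 5h31m and 10h12m, which matches requests hanging until the client disconnected or the service was restarted, rather than the model merely being slow. A rough way to spot this pattern in the journal (a sketch, assuming the stock `ollama.service` systemd unit installed by the Linux installer):

```
# Flag chat/generate requests whose logged duration is measured in hours —
# a strong hint that they hung rather than completed normally.
journalctl -u ollama --no-pager | grep -E '\| +[0-9]+h[0-9]+m.*POST "/api/(chat|generate)"'
```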

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4

GiteaMirror added the bug label 2026-04-22 11:05:24 -05:00

@rick-github commented on GitHub (Dec 21, 2024):

Is your GRID P4-4Q licensed?

https://docs.nvidia.com/vgpu/13.0/grid-licensing-user-guide/index.html#software-enforcement-grid-licensing
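On a vGPU guest the driver reports its licensing state, so this can be checked from inside the VM. A minimal sketch (assuming a reasonably recent guest driver; the exact field names vary between vGPU releases):

```
# Print the vGPU licensing section of the full driver status report;
# on an unlicensed guest this typically shows "License Status : Unlicensed".
nvidia-smi -q | grep -i -A 2 'vGPU Software Licensed Product'
```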


@pr0fsmith commented on GitHub (Dec 21, 2024):

Omg. Is that the issue? It's not licensed. Is there a workaround, or do I have to just pay for the license?


@rick-github commented on GitHub (Dec 21, 2024):

I have no idea. See #8023 for a previous issue resolved by licensing.
