[GH-ISSUE #6236] gpu not found in windows #29660

Closed
opened 2026-04-22 08:44:11 -05:00 by GiteaMirror · 18 comments
Owner

Originally created by @showyoung on GitHub (Aug 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6236

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

A few days ago my Ollama could still run on the GPU, but today it suddenly uses only the CPU. I tried reinstalling Ollama, rolling back to an older version, and updating the graphics card driver, but I couldn't get Ollama to run on the GPU. Windows 11 22H2, the graphics card is an RTX 3080, and the CPU is Intel.

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.3.4
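
For anyone hitting the same symptom, a quick sanity check (not part of the original report, just a common first step) is to confirm the NVIDIA driver itself still sees the card before digging into Ollama's logs:

```
REM nvidia-smi ships with the NVIDIA driver; it should list the RTX 3080
REM along with the driver and CUDA versions it reports. If this command
REM fails, the driver install is broken and Ollama's GPU discovery
REM cannot succeed either.
nvidia-smi
```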

GiteaMirror added the nvidia, bug, windows labels 2026-04-22 08:44:12 -05:00
Author
Owner

@showyoung commented on GitHub (Aug 7, 2024):

2024/08/07 23:59:33 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY:cudart64_110.dll OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\documents\ollamaModel OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-07T23:59:33.871+08:00 level=INFO source=images.go:781 msg="total blobs: 30"
time=2024-08-07T23:59:33.909+08:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-07T23:59:33.911+08:00 level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.4)"
time=2024-08-07T23:59:33.917+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v6.1]"
time=2024-08-07T23:59:33.917+08:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-07T23:59:33.991+08:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-08-07T23:59:33.993+08:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="127.7 GiB" available="119.6 GiB"
[GIN] 2024/08/08 - 00:00:24 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:00:24 | 200 | 43.5509ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/08/08 - 00:01:28 | 200 | 224.5µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:01:28 | 200 | 65.535ms | 127.0.0.1 | POST "/api/show"
time=2024-08-08T00:01:28.149+08:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[119.4 GiB]" memory.required.full="9.2 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[9.2 GiB]" memory.weights.total="7.9 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="532.3 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-08T00:01:28.153+08:00 level=INFO source=server.go:171 msg="Invalid OLLAMA_LLM_LIBRARY cudart64_110.dll - not found"
time=2024-08-08T00:01:28.163+08:00 level=INFO source=server.go:392 msg="starting llama server" cmd="C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe --model C:\documents\ollamaModel\blobs\sha256-fbd07611f6a0c943376f48b65e03edaefff94d27c940d0a0b6269996153d2b4d --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 50551"
time=2024-08-08T00:01:28.204+08:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-08T00:01:28.208+08:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-08T00:01:28.209+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="12652" timestamp=1723046488
INFO [wmain] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="12652" timestamp=1723046488 total_threads=48
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="50551" tid="12652" timestamp=1723046488
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from C:\documents\ollamaModel\blobs\sha256-fbd07611f6a0c943376f48b65e03edaefff94d27c940d0a0b6269996153d2b4d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Llama3.1-8B-Chinese-Chat
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 131072
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 7
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = smaug-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 128009
llama_model_loader: - kv 21: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q8_0: 226 tensors
time=2024-08-08T00:01:28.474+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 7.95 GiB (8.50 BPW)
llm_load_print_meta: general.name = Llama3.1-8B-Chinese-Chat
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: PAD token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 8137.64 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.02 MiB
llama_new_context_with_model: CPU compute buffer size = 560.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [wmain] model loaded | tid="12652" timestamp=1723046508
[GIN] 2024/08/08 - 00:01:48 | 200 | 20.7449597s | 127.0.0.1 | POST "/api/chat"
time=2024-08-08T00:01:48.837+08:00 level=INFO source=server.go:631 msg="llama runner started in 20.63 seconds"
[GIN] 2024/08/08 - 00:02:01 | 200 | 4.5436671s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/08/08 - 00:12:19 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:12:19 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/08/08 - 00:12:41 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:12:41 | 200 | 3.7143ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/08/08 - 00:12:52 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:12:52 | 200 | 27.0793ms | 127.0.0.1 | POST "/api/show"
time=2024-08-08T00:12:52.905+08:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[118.3 GiB]" memory.required.full="9.2 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[9.2 GiB]" memory.weights.total="7.9 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="532.3 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-08T00:12:52.908+08:00 level=INFO source=server.go:171 msg="Invalid OLLAMA_LLM_LIBRARY cudart64_110.dll - not found"
time=2024-08-08T00:12:52.914+08:00 level=INFO source=server.go:392 msg="starting llama server" cmd="C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe --model C:\documents\ollamaModel\blobs\sha256-fbd07611f6a0c943376f48b65e03edaefff94d27c940d0a0b6269996153d2b4d --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 65083"
time=2024-08-08T00:12:52.915+08:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-08T00:12:52.915+08:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-08T00:12:52.918+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="20284" timestamp=1723047172
INFO [wmain] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="20284" timestamp=1723047172 total_threads=48
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="65083" tid="20284" timestamp=1723047172
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from C:\documents\ollamaModel\blobs\sha256-fbd07611f6a0c943376f48b65e03edaefff94d27c940d0a0b6269996153d2b4d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Llama3.1-8B-Chinese-Chat
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 131072
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 7
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = smaug-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 128009
llama_model_loader: - kv 21: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q8_0: 226 tensors
time=2024-08-08T00:12:53.174+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 7.95 GiB (8.50 BPW)
llm_load_print_meta: general.name = Llama3.1-8B-Chinese-Chat
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: PAD token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 8137.64 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.02 MiB
llama_new_context_with_model: CPU compute buffer size = 560.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [wmain] model loaded | tid="20284" timestamp=1723047177
time=2024-08-08T00:12:58.079+08:00 level=INFO source=server.go:631 msg="llama runner started in 5.16 seconds"
[GIN] 2024/08/08 - 00:12:58 | 200 | 5.2198074s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/08/08 - 00:13:03 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:13:03 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/08/08 - 00:15:31 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:15:31 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/08/08 - 00:15:41 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:15:41 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/08/08 - 00:15:52 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:15:52 | 200 | 58.6492ms | 127.0.0.1 | POST "/api/show"
time=2024-08-08T00:15:52.274+08:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=29 layers.offload=0 layers.split="" memory.available="[109.3 GiB]" memory.required.full="4.9 GiB" memory.required.partial="0 B" memory.required.kv="448.0 MiB" memory.required.allocations="[4.9 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.4 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="478.0 MiB" memory.graph.partial="730.4 MiB"
time=2024-08-08T00:15:52.277+08:00 level=INFO source=server.go:171 msg="Invalid OLLAMA_LLM_LIBRARY cudart64_110.dll - not found"
time=2024-08-08T00:15:52.279+08:00 level=INFO source=server.go:392 msg="starting llama server" cmd="C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe --model C:\documents\ollamaModel\blobs\sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5 --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 65135"
time=2024-08-08T00:15:52.285+08:00 level=INFO source=sched.go:445 msg="loaded runners" count=2
time=2024-08-08T00:15:52.285+08:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-08T00:15:52.288+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="20632" timestamp=1723047352
INFO [wmain] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="20632" timestamp=1723047352 total_threads=48
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="65135" tid="20632" timestamp=1723047352
llama_model_loader: loaded meta data with 21 key-value pairs and 339 tensors from C:\documents\ollamaModel\blobs\sha256-43f7a214e5329f672bb05404cfba1913cbb70fdaa1a17497224e1925046b0ed5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-7B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 28
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_0: 197 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-08-08T00:15:52.543+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 0.9352 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.12 GiB (4.65 BPW)
llm_load_print_meta: general.name = Qwen2-7B-Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.15 MiB
llm_load_tensors: CPU buffer size = 4220.43 MiB
[GIN] 2024/08/08 - 00:15:54 | 200 | 528.2µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/08 - 00:15:54 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 448.00 MiB
llama_new_context_with_model: KV self size = 448.00 MiB, K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.38 MiB
llama_new_context_with_model: CPU compute buffer size = 492.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 1
INFO [wmain] model loaded | tid="20632" timestamp=1723047363
time=2024-08-08T00:16:03.108+08:00 level=INFO source=server.go:631 msg="llama runner started in 10.82 seconds"
[GIN] 2024/08/08 - 00:16:03 | 200 | 10.8748023s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/08/08 - 00:22:54 | 200 | 0s | 127.0.0.1 | GET "/api/version"

Author
Owner

@showyoung commented on GitHub (Aug 7, 2024):

![Screenshot_1](https://github.com/user-attachments/assets/ef2fce47-27db-40ca-b120-7bceffaf46bc)

Author
Owner

@showyoung commented on GitHub (Aug 7, 2024):

![Screenshot_2](https://github.com/user-attachments/assets/a93c8a8f-e6d4-4e96-b2ce-f6c668a27af1)

Author
Owner

@dhiltgen commented on GitHub (Aug 7, 2024):

@showyoung it looks like you have an invalid setting: `OLLAMA_LLM_LIBRARY=cudart64_110.dll`. Remove that environment variable and it should recover.

Although maybe that setting isn't the root cause and was itself an attempt to work around the problem. If that doesn't clear it up, please try running with `OLLAMA_DEBUG=1` set so we can see more details in the logs during GPU discovery.

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
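
A minimal PowerShell sketch of those two steps (assuming the variable was set at user scope; use "Machine" instead if it was set system-wide):

```
# Clear the invalid runner override (user scope assumed).
[Environment]::SetEnvironmentVariable("OLLAMA_LLM_LIBRARY", $null, "User")

# Enable verbose logging for this session so GPU-discovery details appear,
# then restart the server. Quit the tray app first if it is running,
# since it already holds port 11434.
$env:OLLAMA_DEBUG = "1"
ollama serve
```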

Author
Owner

@showyoung commented on GitHub (Aug 8, 2024):

@dhiltgen, thank you for your answer, but I had already tried deleting `OLLAMA_LLM_LIBRARY=cudart64_110.dll`, and it still did not work. Ultimately, I installed CUDA on Windows; although it took up a lot of space, it did fix the issue.

Author
Owner

@showyoung commented on GitHub (Aug 8, 2024):

After I uninstalled CUDA, Ollama could only run on the CPU.

2024/08/09 01:16:04 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\documents\\ollamaModel OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-09T01:16:04.543+08:00 level=INFO source=images.go:781 msg="total blobs: 30"
time=2024-08-09T01:16:04.581+08:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-09T01:16:04.582+08:00 level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.4)"
time=2024-08-09T01:16:04.587+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v6.1 cpu cpu_avx]"
time=2024-08-09T01:16:04.587+08:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-09T01:16:04.643+08:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-08-09T01:16:04.645+08:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="127.7 GiB" available="108.0 GiB"
Author
Owner

@showyoung commented on GitHub (Aug 8, 2024):

This is the log from before I uninstalled CUDA:

2024/08/09 01:00:36 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\documents\\ollamaModel OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-09T01:00:36.983+08:00 level=INFO source=images.go:781 msg="total blobs: 30"
time=2024-08-09T01:00:37.021+08:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-09T01:00:37.026+08:00 level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.4)"
time=2024-08-09T01:00:37.032+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v6.1]"
time=2024-08-09T01:00:37.033+08:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-09T01:00:37.376+08:00 level=INFO source=gpu.go:288 msg="detected OS VRAM overhead" id=GPU-3cb84947-101e-c532-b831-f0697739f1c0 library=cuda compute=8.6 driver=0.0 name="" overhead="250.0 MiB"
time=2024-08-09T01:00:37.405+08:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-3cb84947-101e-c532-b831-f0697739f1c0 library=cuda compute=8.6 driver=0.0 name="" total="10.0 GiB" available="8.9 GiB"
Author
Owner

@showyoung commented on GitHub (Aug 8, 2024):

After I uninstalled CUDA, Ollama could only run on the CPU. I enabled debug mode:

2024/08/09 01:21:17 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\documents\\ollamaModel OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-09T01:21:17.895+08:00 level=INFO source=images.go:781 msg="total blobs: 30"
time=2024-08-09T01:21:17.896+08:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-09T01:21:17.899+08:00 level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.4)"
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx\ollama_llama_server.exe
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\rocm_v6.1\ollama_llama_server.exe
time=2024-08-09T01:21:17.901+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v6.1 cpu cpu_avx]"
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-09T01:21:17.901+08:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=gpu.go:90 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=nvml.dll
time=2024-08-09T01:21:17.901+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\Wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\dotnet\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Program Files\\Microsoft VS Code\\bin\\nvml.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvml.dll* C:\\Program Files\\nodejs\\nvml.dll* C:\\ProgramData\\anaconda3\\nvml.dll* C:\\ProgramData\\anaconda3\\Scripts\\nvml.dll* C:\\ProgramData\\anaconda3\\Library\\bin\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvml.dll* C:\\ffmpeg6\\bin\\nvml.dll* C:\\ffmpeg7\\bin\\nvml.dll* C:\\Program Files\\PowerShell\\7\\nvml.dll* C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvml.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvml.dll* C:\\Program Files\\nodejs\\nvml.dll* c:\\Windows\\System32\\nvml.dll]"
time=2024-08-09T01:21:17.902+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll*"
time=2024-08-09T01:21:17.903+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[c:\Windows\System32\nvml.dll]
time=2024-08-09T01:21:17.917+08:00 level=DEBUG source=gpu.go:112 msg="nvidia-ml loaded" library=c:\Windows\System32\nvml.dll
time=2024-08-09T01:21:17.917+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=nvcuda.dll
time=2024-08-09T01:21:17.928+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Windows\\nvcuda.dll* C:\\Windows\\System32\\Wbem\\nvcuda.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\Windows\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files\\dotnet\\nvcuda.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Program Files\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\ProgramData\\anaconda3\\nvcuda.dll* C:\\ProgramData\\anaconda3\\Scripts\\nvcuda.dll* C:\\ProgramData\\anaconda3\\Library\\bin\\nvcuda.dll* C:\\Program Files\\Git\\cmd\\nvcuda.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvcuda.dll* C:\\ffmpeg6\\bin\\nvcuda.dll* C:\\ffmpeg7\\bin\\nvcuda.dll* C:\\Program Files\\PowerShell\\7\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-08-09T01:21:17.928+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-08-09T01:21:17.931+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[]
time=2024-08-09T01:21:17.931+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=cudart64_*.dll
time=2024-08-09T01:21:17.931+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll* C:\\Windows\\cudart64_*.dll* C:\\Windows\\System32\\Wbem\\cudart64_*.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\cudart64_*.dll* C:\\Windows\\System32\\OpenSSH\\cudart64_*.dll* C:\\Program Files\\dotnet\\cudart64_*.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll* C:\\Program Files\\Microsoft VS Code\\bin\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\cudart64_*.dll* C:\\Program Files\\nodejs\\cudart64_*.dll* C:\\ProgramData\\anaconda3\\cudart64_*.dll* C:\\ProgramData\\anaconda3\\Scripts\\cudart64_*.dll* C:\\ProgramData\\anaconda3\\Library\\bin\\cudart64_*.dll* C:\\Program Files\\Git\\cmd\\cudart64_*.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\cudart64_*.dll* C:\\ffmpeg6\\bin\\cudart64_*.dll* C:\\ffmpeg7\\bin\\cudart64_*.dll* C:\\Program Files\\PowerShell\\7\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\cudart64_*.dll* C:\\Program Files\\nodejs\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda*\\cudart64_*.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v*\\bin\\cudart64_*.dll]"
time=2024-08-09T01:21:17.932+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll*"
time=2024-08-09T01:21:17.935+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[]
time=2024-08-09T01:21:17.936+08:00 level=DEBUG source=amd_windows.go:33 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
time=2024-08-09T01:21:17.936+08:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-08-09T01:21:17.937+08:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="127.7 GiB" available="107.1 GiB"
time=2024-08-09T01:21:45.447+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="127.7 GiB" before.free="107.1 GiB" before.free_swap="113.3 GiB" now.total="127.7 GiB" now.free="107.1 GiB" now.free_swap="113.4 GiB"
time=2024-08-09T01:21:45.447+08:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0xe0bb00 gpu_count=1
time=2024-08-09T01:21:45.466+08:00 level=DEBUG source=sched.go:206 msg="cpu mode with first model, loading"
time=2024-08-09T01:21:45.466+08:00 level=DEBUG source=server.go:101 msg="system memory" total="127.7 GiB" free="107.1 GiB" free_swap="113.4 GiB"
time=2024-08-09T01:21:45.468+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-09T01:21:45.468+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx\ollama_llama_server.exe
time=2024-08-09T01:21:45.468+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-09T01:21:45.468+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe
time=2024-08-09T01:21:45.468+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\rocm_v6.1\ollama_llama_server.exe
time=2024-08-09T01:21:45.468+08:00 level=DEBUG source=memory.go:101 msg=evaluating library=cpu gpu_count=1 available="[107.1 GiB]"
time=2024-08-09T01:21:45.469+08:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[107.1 GiB]" memory.required.full="9.2 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[9.2 GiB]" memory.weights.total="7.9 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="532.3 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-08-09T01:21:45.469+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-09T01:21:45.469+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx\ollama_llama_server.exe
time=2024-08-09T01:21:45.469+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-09T01:21:45.469+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe
time=2024-08-09T01:21:45.469+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\rocm_v6.1\ollama_llama_server.exe
time=2024-08-09T01:21:45.481+08:00 level=DEBUG source=gpu.go:637 msg="no filter required for library cpu"
time=2024-08-09T01:21:45.481+08:00 level=INFO source=server.go:392 msg="starting llama server" cmd="C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\documents\\ollamaModel\\blobs\\sha256-fbd07611f6a0c943376f48b65e03edaefff94d27c940d0a0b6269996153d2b4d --ctx-size 8192 --batch-size 512 --embedding --log-disable --verbose --no-mmap --parallel 4 --port 61944"
time=2024-08-09T01:21:45.481+08:00 level=DEBUG source=server.go:409 msg=subprocess environment="[PATH=C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cpu_avx2;C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners;;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\Program Files\\dotnet\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\Microsoft VS Code\\bin;C:\\Users\\showyoung\\AppData\\Roaming\\nvm;C:\\Program Files\\nodejs;C:\\ProgramData\\anaconda3;C:\\ProgramData\\anaconda3\\Scripts;C:\\ProgramData\\anaconda3\\Library\\bin;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Docker\\Docker\\resources\\bin;C:\\ffmpeg6\\bin;C:\\ffmpeg7\\bin;C:\\Program Files\\PowerShell\\7\\;C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama;C:\\Users\\showyoung\\AppData\\Roaming\\nvm;C:\\Program Files\\nodejs]"
time=2024-08-09T01:21:45.486+08:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-09T01:21:45.486+08:00 level=INFO source=server.go:592 msg="waiting for llama runner to start responding"
time=2024-08-09T01:21:45.486+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="25056" timestamp=1723137705
INFO [wmain] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="25056" timestamp=1723137705 total_threads=48
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="61944" tid="25056" timestamp=1723137705
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from C:\documents\ollamaModel\blobs\sha256-fbd07611f6a0c943376f48b65e03edaefff94d27c940d0a0b6269996153d2b4d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Llama3.1-8B-Chinese-Chat
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 131072
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 7
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = smaug-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 128009
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q8_0:  226 tensors
time=2024-08-09T01:21:45.749+08:00 level=INFO source=server.go:626 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 7.95 GiB (8.50 BPW) 
llm_load_print_meta: general.name     = Llama3.1-8B-Chinese-Chat
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: PAD token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors:        CPU buffer size =  8137.64 MiB
time=2024-08-09T01:21:46.282+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.07"
time=2024-08-09T01:21:46.533+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.14"
time=2024-08-09T01:21:46.800+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.23"
time=2024-08-09T01:21:47.066+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.31"
time=2024-08-09T01:21:47.318+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.39"
time=2024-08-09T01:21:47.597+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.47"
time=2024-08-09T01:21:47.866+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.55"
time=2024-08-09T01:21:48.119+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.62"
time=2024-08-09T01:21:48.382+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.71"
time=2024-08-09T01:21:48.651+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.79"
time=2024-08-09T01:21:48.902+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.87"
time=2024-08-09T01:21:49.184+08:00 level=DEBUG source=server.go:637 msg="model load progress 0.95"
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
time=2024-08-09T01:21:49.449+08:00 level=DEBUG source=server.go:637 msg="model load progress 1.00"
llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
time=2024-08-09T01:21:49.712+08:00 level=DEBUG source=server.go:640 msg="model load completed, waiting for server to become available" status="llm server loading model"
DEBUG [initialize] initializing slots | n_slots=4 tid="25056" timestamp=1723137710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="25056" timestamp=1723137710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=1 tid="25056" timestamp=1723137710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=2 tid="25056" timestamp=1723137710
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=3 tid="25056" timestamp=1723137710
INFO [wmain] model loaded | tid="25056" timestamp=1723137710
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="25056" timestamp=1723137710
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=0 tid="25056" timestamp=1723137710
time=2024-08-09T01:21:50.499+08:00 level=INFO source=server.go:631 msg="llama runner started in 5.01 seconds"
time=2024-08-09T01:21:50.499+08:00 level=DEBUG source=sched.go:458 msg="finished setting up runner" model=C:\documents\ollamaModel\blobs\sha256-fbd07611f6a0c943376f48b65e03edaefff94d27c940d0a0b6269996153d2b4d
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1 tid="25056" timestamp=1723137710
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=61950 status=200 tid="10152" timestamp=1723137710
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=2 tid="25056" timestamp=1723137710
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=61950 status=200 tid="10152" timestamp=1723137710
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=3 tid="25056" timestamp=1723137710
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=61950 status=200 tid="10152" timestamp=1723137710
time=2024-08-09T01:21:50.505+08:00 level=DEBUG source=routes.go:1346 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\n**please use Chinese to talk.**<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n你好呀<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n你好,很高兴为您服务。有什么可以帮助您的吗?<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n你好呀<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=4 tid="25056" timestamp=1723137710
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=5 tid="25056" timestamp=1723137710
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=54 slot_id=0 task_id=5 tid="25056" timestamp=1723137710
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=5 tid="25056" timestamp=1723137710
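The debug output above makes the discovery strategy visible: for each library name, the server globs a list of candidate directories (roughly PATH plus a few known locations), deliberately skips the PhysX copy, and keeps whatever matches. Here, nvml.dll is found in System32 but the nvcuda.dll and cudart64_*.dll searches return empty, so the run falls back to the cpu_avx2 runner. Below is a rough Go sketch of that loop, using a hypothetical subset of the patterns from the log; the real logic lives in gpu.go:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// findGPULibrary mirrors the search visible in the log: try each glob
// pattern in order, skip the PhysX directory (the log shows Ollama
// deliberately excluding it), and collect the remaining matches.
func findGPULibrary(patterns []string) []string {
	var found []string
	for _, pat := range patterns {
		if strings.Contains(pat, `NVIDIA Corporation\PhysX`) {
			fmt.Println("skipping PhysX cuda library path:", pat)
			continue
		}
		matches, err := filepath.Glob(pat)
		if err != nil {
			continue // malformed pattern; skip it
		}
		found = append(found, matches...)
	}
	return found
}

func main() {
	// Hypothetical subset of the globs from the log above.
	patterns := []string{
		`C:\Users\showyoung\AppData\Local\Programs\Ollama\nvcuda.dll*`,
		`C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\nvcuda.dll*`,
		`c:\windows\system*\nvcuda.dll`,
	}
	fmt.Println("discovered GPU libraries:", findGPULibrary(patterns))
}
```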
Author
Owner

@dhiltgen commented on GitHub (Aug 8, 2024):

@showyoung did you have the NVIDIA video driver installed, or did you uninstall that as well and fall back to the default Windows video driver? For inference to work on NVIDIA cards, we need the NVIDIA driver to be present.
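One detail worth stressing here: nvcuda.dll is installed by the display driver itself, not by the CUDA toolkit, so it should exist in System32 even with CUDA uninstalled. A quick way to test whether the driver's CUDA interface actually loads is a sketch like the following (hedged and illustrative; any tool that loads the DLL would do):

```go
//go:build windows

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// If the NVIDIA display driver is installed correctly, nvcuda.dll
	// resolves from System32 and loads. If this fails, reinstalling
	// the driver (not the CUDA toolkit) is the likely fix.
	h, err := syscall.LoadLibrary(`nvcuda.dll`)
	if err != nil {
		fmt.Println("nvcuda.dll failed to load:", err)
		return
	}
	defer syscall.FreeLibrary(h)
	fmt.Println("nvcuda.dll loaded: driver CUDA interface is present")
}
```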

Author
Owner

@showyoung commented on GitHub (Aug 8, 2024):

(Screenshot_1: attached image showing the latest NVIDIA driver installed)

As you can see, I have the latest driver installed, but it didn't work; only after I installed CUDA did the card start working.
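That behavior is consistent with the discovery log: installing the CUDA toolkit drops cudart64_*.dll under the toolkit's bin directory, which is one of the fallback globs, so detection can succeed through cudart even when the driver's nvcuda.dll is missing or unloadable. A small illustrative sketch of that fallback check:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Installing the CUDA toolkit places cudart64_*.dll here; this
	// pattern appears in the discovery globs logged earlier in the
	// thread, which is why adding CUDA can "fix" GPU detection.
	matches, err := filepath.Glob(
		`c:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v*\bin\cudart64_*.dll`)
	if err != nil {
		fmt.Println("bad pattern:", err)
		return
	}
	if len(matches) == 0 {
		fmt.Println("no cudart runtime found; only the driver path remains")
		return
	}
	fmt.Println("cudart runtime(s) found:", matches)
}
```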

Author
Owner

@showyoung commented on GitHub (Aug 8, 2024):

I will try reinstalling the driver after I sleep and report back with the result.

Author
Owner

@showyoung commented on GitHub (Aug 9, 2024):

Here is my log after reinstalling the driver, still without CUDA installed; it is still CPU only:

2024/08/09 18:10:03 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\documents\\ollamaModel OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-09T18:10:03.245+08:00 level=INFO source=images.go:781 msg="total blobs: 30"
time=2024-08-09T18:10:03.291+08:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-09T18:10:03.294+08:00 level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.4)"
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu\ollama_llama_server.exe
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx\ollama_llama_server.exe
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\cuda_v11.3\ollama_llama_server.exe
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=C:\Users\showyoung\AppData\Local\Programs\Ollama\ollama_runners\rocm_v6.1\ollama_llama_server.exe
time=2024-08-09T18:10:03.300+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v6.1]"
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-09T18:10:03.300+08:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-09T18:10:03.300+08:00 level=DEBUG source=gpu.go:90 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-08-09T18:10:03.301+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=nvml.dll
time=2024-08-09T18:10:03.301+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvml.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvml.dll* C:\\Windows\\nvml.dll* C:\\Windows\\System32\\Wbem\\nvml.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvml.dll* C:\\Windows\\System32\\OpenSSH\\nvml.dll* C:\\Program Files\\dotnet\\nvml.dll* C:\\Program Files\\Microsoft VS Code\\bin\\nvml.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvml.dll* C:\\Program Files\\nodejs\\nvml.dll* C:\\ProgramData\\anaconda3\\nvml.dll* C:\\ProgramData\\anaconda3\\Scripts\\nvml.dll* C:\\ProgramData\\anaconda3\\Library\\bin\\nvml.dll* C:\\Program Files\\Git\\cmd\\nvml.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvml.dll* C:\\ffmpeg6\\bin\\nvml.dll* C:\\ffmpeg7\\bin\\nvml.dll* C:\\Program Files\\PowerShell\\7\\nvml.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll* C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps\\nvml.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvml.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvml.dll* C:\\Program Files\\nodejs\\nvml.dll* c:\\Windows\\System32\\nvml.dll]"
time=2024-08-09T18:10:03.353+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvml.dll*"
time=2024-08-09T18:10:03.354+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[c:\Windows\System32\nvml.dll]
time=2024-08-09T18:10:03.422+08:00 level=DEBUG source=gpu.go:112 msg="nvidia-ml loaded" library=c:\Windows\System32\nvml.dll
time=2024-08-09T18:10:03.423+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=nvcuda.dll
time=2024-08-09T18:10:03.423+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Windows\\nvcuda.dll* C:\\Windows\\System32\\Wbem\\nvcuda.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\Windows\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files\\dotnet\\nvcuda.dll* C:\\Program Files\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\ProgramData\\anaconda3\\nvcuda.dll* C:\\ProgramData\\anaconda3\\Scripts\\nvcuda.dll* C:\\ProgramData\\anaconda3\\Library\\bin\\nvcuda.dll* C:\\Program Files\\Git\\cmd\\nvcuda.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvcuda.dll* C:\\ffmpeg6\\bin\\nvcuda.dll* C:\\ffmpeg7\\bin\\nvcuda.dll* C:\\Program Files\\PowerShell\\7\\nvcuda.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-08-09T18:10:03.425+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-08-09T18:10:03.426+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[]
time=2024-08-09T18:10:03.426+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=cudart64_*.dll
time=2024-08-09T18:10:03.426+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll* C:\\Windows\\cudart64_*.dll* C:\\Windows\\System32\\Wbem\\cudart64_*.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\cudart64_*.dll* C:\\Windows\\System32\\OpenSSH\\cudart64_*.dll* C:\\Program Files\\dotnet\\cudart64_*.dll* C:\\Program Files\\Microsoft VS Code\\bin\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\cudart64_*.dll* C:\\Program Files\\nodejs\\cudart64_*.dll* C:\\ProgramData\\anaconda3\\cudart64_*.dll* C:\\ProgramData\\anaconda3\\Scripts\\cudart64_*.dll* C:\\ProgramData\\anaconda3\\Library\\bin\\cudart64_*.dll* C:\\Program Files\\Git\\cmd\\cudart64_*.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\cudart64_*.dll* C:\\ffmpeg6\\bin\\cudart64_*.dll* C:\\ffmpeg7\\bin\\cudart64_*.dll* C:\\Program Files\\PowerShell\\7\\cudart64_*.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\cudart64_*.dll* C:\\Program Files\\nodejs\\cudart64_*.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda*\\cudart64_*.dll c:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v*\\bin\\cudart64_*.dll]"
time=2024-08-09T18:10:03.429+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\cudart64_*.dll*"
time=2024-08-09T18:10:03.431+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[]
time=2024-08-09T18:10:03.431+08:00 level=DEBUG source=amd_windows.go:33 msg="unable to load amdhip64_6.dll, please make sure to upgrade to the latest amd driver: The specified module could not be found."
time=2024-08-09T18:10:03.431+08:00 level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
time=2024-08-09T18:10:03.433+08:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="127.7 GiB" available="119.7 GiB"
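
For anyone reading the log above: the DEBUG lines show how discovery works in this build. Ollama expands a list of candidate directories (the PATH entries plus a few known driver and CUDA locations) into glob patterns such as `C:\Windows\System32\nvcuda.dll*` and collects any matches. A minimal standalone sketch of that idea in Go follows; it is an illustration, not Ollama's actual source, and the extra System32 entry is an assumption:

```
// dllsearch.go: glob PATH directories (plus System32) for NVIDIA DLLs,
// roughly mirroring the "gpu library search" DEBUG lines in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dirs := strings.Split(os.Getenv("PATH"), string(os.PathListSeparator))
	dirs = append(dirs, `C:\Windows\System32`) // assumed extra search location

	for _, name := range []string{"nvml.dll", "nvcuda.dll", "cudart64_*.dll"} {
		var found []string
		for _, dir := range dirs {
			// The trailing "*" also matches versioned names like cudart64_110.dll.
			matches, _ := filepath.Glob(filepath.Join(dir, name+"*"))
			found = append(found, matches...)
		}
		fmt.Printf("%-16s -> %v\n", name, found)
	}
}
```

An empty result for `nvcuda.dll`, as in the log (`discovered GPU libraries paths=[]`), means no directory in the search list contains the driver-supplied CUDA DLL.
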
@dhiltgen commented on GitHub (Aug 9, 2024):

I'm happy we found a workaround by installing CUDA, but that shouldn't be required, so there's a bug here somewhere.

Without CUDA installed, there should still be an `nvcuda.dll` on your system, typically at `c:\Windows\System32\nvcuda.dll`, but based on the logs above it seems it isn't there. Is there anything unusual about your Windows install, or the way you're installing the NVIDIA driver? Is `nvidia-smi.exe` present and working? (Typically it is installed at `C:\Windows\System32\nvidia-smi.exe`.)
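
Both checks in the comment above can be scripted. A small diagnostic sketch in Go; the two paths are the standard driver-install locations named in the comment, assumed here rather than detected:

```
// gpucheck.go: verify the driver-supplied files exist and that
// nvidia-smi runs; a failure here points at the driver, not Ollama.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	for _, f := range []string{
		`C:\Windows\System32\nvcuda.dll`,
		`C:\Windows\System32\nvidia-smi.exe`,
	} {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("MISSING: %s\n", f)
		} else {
			fmt.Printf("present: %s\n", f)
		}
	}

	// If nvidia-smi is present, try running it; a non-zero exit usually
	// indicates a driver problem rather than an Ollama problem.
	out, err := exec.Command(`C:\Windows\System32\nvidia-smi.exe`).CombinedOutput()
	if err != nil {
		fmt.Println("nvidia-smi failed:", err)
	}
	fmt.Print(string(out))
}
```
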

@showyoung commented on GitHub (Aug 18, 2024):

@dhiltgen

> Is there anything unusual about your Windows install, or the way you're installing the nvidia driver?

There was nothing unusual about installing the NVIDIA driver.

> Is nvidia-smi.exe present and working? (typically it will be installed in `C:\Windows\system32\nvidia-smi.exe`)

All the files are present: `nvcuda.dll` and `nvidia-smi.exe` are both in `c:\Windows\System32\`.

@showyoung commented on GitHub (Aug 18, 2024):

![Screenshot_1](https://github.com/user-attachments/assets/35c32371-3457-4e40-aeb4-306e40309d01)
![Screenshot_2](https://github.com/user-attachments/assets/f58fa89f-5bb0-4f5b-b162-8dfb00a64b2d)

@d-kleine commented on GitHub (Aug 20, 2024):

I have a 3080 Ti, and Ollama uses my GPU with CUDA 12.6 on Windows 11.

You could try removing everything NVIDIA-related from your PC with Display Driver Uninstaller (DDU) and Revo Uninstaller, and then reinstalling the components in this order:

* GeForce Game Ready driver 560.94
* CUDA 12.6

and maybe

* cuDNN 9.3.0

Maybe the Chinese system language could also be a factor; I can't read anything in your screenshots.

@showyoung commented on GitHub (Aug 21, 2024):

@dhiltgen @d-kleine Thank you both for your suggestions; I will try them when I have time. But the Chinese system language shouldn't be the cause of the problem: in the beginning, Ollama ran on the GPU successfully with only the driver installed and no CUDA.

@dhiltgen commented on GitHub (Sep 5, 2024):

@showyoung I'm a little confused by the order. You should only need the GPU driver installed. When it is installed, `c:\Windows\System32\nvcuda.dll` should be present on the system. The logs you shared above indicate it was not found.

time=2024-08-09T18:10:03.423+08:00 level=DEBUG source=gpu.go:469 msg="Searching for GPU library" name=nvcuda.dll
time=2024-08-09T18:10:03.423+08:00 level=DEBUG source=gpu.go:488 msg="gpu library search" globs="[C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Windows\\nvcuda.dll* C:\\Windows\\System32\\Wbem\\nvcuda.dll* C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\nvcuda.dll* C:\\Windows\\System32\\OpenSSH\\nvcuda.dll* C:\\Program Files\\dotnet\\nvcuda.dll* C:\\Program Files\\Microsoft VS Code\\bin\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* C:\\ProgramData\\anaconda3\\nvcuda.dll* C:\\ProgramData\\anaconda3\\Scripts\\nvcuda.dll* C:\\ProgramData\\anaconda3\\Library\\bin\\nvcuda.dll* C:\\Program Files\\Git\\cmd\\nvcuda.dll* C:\\Program Files\\Docker\\Docker\\resources\\bin\\nvcuda.dll* C:\\ffmpeg6\\bin\\nvcuda.dll* C:\\ffmpeg7\\bin\\nvcuda.dll* C:\\Program Files\\PowerShell\\7\\nvcuda.dll* C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Microsoft\\WindowsApps\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Local\\Programs\\Ollama\\nvcuda.dll* C:\\Users\\showyoung\\AppData\\Roaming\\nvm\\nvcuda.dll* C:\\Program Files\\nodejs\\nvcuda.dll* c:\\windows\\system*\\nvcuda.dll]"
time=2024-08-09T18:10:03.425+08:00 level=DEBUG source=gpu.go:493 msg="skipping PhysX cuda library path" path="C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common\\nvcuda.dll*"
time=2024-08-09T18:10:03.426+08:00 level=DEBUG source=gpu.go:522 msg="discovered GPU libraries" paths=[]

If you install just the driver, please confirm that `c:\Windows\System32\nvcuda.dll` is present and then restart Ollama; it should find that library and discover your GPU. If that's not the case, please share a log and I'll reopen the issue so we can continue investigating why this isn't working as expected.
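
Existence alone isn't quite the whole check: discovery also has to load the DLL. A Windows-only Go sketch of that final step, using the standard library (again an illustration of the idea, not Ollama's code):

```
// loadcheck.go (Windows only): confirm nvcuda.dll not only exists but loads.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// LoadLibrary searches the standard Windows DLL paths, including System32.
	h, err := syscall.LoadLibrary("nvcuda.dll")
	if err != nil {
		fmt.Println("nvcuda.dll could not be loaded:", err)
		return
	}
	defer syscall.FreeLibrary(h)
	fmt.Println("nvcuda.dll loaded; the driver-side CUDA stack is in place")
}
```
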


Reference: github-starred/ollama#29660