[GH-ISSUE #8361] llama3.1-8B doesn't utilize my gpu #51873

Closed
opened 2026-04-28 21:06:04 -05:00 by GiteaMirror · 10 comments

Originally created by @sunday-hao on GitHub (Jan 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8361

What is the issue?

When I try to run llama3.1-8B-Instruct, it doesn't use my GPU at all and runs only on my CPU, so generation is very slow. However, the server log says that the ollama server detected my GPU and moved the model onto it. Could anyone help me? The output of `nvidia-smi` and the server log are attached below. Thanks a lot.
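
A generic way to confirm where a loaded model actually landed (a sketch; `ollama ps` reports the CPU/GPU split of loaded models):

```
# Sketch: check where the loaded model is running.
ollama ps      # the PROCESSOR column shows e.g. "100% GPU" or "100% CPU"
nvidia-smi     # VRAM usage should grow by roughly the model size when offloaded
```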

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4

GiteaMirror added the bug label 2026-04-28 21:06:05 -05:00

@sunday-hao commented on GitHub (Jan 9, 2025):

![20250109-172629](https://github.com/user-attachments/assets/33464934-9115-4d1c-8933-e3fcf2cd7a60)

@sunday-hao commented on GitHub (Jan 9, 2025):

```
2025/01/09 16:47:02 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/ailab/user/zhouhao1/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy:https://zhouhao1:s5Y1ASgIqczsOfNpmQ7Lli1XryXFMwDPw0hO5akyOyIqIymmFnM1343FO7fn@blsc-proxy.pjlab.org.cn:13128 https_proxy:https://zhouhao1:s5Y1ASgIqczsOfNpmQ7Lli1XryXFMwDPw0hO5akyOyIqIymmFnM1343FO7fn@blsc-proxy.pjlab.org.cn:13128 no_proxy:]"
time=2025-01-09T16:47:02.123+08:00 level=INFO source=images.go:757 msg="total blobs: 5"
time=2025-01-09T16:47:02.125+08:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

 - using env:  export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-09T16:47:02.127+08:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2025-01-09T16:47:02.130+08:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-01-09T16:47:02.130+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-09T16:47:02.261+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-a4999875-9a2e-39c1-393b-b28fe731a5a7 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="23.3 GiB"
[GIN] 2025/01/09 - 16:47:21 | 200 | 101.06µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/01/09 - 16:47:21 | 200 | 24.124723ms | 127.0.0.1 | POST "/api/show"
time=2025-01-09T16:47:21.269+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/ailab/user/zhouhao1/.ollama/models/blobs/sha256-052367b7f6a9afd00c2eb1bbe24cecc904aa7452bc1d8b20e349b78dc4ad1a5a gpu=GPU-a4999875-9a2e-39c1-393b-b28fe731a5a7 parallel=4 available=24976752640 required="16.4 GiB"
time=2025-01-09T16:47:21.362+08:00 level=INFO source=server.go:104 msg="system memory" total="503.6 GiB" free="484.2 GiB" free_swap="0 B"
time=2025-01-09T16:47:21.362+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[23.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.4 GiB" memory.required.partial="16.4 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[16.4 GiB]" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2025-01-09T16:47:21.431+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/ailab/user/zhouhao1/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /ailab/user/zhouhao1/.ollama/models/blobs/sha256-052367b7f6a9afd00c2eb1bbe24cecc904aa7452bc1d8b20e349b78dc4ad1a5a --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 48 --parallel 4 --port 46782"
time=2025-01-09T16:47:21.432+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-09T16:47:21.432+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-09T16:47:21.433+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-09T16:47:21.458+08:00 level=INFO source=runner.go:945 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
time=2025-01-09T16:47:21.469+08:00 level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=48
time=2025-01-09T16:47:21.470+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:46782"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 4090) - 23819 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 292 tensors from /ailab/user/zhouhao1/.ollama/models/blobs/sha256-052367b7f6a9afd00c2eb1bbe24cecc904aa7452bc1d8b20e349b78dc4ad1a5a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Whole_Model
llama_model_loader: - kv 3: general.size_label str = 8.0B
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.context_length u32 = 131072
llama_model_loader: - kv 6: llama.embedding_length u32 = 4096
llama_model_loader: - kv 7: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.attention.key_length u32 = 128
llama_model_loader: - kv 13: llama.attention.value_length u32 = 128
llama_model_loader: - kv 14: general.file_type u32 = 32
llama_model_loader: - kv 15: llama.vocab_size u32 = 128256
llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2025-01-09T16:47:21.684+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128009
llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 26: general.quantization_version u32 = 2
llama_model_loader: - type f32: 66 tensors
llama_model_loader: - type bf16: 226 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = BF16
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 14.96 GiB (16.00 BPW)
llm_load_print_meta: general.name = Whole_Model
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CUDA0 model buffer size = 1.02 MiB
llm_load_tensors: CPU_Mapped model buffer size = 15317.02 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 548.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 258.50 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 323
time=2025-01-09T16:47:23.941+08:00 level=INFO source=server.go:594 msg="llama runner started in 2.51 seconds"
[GIN] 2025/01/09 - 16:47:23 | 200 | 2.839645405s | 127.0.0.1 | POST "/api/generate"
llama_model_loader: loaded meta data with 27 key-value pairs and 292 tensors from /ailab/user/zhouhao1/.ollama/models/blobs/sha256-052367b7f6a9afd00c2eb1bbe24cecc904aa7452bc1d8b20e349b78dc4ad1a5a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Whole_Model
llama_model_loader: - kv 3: general.size_label str = 8.0B
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.context_length u32 = 131072
llama_model_loader: - kv 6: llama.embedding_length u32 = 4096
llama_model_loader: - kv 7: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.attention.key_length u32 = 128
llama_model_loader: - kv 13: llama.attention.value_length u32 = 128
llama_model_loader: - kv 14: general.file_type u32 = 32
llama_model_loader: - kv 15: llama.vocab_size u32 = 128256
llama_model_loader: - kv 16: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 18: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 128009
llama_model_loader: - kv 25: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 26: general.quantization_version u32 = 2
llama_model_loader: - type f32: 66 tensors
llama_model_loader: - type bf16: 226 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 14.96 GiB (16.00 BPW)
llm_load_print_meta: general.name = Whole_Model
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
```

@YonTracks commented on GitHub (Jan 9, 2025):

Very good info, perfect, thanks.

This is hard to recognize, but I believe this is the same issue, with sched/server and parallel num_ctx. I'm on it; give me a few days, or the team/pros will be on it, I'm sure. Cheers, good luck.


@sunday-hao commented on GitHub (Jan 9, 2025):

Thanks.

Good luck


@rick-github commented on GitHub (Jan 9, 2025):

```
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CUDA0 model buffer size = 1.02 MiB
llm_load_tensors: CPU_Mapped model buffer size = 15317.02 MiB
```

llama.cpp certainly intended to load everything onto the GPU, but when it came time to allocate the buffers it fell back to the CPU.
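
A quick way to spot this pattern in any server log (a generic sketch; `server.log` is a placeholder path):

```
# The tell: layers report as offloaded, yet the CUDA buffer is tiny while
# the CPU_Mapped buffer holds essentially the whole model.
grep -E "offloaded|model buffer size" server.log
```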

```
llama_model_loader: - kv 2: general.name str = Whole_Model
llm_load_print_meta: model ftype = BF16
```

It looks like the model is not llama3.1:8b-instruct from the ollama library. There was a [problem](https://github.com/ollama/ollama/pull/7193) with BF16 tensors a few months ago, perhaps some residual issues remain. Where did you get the model? Have you tried the ollama [version](https://ollama.com/library/llama3.1:8b-instruct-fp16)?
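
One quick way to compare (a sketch; the fp16 tag is the one linked above):

```
# Pull the library build and watch whether it offloads cleanly.
ollama pull llama3.1:8b-instruct-fp16
ollama run llama3.1:8b-instruct-fp16 "hello"
# In another terminal: the PROCESSOR column should read 100% GPU.
ollama ps
```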


@sunday-hao commented on GitHub (Jan 10, 2025):

Yeah, the model I used isn't the official llama3.1:8b-instruct, but a model fine-tuned from it. I used `convert_hf_to_gguf.py` from llama.cpp to convert the `.safetensors` files into a `.gguf` file, with `--outtype` set to `bf16`.
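
For reference, the conversion described above would look roughly like this (paths and output filename are placeholders):

```
# --outtype bf16 keeps the weights in bfloat16, which turns out to matter below.
python llama.cpp/convert_hf_to_gguf.py /path/to/finetuned-llama3.1-8b \
    --outtype bf16 --outfile finetuned-llama3.1-8b-bf16.gguf
```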

I just tried the llama3.1:8b-instruct from the ollama library, and everything went well, so the problem probably comes from the BF16 tensors. I will set `--outtype` to `f16` to check.

By the way, as an aside: the llama3.1:8b-instruct from the ollama library is only 4.7 GB, but my fine-tuned llama3.1:8b-instruct model is approximately 16 GB. Did you quantize it when you created it?


@rick-github commented on GitHub (Jan 10, 2025):

Ollama makes multiple quants of a model available in the library. The default one is tagged `latest` and is usually an alias for `q4_K_M` (previously `q4_0`). If you want a different quant you can browse the model page. The link I supplied above is to the FP16 quant.
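
The size gap is just bytes per weight, not a different model (illustrative arithmetic; tag names as listed on the library page):

```
# ~8.03e9 params * 2 bytes (bf16/fp16)     ~= 16 GB   <- the fine-tuned GGUF
# ~8.03e9 params * ~4.7 bits avg (q4_K_M)  ~= 4.7 GB  <- the library default
ollama pull llama3.1:8b-instruct-q4_K_M   # 4-bit medium quant
ollama pull llama3.1:8b-instruct-fp16     # full 16-bit weights
```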


@sunday-hao commented on GitHub (Jan 10, 2025):

I set `--outtype` to `f16`, and now everything works well with my model. So I think the problem is due to the BF16 tensors.

Thanks a lot.


@rick-github commented on GitHub (Jan 10, 2025):

Thanks for letting us know. BF16 is not commonly used, but it should work; we'll take a deeper dive and see if we can fix the root cause.


@rick-github commented on GitHub (Jan 10, 2025):

BF16 is not GPU accelerated: https://github.com/ggerganov/llama.cpp/issues/8941
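
For anyone who already has a bf16 GGUF on disk, redoing the HF conversion may not be necessary; llama.cpp's quantize tool can rewrite it (a sketch, assuming your build's `llama-quantize` accepts bf16 input and `F16` as a target type):

```
# Assumption: rewrite bf16 weights as f16 without re-running convert_hf_to_gguf.py.
./llama-quantize finetuned-llama3.1-8b-bf16.gguf finetuned-llama3.1-8b-f16.gguf F16
```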

Reference: github-starred/ollama#51873