[GH-ISSUE #2934] Unable to init GPU: unknown error #27558

Closed
opened 2026-04-22 04:58:49 -05:00 by GiteaMirror · 11 comments

Originally created by @PLNech on GitHub (Mar 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2934

Originally assigned to: @dhiltgen on GitHub.

Hi there! My ollama-based project (thanks for the amazing framework <3) suddenly stopped using the GPU as a backend.

It used to work well, and a few days ago I could confirm from the logs that GPU layer offloading was happening.

Today the specific error I see in the journal is: Failed to load dynamic library /tmp/ollama3406780784/cuda_v11/libext_server.so

Here's the relevant journalctl output. At first the GPU is detected correctly, and CUDA too:

Mar 02 18:35:37 XPS24 systemd[1]: Started Ollama Service.
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.066+01:00 level=INFO source=images.go:710 msg="total blobs: 63"
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.068+01:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.068+01:00 level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.27)"
Mar 02 18:35:37 XPS24 ollama[135152]: time=2024-03-02T18:35:37.068+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.311+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [rocm_v5 rocm_v6 cpu_avx cuda_v11 cpu cpu_avx2]"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.311+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.311+01:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.314+01:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08]"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.324+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.324+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 02 18:35:40 XPS24 ollama[135152]: time=2024-03-02T18:35:40.330+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"

but then the GPU library .so fails to load, with no further explanation:

Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3406780784/cuda_v11/libext_server.so"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama3406780784/cuda_v11/libext_server.so  Unable to init GPU: unknown error"
Mar 05 12:39:05 XPS24 ollama[135152]: time=2024-03-05T12:39:05.843+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3406780784/cpu_avx2/libext_server.so"

Any idea what could be causing this? NVIDIA and CUDA seem fine on my machine; see the output:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4080 ...    On  | 00000000:01:00.0 Off |                  N/A |
| N/A   50C    P4              18W /  60W |     10MiB / 12282MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      4130      G   /usr/lib/xorg/Xorg                            4MiB |
+---------------------------------------------------------------------------------------+
GiteaMirror added the bug label 2026-04-22 04:58:49 -05:00

@remy415 commented on GitHub (Mar 5, 2024):

@PLNech Could you please set OLLAMA_DEBUG=1 in your /etc/systemd/system/ollama.service file, reload the systemd daemon with sudo systemctl daemon-reload, restart the service with sudo systemctl restart ollama, and then see if there are better logs?

Here's what you need to add to your /etc/systemd/system/ollama.service file:

[Unit]
...

[Service]
...
# ADD THIS:
Environment="OLLAMA_DEBUG=1"

[Install]
...

Note: Yes, you can have multiple Environment= lines.
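The steps above as a shell sketch (unit name and variable as in the comment; using a systemd drop-in via `systemctl edit` is an equivalent alternative to editing the unit file in place):

```shell
# Open an override for the unit instead of editing it directly,
# then add under [Service]:
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl edit ollama

# Reload unit definitions and restart the service
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Follow the (now more verbose) logs
journalctl -u ollama -f
```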


@Vilsol commented on GitHub (Mar 6, 2024):

I think I've hit a similar issue.

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.02              Driver Version: 545.29.02    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 2080 Ti     On  | 00000000:01:00.0  On |                  N/A |
| 13%   48C    P5              32W / 200W |   2432MiB / 11264MiB |     29%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

Log with debug enabled:

time=2024-03-06T14:58:24.633+02:00 level=INFO source=images.go:710 msg="total blobs: 6"
time=2024-03-06T14:58:24.634+02:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-03-06T14:58:24.634+02:00 level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.27)"
time=2024-03-06T14:58:24.634+02:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-03-06T14:58:30.640+02:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx2 cuda_v12 cpu cpu_avx]"
time=2024-03-06T14:58:30.640+02:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-03-06T14:58:30.640+02:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-03-06T14:58:30.640+02:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-06T14:58:30.640+02:00 level=DEBUG source=gpu.go:283 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/g4ayyar0v68y72agnj5s1jsqv637fjl5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /run/opengl-driver/lib/libnvidia-ml.so* /nix/store/nx6ip2s54sfi7337abzvcfq9j8nrckhv-nvidia-x11-550.54.14-6.6.19/lib/libnvidia-ml.so*]"
time=2024-03-06T14:58:30.640+02:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/nix/store/x3gfgqvgvvxnm9vw4zymh64x43lww5a8-nvidia-x11-545.29.02-6.1.79/lib/libnvidia-ml.so.545.29.02 /nix/store/nx6ip2s54sfi7337abzvcfq9j8nrckhv-nvidia-x11-550.54.14-6.6.19/lib/libnvidia-ml.so.550.54.14]"
wiring nvidia management library functions in /nix/store/x3gfgqvgvvxnm9vw4zymh64x43lww5a8-nvidia-x11-545.29.02-6.1.79/lib/libnvidia-ml.so.545.29.02
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
CUDA driver version: 545.29.02
time=2024-03-06T14:58:30.649+02:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-03-06T14:58:30.649+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[0] CUDA device name: NVIDIA GeForce RTX 2080 Ti
[0] CUDA part number: 
nvmlDeviceGetSerial failed: 3
[0] CUDA vbios version: 90.02.0B.40.4A
[0] CUDA brand: 5
[0] CUDA totalMem 11811160064
[0] CUDA usedMem 9001959424
time=2024-03-06T14:58:30.655+02:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-03-06T14:58:30.655+02:00 level=DEBUG source=gpu.go:254 msg="cuda detected 1 devices with 7560M available memory"
time=2024-03-06T14:59:04.540+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[0] CUDA device name: NVIDIA GeForce RTX 2080 Ti
[0] CUDA part number: 
nvmlDeviceGetSerial failed: 3
[0] CUDA vbios version: 90.02.0B.40.4A
[0] CUDA brand: 5
[0] CUDA totalMem 11811160064
[0] CUDA usedMem 8989310976
time=2024-03-06T14:59:04.540+02:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-03-06T14:59:04.540+02:00 level=DEBUG source=gpu.go:254 msg="cuda detected 1 devices with 7548M available memory"
time=2024-03-06T14:59:04.540+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[0] CUDA device name: NVIDIA GeForce RTX 2080 Ti
[0] CUDA part number: 
nvmlDeviceGetSerial failed: 3
[0] CUDA vbios version: 90.02.0B.40.4A
[0] CUDA brand: 5
[0] CUDA totalMem 11811160064
[0] CUDA usedMem 8989310976
time=2024-03-06T14:59:04.540+02:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-03-06T14:59:04.540+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-06T14:59:04.540+02:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/nix-shell.ZHmI92/ollama1429801354/cuda_v12/libext_server.so /tmp/nix-shell.ZHmI92/ollama1429801354/cpu_avx2/libext_server.so]"
loading library /tmp/nix-shell.ZHmI92/ollama1429801354/cuda_v12/libext_server.so
time=2024-03-06T14:59:04.573+02:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/nix-shell.ZHmI92/ollama1429801354/cuda_v12/libext_server.so"
time=2024-03-06T14:59:04.573+02:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
[1709729944] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
[1709729944] Performing pre-initialization of GPU
time=2024-03-06T14:59:04.576+02:00 level=DEBUG source=dyn_ext_server.go:157 msg="failure during initialization: Unable to init GPU: unknown error"
time=2024-03-06T14:59:04.576+02:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/nix-shell.ZHmI92/ollama1429801354/cuda_v12/libext_server.so  Unable to init GPU: unknown error"
loading library /tmp/nix-shell.ZHmI92/ollama1429801354/cpu_avx2/libext_server.so
time=2024-03-06T14:59:04.577+02:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/nix-shell.ZHmI92/ollama1429801354/cpu_avx2/libext_server.so"
time=2024-03-06T14:59:04.577+02:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
[1709729944] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from /home/vilsol/.ollama/models/blobs/sha256:3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = codellama
llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32016]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32016]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32016]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: mismatch in special tokens definition ( 264/32016 vs 259/32016 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32016
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 16384
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 16384
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = codellama
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors:        CPU buffer size =  3647.95 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 4096
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  2048.00 MiB
llama_new_context_with_model: KV self size  = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    17.04 MiB
llama_new_context_with_model:        CPU compute buffer size =   288.00 MiB
llama_new_context_with_model: graph splits (measure): 1
[1709729945] warming up the model with an empty run
[1709729945] Available slots:
[1709729945]  -> Slot 0 - max context: 4096
time=2024-03-06T14:59:05.921+02:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
[1709729945] llama server main loop starting
[1709729945] all slots are idle and system prompt is empty, clear the KV cache
time=2024-03-06T14:59:05.921+02:00 level=DEBUG source=prompt.go:170 msg="prompt now fits in context window" required=21 window=4096
time=2024-03-06T14:59:05.921+02:00 level=DEBUG source=routes.go:1225 msg="chat handler" prompt="[INST] <<SYS>><</SYS>>\n\nHello [/INST]\n" images=0
[1709729945] slot 0 is processing [task id: 0]
[1709729945] slot 0 : in cache: 0 tokens | to process: 21 tokens
[1709729945] slot 0 : kv cache rm - [0, end)
[1709729948] sampled token:    13: '
'
[1709729948] sampled token: 18567: 'Hi'
[1709729948] sampled token: 29991: '!'
[1709729948] sampled token:     2: ''
[1709729948] 
[1709729948] print_timings: prompt eval time =    2471.49 ms /    21 tokens (  117.69 ms per token,     8.50 tokens per second)
[1709729948] print_timings:        eval time =     490.69 ms /     4 runs   (  122.67 ms per token,     8.15 tokens per second)
[1709729948] print_timings:       total time =    2962.17 ms
[1709729948] slot 0 released (25 tokens in cache)
[1709729948] next result cancel on stop
[1709729948] next result removing waiting task ID: 0
[GIN] 2024/03/06 - 14:59:08 | 200 |  4.488577794s |       127.0.0.1 | POST     "/api/chat"

I can provide a reproducible flake.nix if necessary.

llama_new_context_with_model: n_ctx = 4096 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: CPU KV buffer size = 2048.00 MiB llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB llama_new_context_with_model: CPU input buffer size = 17.04 MiB llama_new_context_with_model: CPU compute buffer size = 288.00 MiB llama_new_context_with_model: graph splits (measure): 1 [1709729945] warming up the model with an empty run [1709729945] Available slots: [1709729945] -> Slot 0 - max context: 4096 time=2024-03-06T14:59:05.921+02:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop" [1709729945] llama server main loop starting [1709729945] all slots are idle and system prompt is empty, clear the KV cache time=2024-03-06T14:59:05.921+02:00 level=DEBUG source=prompt.go:170 msg="prompt now fits in context window" required=21 window=4096 time=2024-03-06T14:59:05.921+02:00 level=DEBUG source=routes.go:1225 msg="chat handler" prompt="[INST] <<SYS>><</SYS>>\n\nHello [/INST]\n" images=0 [1709729945] slot 0 is processing [task id: 0] [1709729945] slot 0 : in cache: 0 tokens | to process: 21 tokens [1709729945] slot 0 : kv cache rm - [0, end) [1709729948] sampled token: 13: ' ' [1709729948] sampled token: 18567: 'Hi' [1709729948] sampled token: 29991: '!' [1709729948] sampled token: 2: '' [1709729948] [1709729948] print_timings: prompt eval time = 2471.49 ms / 21 tokens ( 117.69 ms per token, 8.50 tokens per second) [1709729948] print_timings: eval time = 490.69 ms / 4 runs ( 122.67 ms per token, 8.15 tokens per second) [1709729948] print_timings: total time = 2962.17 ms [1709729948] slot 0 released (25 tokens in cache) [1709729948] next result cancel on stop [1709729948] next result removing waiting task ID: 0 [GIN] 2024/03/06 - 14:59:08 | 200 | 4.488577794s | 127.0.0.1 | POST "/api/chat" ``` Can provide a reproducable `flake.nix` if necessary.
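The behavior visible in the log above — walk an ordered list of `libext_server.so` variants and drop to the CPU build when GPU pre-initialization fails — can be sketched roughly like this (a Python stand-in for illustration only; `load_first_working` and `fake_init` are hypothetical names, not Ollama's actual Go code):

```python
def load_first_working(candidates, init):
    """Return the first library path whose init() succeeds.

    Mirrors the "ordered list of LLM libraries to try" step in the log:
    each failure is recorded and the next candidate is attempted.
    """
    errors = {}
    for lib in candidates:
        try:
            init(lib)  # stands in for dlopen + llama server init
            return lib
        except RuntimeError as err:
            # corresponds to: WARN "Failed to load dynamic library ..."
            errors[lib] = str(err)
    raise RuntimeError(f"no usable LLM library: {errors}")

def fake_init(lib):
    """Hypothetical init mimicking the log: CUDA fails, CPU succeeds."""
    if "cuda" in lib:
        raise RuntimeError("Unable to init GPU: unknown error")

order = ["cuda_v12/libext_server.so", "cpu_avx2/libext_server.so"]
print(load_first_working(order, fake_init))  # falls back to the CPU build
```

This is why the model still answers, just slowly: the CPU variant always succeeds, so the request is served without GPU offload.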

@dhiltgen commented on GitHub (Mar 6, 2024):

Looking around online, I see some people reporting that `nvidia-modprobe -u` might resolve this. Can you try that on your system and report back how it goes?

<!-- gh-comment-id:1981298432 -->
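For reference, the suggested remedy amounts to the following (assumptions: a Linux host with the proprietary NVIDIA driver, and root access for both commands):

```shell
# nvidia-modprobe -u loads the nvidia-uvm kernel module and (re)creates
# /dev/nvidia-uvm, which the CUDA runtime needs. A commonly reported
# cause of "Unable to init GPU: unknown error" is this device node
# going missing after a driver update or suspend/resume.
sudo nvidia-modprobe -u
sudo systemctl restart ollama

# Sanity checks before retrying a model load: the device nodes should
# exist and nvidia-smi should list the GPU.
ls -l /dev/nvidia-uvm /dev/nvidia0
nvidia-smi
```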

@PLNech commented on GitHub (Mar 6, 2024):

@remy415 thanks for the guidance! Here's the full log of a query after enabling debug logging:

Debug logs
Mar 06 18:56:33 XPS24 ollama[591887]: time=2024-03-06T18:56:33.428+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 06 18:56:33 XPS24 ollama[591887]: [0] CUDA device name: NVIDIA GeForce RTX 4080 Laptop GPU
Mar 06 18:56:33 XPS24 ollama[591887]: [0] CUDA part number:
Mar 06 18:56:33 XPS24 ollama[591887]: nvmlDeviceGetSerial failed: 3
Mar 06 18:56:33 XPS24 ollama[591887]: [0] CUDA vbios version: 95.04.3C.40.1D
Mar 06 18:56:33 XPS24 ollama[591887]: [0] CUDA brand: 5
Mar 06 18:56:33 XPS24 ollama[591887]: [0] CUDA totalMem 12878610432
Mar 06 18:56:33 XPS24 ollama[591887]: [0] CUDA usedMem 12581601280
Mar 06 18:56:33 XPS24 ollama[591887]: time=2024-03-06T18:56:33.434+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
Mar 06 18:56:33 XPS24 ollama[591887]: time=2024-03-06T18:56:33.434+01:00 level=DEBUG source=gpu.go:254 msg="cuda detected 1 devices with 10798M available memory"
Mar 06 18:57:05 XPS24 ollama[591887]: [GIN] 2024/03/06 - 18:57:05 | 200 |      78.973µs |       127.0.0.1 | HEAD     "/"
Mar 06 18:57:05 XPS24 ollama[591887]: [GIN] 2024/03/06 - 18:57:05 | 200 |    1.796193ms |       127.0.0.1 | POST     "/api/show"
Mar 06 18:57:05 XPS24 ollama[591887]: [GIN] 2024/03/06 - 18:57:05 | 200 |     569.861µs |       127.0.0.1 | POST     "/api/show"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.456+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA device name: NVIDIA GeForce RTX 4080 Laptop GPU
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA part number:
Mar 06 18:57:05 XPS24 ollama[591887]: nvmlDeviceGetSerial failed: 3
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA vbios version: 95.04.3C.40.1D
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA brand: 5
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA totalMem 12878610432
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA usedMem 12581601280
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.456+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.456+01:00 level=DEBUG source=gpu.go:254 msg="cuda detected 1 devices with 10798M available memory"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.456+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA device name: NVIDIA GeForce RTX 4080 Laptop GPU
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA part number:
Mar 06 18:57:05 XPS24 ollama[591887]: nvmlDeviceGetSerial failed: 3
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA vbios version: 95.04.3C.40.1D
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA brand: 5
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA totalMem 12878610432
Mar 06 18:57:05 XPS24 ollama[591887]: [0] CUDA usedMem 12581601280
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.456+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.9"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.456+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.456+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama2490062218/cuda_v11/libext_server.so /tmp/ollama2490062218/cpu_avx2/libext_server.so]"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.481+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2490062218/cuda_v11/libext_server.so"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.482+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
Mar 06 18:57:05 XPS24 ollama[591887]: [1709747825] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
Mar 06 18:57:05 XPS24 ollama[591887]: [1709747825] Performing pre-initialization of GPU
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.489+01:00 level=DEBUG source=dyn_ext_server.go:157 msg="failure during initialization: Unable to init GPU: unknown error"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.489+01:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama2490062218/cuda_v11/libext_server.so  Unable to init GPU: unknown error"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.491+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2490062218/cpu_avx2/libext_server.so"
Mar 06 18:57:05 XPS24 ollama[591887]: time=2024-03-06T18:57:05.491+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
Mar 06 18:57:05 XPS24 ollama[591887]: [1709747825] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   1:                               general.name str              = mistralai
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   4:                          llama.block_count u32              = 32
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  11:                          general.file_type u32              = 2
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - kv  23:               general.quantization_version u32              = 2
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - type  f32:   65 tensors
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - type q4_0:  225 tensors
Mar 06 18:57:05 XPS24 ollama[591887]: llama_model_loader: - type q6_K:    1 tensors
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_vocab: special tokens definition check successful ( 259/32000 ).
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: format           = GGUF V3 (latest)
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: arch             = llama
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: vocab type       = SPM
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_vocab          = 32000
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_merges         = 0
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_ctx_train      = 32768
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd           = 4096
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_head           = 32
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_head_kv        = 8
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_layer          = 32
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_rot            = 128
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_head_k    = 128
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_head_v    = 128
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_gqa            = 4
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_k_gqa     = 1024
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_v_gqa     = 1024
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_ff             = 14336
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_expert         = 0
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_expert_used    = 0
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: rope scaling     = linear
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: freq_base_train  = 1000000.0
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: freq_scale_train = 1
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_yarn_orig_ctx  = 32768
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: rope_finetuned   = unknown
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model type       = 7B
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model ftype      = Q4_0
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model params     = 7.24 B
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW)
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: general.name     = mistralai
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: BOS token        = 1 '<s>'
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: EOS token        = 2 '</s>'
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: UNK token        = 0 '<unk>'
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: LF token         = 13 '<0x0A>'
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_tensors: ggml ctx size =    0.11 MiB
Mar 06 18:57:08 XPS24 ollama[591887]: llm_load_tensors:        CPU buffer size =  3917.87 MiB
Mar 06 18:57:08 XPS24 ollama[591887]: ..................................................................................................
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: n_ctx      = 2048
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: freq_base  = 1000000.0
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: freq_scale = 1
Mar 06 18:57:08 XPS24 ollama[591887]: llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model:        CPU input buffer size   =    13.02 MiB
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model:        CPU compute buffer size =   160.00 MiB
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: graph splits (measure): 1
Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] warming up the model with an empty run
Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] Available slots:
Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828]  -> Slot 0 - max context: 2048
Mar 06 18:57:08 XPS24 ollama[591887]: time=2024-03-06T18:57:08.811+01:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
Mar 06 18:57:08 XPS24 ollama[591887]: time=2024-03-06T18:57:08.811+01:00 level=DEBUG source=prompt.go:170 msg="prompt now fits in context window" required=1 window=2048
Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] llama server main loop starting
Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] all slots are idle and system prompt is empty, clear the KV cache
Mar 06 18:57:08 XPS24 ollama[591887]: [GIN] 2024/03/06 - 18:57:08 | 200 |  3.768090397s |       127.0.0.1 | POST     "/api/chat"
Mar 06 18:57:11 XPS24 ollama[591887]: time=2024-03-06T18:57:11.621+01:00 level=DEBUG source=prompt.go:170 msg="prompt now fits in context window" required=14 window=2048
Mar 06 18:57:11 XPS24 ollama[591887]: time=2024-03-06T18:57:11.621+01:00 level=DEBUG source=routes.go:1225 msg="chat handler" prompt="[INST]  Yo dawg [/INST]" images=0
Mar 06 18:57:11 XPS24 ollama[591887]: [1709747831] slot 0 is processing [task id: 0]
Mar 06 18:57:11 XPS24 ollama[591887]: [1709747831] slot 0 : in cache: 0 tokens | to process: 14 tokens
Mar 06 18:57:11 XPS24 ollama[591887]: [1709747831] slot 0 : kv cache rm - [0, end)
Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token: 22557: ' Hello'
Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token:   736: ' there'
Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token: 28808: '!'
Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token:  1602: ' How'
Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token:   541: ' can'
Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token:   315: ' I'
Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token:  6031: ' assist'
Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token:   368: ' you'
Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token:  3154: ' today'
Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token: 28725: ','
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token:   586: ' my'
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token:  1832: ' friend'
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 28804: '?'
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token:  1047: ' If'
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token:   368: ' you'
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token:   506: ' have'
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token:   707: ' any'
Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token:  4224: ' questions'
Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token:   442: ' or'
Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token:   927: ' need'
Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token:  1316: ' help'
Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token:   395: ' with'
Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token:  1545: ' something'
Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 28725: ','
Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token:   776: ' just'
Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token:  1346: ' let'
Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token:   528: ' me'
Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token:   873: ' know'
Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 28723: '.'
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token:   315: ' I'
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 28742: '''
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 28719: 'm'
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token:  1236: ' here'
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token:   298: ' to'
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token:  1038: ' make'
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token:   574: ' your'
Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token:  1370: ' day'
Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token:   264: ' a'
Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token:  1628: ' little'
Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token:  1170: ' br'
Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token:  8918: 'ighter'
Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token:   304: ' and'
Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token:  7089: ' easier'
Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 28723: '.'
Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token:  3169: ' Let'
Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 28742: '''
Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 28713: 's'
Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token:   625: ' get'
Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token:   456: ' this'
Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token:  4150: ' party'
Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token:  2774: ' started'
Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 28808: '!'
Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token:  1824: ' What'
Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 28742: '''
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 28713: 's'
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token:   356: ' on'
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token:   574: ' your'
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token:  2273: ' mind'
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 28804: '?'
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token:     2: ''
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842]
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] print_timings: prompt eval time =    1852.62 ms /    14 tokens (  132.33 ms per token,     7.56 tokens per second)
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] print_timings:        eval time =    9403.33 ms /    60 runs   (  156.72 ms per token,     6.38 tokens per second)
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] print_timings:       total time =   11255.94 ms
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] slot 0 released (74 tokens in cache)
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] next result cancel on stop
Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] next result removing waiting task ID: 0
Mar 06 18:57:22 XPS24 ollama[591887]: [GIN] 2024/03/06 - 18:57:22 | 200 |  11.25704483s |       127.0.0.1 | POST     "/api/chat"

@dhiltgen thanks for the tip! I tried that command and then `sudo systemctl restart ollama`, but it didn't seem to change the output. I'll report back after the next system reboot, in case one is required for this fix to take effect!

<!-- gh-comment-id:1981484703 --> @PLNech commented on GitHub (Mar 6, 2024): @remy415 thanks for the guidance! Here's the full log of a query after enabling debug logging: <details> <summary>Debug logs</summary> ```
Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: format = GGUF V3 (latest) Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: arch = llama Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: vocab type = SPM Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_vocab = 32000 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_merges = 0 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_ctx_train = 32768 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd = 4096 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_head = 32 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_head_kv = 8 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_layer = 32 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_rot = 128 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_head_k = 128 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_head_v = 128 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_gqa = 4 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_k_gqa = 1024 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_embd_v_gqa = 1024 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_norm_eps = 0.0e+00 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_ff = 14336 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_expert = 0 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_expert_used = 0 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: rope scaling = linear Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: freq_base_train = 1000000.0 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: freq_scale_train 
= 1 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: n_yarn_orig_ctx = 32768 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: rope_finetuned = unknown Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model type = 7B Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model ftype = Q4_0 Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model params = 7.24 B Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: model size = 3.83 GiB (4.54 BPW) Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: general.name = mistralai Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: BOS token = 1 '<s>' Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: EOS token = 2 '</s>' Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: UNK token = 0 '<unk>' Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_print_meta: LF token = 13 '<0x0A>' Mar 06 18:57:05 XPS24 ollama[591887]: llm_load_tensors: ggml ctx size = 0.11 MiB Mar 06 18:57:08 XPS24 ollama[591887]: llm_load_tensors: CPU buffer size = 3917.87 MiB Mar 06 18:57:08 XPS24 ollama[591887]: .................................................................................................. 
Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: n_ctx = 2048 Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: freq_base = 1000000.0 Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: freq_scale = 1 Mar 06 18:57:08 XPS24 ollama[591887]: llama_kv_cache_init: CPU KV buffer size = 256.00 MiB Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: CPU input buffer size = 13.02 MiB Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: CPU compute buffer size = 160.00 MiB Mar 06 18:57:08 XPS24 ollama[591887]: llama_new_context_with_model: graph splits (measure): 1 Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] warming up the model with an empty run Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] Available slots: Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] -> Slot 0 - max context: 2048 Mar 06 18:57:08 XPS24 ollama[591887]: time=2024-03-06T18:57:08.811+01:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop" Mar 06 18:57:08 XPS24 ollama[591887]: time=2024-03-06T18:57:08.811+01:00 level=DEBUG source=prompt.go:170 msg="prompt now fits in context window" required=1 window=2048 Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] llama server main loop starting Mar 06 18:57:08 XPS24 ollama[591887]: [1709747828] all slots are idle and system prompt is empty, clear the KV cache Mar 06 18:57:08 XPS24 ollama[591887]: [GIN] 2024/03/06 - 18:57:08 | 200 | 3.768090397s | 127.0.0.1 | POST "/api/chat" Mar 06 18:57:11 XPS24 ollama[591887]: time=2024-03-06T18:57:11.621+01:00 level=DEBUG source=prompt.go:170 msg="prompt now fits in context window" required=14 window=2048 Mar 06 18:57:11 XPS24 ollama[591887]: time=2024-03-06T18:57:11.621+01:00 level=DEBUG source=routes.go:1225 msg="chat handler" prompt="[INST] Yo dawg [/INST]" images=0 Mar 06 
18:57:11 XPS24 ollama[591887]: [1709747831] slot 0 is processing [task id: 0] Mar 06 18:57:11 XPS24 ollama[591887]: [1709747831] slot 0 : in cache: 0 tokens | to process: 14 tokens Mar 06 18:57:11 XPS24 ollama[591887]: [1709747831] slot 0 : kv cache rm - [0, end) Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token: 22557: ' Hello' Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token: 736: ' there' Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token: 28808: '!' Mar 06 18:57:13 XPS24 ollama[591887]: [1709747833] sampled token: 1602: ' How' Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token: 541: ' can' Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token: 315: ' I' Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token: 6031: ' assist' Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token: 368: ' you' Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token: 3154: ' today' Mar 06 18:57:14 XPS24 ollama[591887]: [1709747834] sampled token: 28725: ',' Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 586: ' my' Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 1832: ' friend' Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 28804: '?' 
Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 1047: ' If' Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 368: ' you' Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 506: ' have' Mar 06 18:57:15 XPS24 ollama[591887]: [1709747835] sampled token: 707: ' any' Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token: 4224: ' questions' Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token: 442: ' or' Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token: 927: ' need' Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token: 1316: ' help' Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token: 395: ' with' Mar 06 18:57:16 XPS24 ollama[591887]: [1709747836] sampled token: 1545: ' something' Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 28725: ',' Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 776: ' just' Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 1346: ' let' Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 528: ' me' Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 873: ' know' Mar 06 18:57:17 XPS24 ollama[591887]: [1709747837] sampled token: 28723: '.' 
Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 315: ' I' Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 28742: ''' Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 28719: 'm' Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 1236: ' here' Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 298: ' to' Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 1038: ' make' Mar 06 18:57:18 XPS24 ollama[591887]: [1709747838] sampled token: 574: ' your' Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token: 1370: ' day' Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token: 264: ' a' Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token: 1628: ' little' Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token: 1170: ' br' Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token: 8918: 'ighter' Mar 06 18:57:19 XPS24 ollama[591887]: [1709747839] sampled token: 304: ' and' Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 7089: ' easier' Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 28723: '.' Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 3169: ' Let' Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 28742: ''' Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 28713: 's' Mar 06 18:57:20 XPS24 ollama[591887]: [1709747840] sampled token: 625: ' get' Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 456: ' this' Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 4150: ' party' Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 2774: ' started' Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 28808: '!' 
Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 1824: ' What' Mar 06 18:57:21 XPS24 ollama[591887]: [1709747841] sampled token: 28742: ''' Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 28713: 's' Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 356: ' on' Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 574: ' your' Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 2273: ' mind' Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 28804: '?' Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] sampled token: 2: '' Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] print_timings: prompt eval time = 1852.62 ms / 14 tokens ( 132.33 ms per token, 7.56 tokens per second) Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] print_timings: eval time = 9403.33 ms / 60 runs ( 156.72 ms per token, 6.38 tokens per second) Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] print_timings: total time = 11255.94 ms Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] slot 0 released (74 tokens in cache) Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] next result cancel on stop Mar 06 18:57:22 XPS24 ollama[591887]: [1709747842] next result removing waiting task ID: 0 Mar 06 18:57:22 XPS24 ollama[591887]: [GIN] 2024/03/06 - 18:57:22 | 200 | 11.25704483s | 127.0.0.1 | POST "/api/chat" ``` </details> @dhiltgen thanks for the tip! I tried that command then `sudo systemctl restart ollama`, didn't seem to change the output. I'll report back after next system reboot in case that's required for this tip to work!
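The fallback is visible in those journal lines: ollama tries the `cuda_v11` `libext_server.so`, logs the `Unable to init GPU` warning, then loads the `cpu_avx2` build. As a small illustration (the helper name and regexes are mine, not part of ollama), a `journalctl -u ollama` capture from this 0.1.x log format could be classified like this:

```python
import re

def gpu_status(journal_text: str) -> str:
    """Classify ollama 0.1.x backend selection from captured journal output.

    Hypothetical helper for triage only; matches the log messages shown
    in this issue, which changed in later ollama releases.
    """
    # A CUDA library was tried but failed to initialize (this issue).
    if re.search(r'Failed to load dynamic library \S*cuda\S*', journal_text):
        return "gpu-init-failed"
    # A CUDA ext server was loaded and no failure was logged.
    if re.search(r'Loading Dynamic llm server: \S*cuda_v\d+\S*', journal_text):
        return "gpu"
    # Only a CPU build was loaded.
    if re.search(r'Loading Dynamic llm server: \S*cpu\S*', journal_text):
        return "cpu-only"
    return "unknown"
```

Running it over the log above would return `"gpu-init-failed"`, since the `Failed to load dynamic library .../cuda_v11/...` warning precedes the CPU fallback.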

@Vilsol commented on GitHub (Mar 6, 2024):

@dhiltgen That command doesn't exist on my system, and any references I can find to it only seem to apply to in-container workloads

<!-- gh-comment-id:1981495093 -->

@dhiltgen commented on GitHub (Mar 6, 2024):

@Vilsol on ubuntu it's part of the nvidia-modprobe package, which gets installed as a dependency when you install the cuda-drivers-XXX package. Not sure how it's packaged on other distros. (I'm not sure if this will actually resolve the problem, but there's some indication the unknown error when trying to initialize the GPU might be related to lacking unified memory support.)

<!-- gh-comment-id:1981880435 -->

@jmlara commented on GitHub (Mar 11, 2024):

I had the same issue running the official Docker image on a `5.15.0-56-generic #62~20.04.1-Ubuntu` host. Running `nvidia-modprobe -u` and rebooting the system resolved the issue.

<!-- gh-comment-id:1987531205 -->

@ru4en commented on GitHub (Mar 16, 2024):

This seems like a bug I had a while ago which was returning CUDA error 999. I've started to get the same `failure during initialization: Unable to init GPU: unknown error` recently, with it falling back to the CPU. As with the previous bug, reloading the NVIDIA UVM driver seemed to fix it temporarily.

Command I used: `sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm`

<!-- gh-comment-id:2001112082 -->
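That reload only works while nothing is holding `nvidia_uvm`; `rmmod` refuses to remove a module that is in use. A minimal sketch of the workaround with that check built in (my own function name; assumes POSIX sh, the proprietary NVIDIA driver, and a root shell):

```shell
# Reload nvidia_uvm only when it is safe to do so. The third lsmod
# column is the module's reference count; a non-zero count means some
# process (e.g. ollama or its container) still has it open.
reload_nvidia_uvm() {
    refcount=$(lsmod 2>/dev/null | awk '$1 == "nvidia_uvm" { print $3 }')
    if [ -z "$refcount" ]; then
        echo "nvidia_uvm not loaded; nothing to reload"
    elif [ "$refcount" -gt 0 ]; then
        echo "nvidia_uvm in use by $refcount holder(s); stop ollama (or its container) first" >&2
        return 1
    else
        rmmod nvidia_uvm && modprobe nvidia_uvm && echo "nvidia_uvm reloaded"
    fi
}
```

Source the function and call `reload_nvidia_uvm` from a root shell; if it reports holders, stop the service or container first and retry.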

@jmlara commented on GitHub (Mar 18, 2024):

I can recreate the error by placing my PC in suspend/sleep mode while the ollama Docker container is up and a model is loaded, then waking/resuming the PC.

```
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2932.04 MiB on device 0: cudaMalloc failed: unknown error
llama_model_load: error loading model: failed to allocate buffer
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '/root/.ollama/models/blobs/sha256:3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac'
time=2024-03-17T23:30:46.465Z level=WARN source=llm.go:162 msg="Failed to load dynamic library /root/.ollama/assets/0.1.28/cuda_v11/libext_server.so error loading model /root/.ollama/models/blobs/sha256:3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac"
time=2024-03-17T23:30:46.466Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so"
```

I can get ollama to use the GPU again by issuing: `sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm`

<!-- gh-comment-id:2002686925 -->

@manmatteo commented on GitHub (Mar 21, 2024):

For many, this issue is related to sleep/resume on a laptop. Unloading and reloading the kernel module is not possible in some cases. I managed to fix this by adding a systemd service that applies this:

```
options nvidia NVreg_PreserveVideoMemoryAllocations=1 NVreg_TemporaryFilePath=/tmp
```

Source: https://askubuntu.com/questions/1228423/how-do-i-fix-cuda-breaking-after-suspend (and http://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/powermanagement.html)

<!-- gh-comment-id:2011778566 -->
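That `options nvidia ...` line is a kernel module option, so it belongs in a modprobe configuration file rather than in a unit file. A sketch of one way to wire it up, per the linked NVIDIA power-management documentation (the filename is my choice; the `nvidia-suspend`/`nvidia-resume`/`nvidia-hibernate` units are shipped by NVIDIA's driver packages, so availability depends on how your distro packages the driver):

```conf
# /etc/modprobe.d/nvidia-power-management.conf
# Preserve video memory allocations across suspend, spilling contents
# to files under /tmp when system RAM is insufficient.
options nvidia NVreg_PreserveVideoMemoryAllocations=1 NVreg_TemporaryFilePath=/tmp
```

After creating the file, rebuild the initramfs if the nvidia module loads from it, enable the driver's suspend/resume units with `systemctl enable nvidia-suspend.service nvidia-resume.service nvidia-hibernate.service`, and reboot.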

@Ca-ressemble-a-du-fake commented on GitHub (Aug 8, 2024):

On my setup (Debian 12, NVIDIA drivers 560, ollama in a Docker container, headless server) I could not reload the nvidia module as advised above because it was in use (so the first command, `sudo rmmod nvidia_uvm`, did not work). In case it helps, here is my step-by-step workaround to make ollama work on resume without rebooting.

First I had to stop the ollama container that was using the nvidia_uvm module with `sudo docker stop ollama`. Afterward `lsmod | grep nvidia` showed that nvidia_uvm was used by 0 processes/modules.

Then I could reload the nvidia_uvm module as advised, `sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm`, and finally relaunch the ollama container with `sudo docker restart ollama`.

<!-- gh-comment-id:2274832593 -->
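The three steps above compose into one sequence: stop the container so it releases `nvidia_uvm`, reload the module, then bring the container back. A minimal sketch (my own function name; assumes Docker, a container literally named `ollama`, and a root shell):

```shell
# Stop the ollama container, reload nvidia_uvm, restart the container.
# Each step must succeed before the next runs, since rmmod will fail
# while the container still holds the module.
resume_fix_ollama() {
    docker stop ollama || return 1
    rmmod nvidia_uvm && modprobe nvidia_uvm || return 1
    docker start ollama
}
```

Call `resume_fix_ollama` from a root shell after waking the machine; if the `rmmod` step still fails, some other process (check `lsmod | grep nvidia`) holds the module.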
Reference: github-starred/ollama#27558