[GH-ISSUE #7423] "model requires more system memory" When Running in Docker Container and Making Continue Plugin Request from Inside Intellij #30481

Closed
opened 2026-04-22 10:07:54 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @nathan-hook on GitHub (Oct 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7423

What is the issue?

I hope that this is a PEBCAK issue and that there is a quick environment setting, but with my searching I couldn't find one.

TL;DR

When using the Continue plugin (https://plugins.jetbrains.com/plugin/22707-continue) in IntelliJ and configuring it to talk to my local Docker-containerized Ollama instance, I get the following error from the plugin:

HTTP 500 Internal Server Error from http://127.0.0.1:11434/api/chat {"error":"model requires more system memory (10.1 GiB) than is available (4.1 GiB)"}
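
For context, the Continue side of this is just a model entry in its config.json that points at the port published by the container; roughly like the following (a minimal sketch, with an illustrative title and model tag):

{
  "models": [
    {
      "title": "Ollama (local)",
      "provider": "ollama",
      "model": "llama3.1:latest",
      "apiBase": "http://127.0.0.1:11434"
    }
  ]
}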

Longer Version

I am running Ollama in a Docker container along with Open WebUI via docker-compose on my Apple M2 Pro with 32 GB of memory:

services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    ports:
      - 11434:11434
    environment:
      - OLLAMA_DEBUG=1
    tty: true
    restart: unless-stopped
#    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
    image: ollama/ollama:latest

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
#    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
#      - ${OPEN_WEBUI_PORT-3000}:8080
      - 3000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
#      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}

In general, these instances have been able to handle most requests that I've made to them. I have asked general chat questions and run some Fabric AI (https://github.com/danielmiessler/fabric) queries, and Ollama/Open WebUI seemed to do just fine.

When making the requests from IntelliJ with the Continue plugin, memory suddenly became a problem.

Here is the exact error message from the Continue plugin:

HTTP 500 Internal Server Error from http://127.0.0.1:11434/api/chat {"error":"model requires more system memory (10.1 GiB) than is available (4.1 GiB)"}

Then here are the logs from Ollama with the OLLAMA_DEBUG=1 environment variable set:

2024-10-30 10:01:33 2024/10/30 16:01:33 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.049Z level=INFO source=images.go:754 msg="total blobs: 10"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.050Z level=INFO source=images.go:761 msg="total unused blobs removed: 0"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.050Z level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.14)"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 cpu]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:94 msg="searching for GPU discovery libraries for NVIDIA"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcuda.so*
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths=[]
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcudart.so*
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-10-30 10:01:33 cudaSetDevice err: 35
2024-10-30 10:01:33 time=2024-10-30T16:01:33.056Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.12.4.99: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 10:01:33 cudaSetDevice err: 35
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.11.3.109: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=DEBUG source=amd_linux.go:416 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=INFO source=gpu.go:384 msg="no compatible GPUs were discovered"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="23.4 GiB" available="4.6 GiB"
2024-10-30 10:01:46 [GIN] 2024/10/30 - 16:01:46 | 200 |    3.489458ms |      172.18.0.3 | GET      "/api/tags"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.061Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="4.6 GiB" before.free_swap="6.9 MiB" now.total="23.4 GiB" now.free="4.1 GiB" now.free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.061Z level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x7f22c0 gpu_count=1
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=sched.go:211 msg="cpu mode with first model, loading"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="4.1 GiB" before.free_swap="6.9 MiB" now.total="23.4 GiB" now.free="4.1 GiB" now.free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=INFO source=server.go:105 msg="system memory" total="23.4 GiB" free="4.1 GiB" free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=memory.go:103 msg=evaluating library=cpu gpu_count=1 available="[4.1 GiB]"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.070Z level=WARN source=server.go:137 msg="model request too large for system" requested="10.1 GiB" available=4443041792 total="23.4 GiB" free="4.1 GiB" swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.070Z level=INFO source=sched.go:428 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 error="model requires more system memory (10.1 GiB) than is available (4.1 GiB)"
2024-10-30 10:02:05 [GIN] 2024/10/30 - 16:02:05 | 500 |   18.648125ms |    192.168.65.1 | POST     "/api/chat"
2024-10-30 10:06:17 [GIN] 2024/10/30 - 16:06:17 | 200 |       181.5µs |       127.0.0.1 | GET      "/api/version"
2024-10-30 10:34:21 [GIN] 2024/10/30 - 16:34:21 | 200 |      232.25µs |       127.0.0.1 | HEAD     "/"
2024-10-30 10:34:21 [GIN] 2024/10/30 - 16:34:21 | 200 |     281.833µs |       127.0.0.1 | GET      "/api/ps"
2024-10-30 10:47:05 [GIN] 2024/10/30 - 16:47:05 | 200 |    1.833166ms |      172.18.0.3 | GET      "/api/tags"
2024-10-30 10:47:05 [GIN] 2024/10/30 - 16:47:05 | 200 |      77.917µs |      172.18.0.3 | GET      "/api/version"
2024-10-30 10:47:16 [GIN] 2024/10/30 - 16:47:16 | 200 |      61.084µs |      172.18.0.3 | GET      "/api/version"
2024-10-30 10:47:22 [GIN] 2024/10/30 - 16:47:22 | 200 |     223.334µs |      172.18.0.3 | GET      "/api/version"

Here are my docker stats:

CONTAINER ID   NAME         CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O        PIDS
2bef887cf4a9   open-webui   0.41%     698.3MiB / 23.44GiB   2.91%     28.4kB / 20.1kB   348MB / 41.5MB   22
29e804e5b9e1   ollama       0.00%     22.76MiB / 23.44GiB   0.09%     9.32kB / 8.49kB   15.2MB / 0B      12

My Docker Desktop has the following resource settings:
CPU Limit: 8
Memory Limit: 24 GB
Swap: 4 GB

Memory Statistics from my Mac:
Physical Memory: 32 GB
Memory Used: 27.20 GB
Cached Files: 4.75 GB
Swap Used: 2.07 GB

Models:
Llama3.1:latest
Mistral:7b

Any friendly direction on how to debug this issue or how to change some environment variables (in docker-compose) to just make this issue go away would be greatly appreciated.

FWIW, I am not interested in performance. I am just futzing with integrating my IDE with a local LLM. And at the end of the day, I just want to see it work...

Thank you for all your hard work. Please let me know what comments, questions, or concerns you have.

OS

macOS

GPU

Other

CPU

Apple

Ollama version

ollama version is 0.3.14

GiteaMirror added the bug label 2026-04-22 10:07:54 -05:00
Author
Owner

@nathan-hook commented on GitHub (Oct 30, 2024):

Possibly related?
https://github.com/ollama/ollama/issues/3415

Author
Owner

@rick-github commented on GitHub (Oct 30, 2024):

If you include more of the log containing the server environment variables and the calculations ollama used to assess the memory requirements, the answer would be more certain. But at a guess, Continue is requesting a large context window, which is exacerbated by ollama's default value of 4 for OLLAMA_NUM_PARALLEL. These two factors push the memory required to load the model to 10G, which exceeds the available free RAM/swap. At a base level, the model you are using can fit in just over 5G:

$ ollama ps
NAME                            ID              SIZE    PROCESSOR       UNTIL   
mistral:7b-instruct-v0.3-q4_0   2ae6f6dd7a3d    5.1 GB  100% GPU        Forever

which unfortunately is still more than your available resources. ollama thinks that you have only 7 Megabytes free swap, so despite your config of 4 Gigabytes, something is sucking up those resources. The upshot is that you don't have enough memory to load this model along with whatever else you are running. You can try adding more swap, or closing some programs.
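
For reference, a minimal sketch of that change in the compose file from the original post (only the relevant part of the ollama service shown; everything else stays as-is) would be:

services:
  ollama:
    environment:
      - OLLAMA_DEBUG=1
      - OLLAMA_NUM_PARALLEL=1

With a single parallel slot the KV cache is sized for one request instead of four, which should bring the memory estimate down substantially.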

Author
Owner

@nathan-hook commented on GitHub (Oct 30, 2024):

Thank you for your reply.

The full logs have been added to the original post, and here they are again for reference:

2024-10-30 10:01:33 2024/10/30 16:01:33 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.049Z level=INFO source=images.go:754 msg="total blobs: 10"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.050Z level=INFO source=images.go:761 msg="total unused blobs removed: 0"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.050Z level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.14)"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 cpu]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.051Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:94 msg="searching for GPU discovery libraries for NVIDIA"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcuda.so*
2024-10-30 10:01:33 time=2024-10-30T16:01:33.052Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths=[]
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcudart.so*
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.053Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-10-30 10:01:33 cudaSetDevice err: 35
2024-10-30 10:01:33 time=2024-10-30T16:01:33.056Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.12.4.99: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 10:01:33 cudaSetDevice err: 35
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.11.3.109: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=DEBUG source=amd_linux.go:416 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=INFO source=gpu.go:384 msg="no compatible GPUs were discovered"
2024-10-30 10:01:33 time=2024-10-30T16:01:33.057Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="23.4 GiB" available="4.6 GiB"
2024-10-30 10:01:46 [GIN] 2024/10/30 - 16:01:46 | 200 |    3.489458ms |      172.18.0.3 | GET      "/api/tags"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.061Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="4.6 GiB" before.free_swap="6.9 MiB" now.total="23.4 GiB" now.free="4.1 GiB" now.free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.061Z level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x7f22c0 gpu_count=1
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=sched.go:211 msg="cpu mode with first model, loading"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="4.1 GiB" before.free_swap="6.9 MiB" now.total="23.4 GiB" now.free="4.1 GiB" now.free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=INFO source=server.go:105 msg="system memory" total="23.4 GiB" free="4.1 GiB" free_swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 10:02:05 time=2024-10-30T16:02:05.069Z level=DEBUG source=memory.go:103 msg=evaluating library=cpu gpu_count=1 available="[4.1 GiB]"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.070Z level=WARN source=server.go:137 msg="model request too large for system" requested="10.1 GiB" available=4443041792 total="23.4 GiB" free="4.1 GiB" swap="6.9 MiB"
2024-10-30 10:02:05 time=2024-10-30T16:02:05.070Z level=INFO source=sched.go:428 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 error="model requires more system memory (10.1 GiB) than is available (4.1 GiB)"
2024-10-30 10:02:05 [GIN] 2024/10/30 - 16:02:05 | 500 |   18.648125ms |    192.168.65.1 | POST     "/api/chat"
2024-10-30 10:06:17 [GIN] 2024/10/30 - 16:06:17 | 200 |       181.5µs |       127.0.0.1 | GET      "/api/version"
2024-10-30 10:34:21 [GIN] 2024/10/30 - 16:34:21 | 200 |      232.25µs |       127.0.0.1 | HEAD     "/"
2024-10-30 10:34:21 [GIN] 2024/10/30 - 16:34:21 | 200 |     281.833µs |       127.0.0.1 | GET      "/api/ps"
2024-10-30 10:47:05 [GIN] 2024/10/30 - 16:47:05 | 200 |    1.833166ms |      172.18.0.3 | GET      "/api/tags"
2024-10-30 10:47:05 [GIN] 2024/10/30 - 16:47:05 | 200 |      77.917µs |      172.18.0.3 | GET      "/api/version"
2024-10-30 10:47:16 [GIN] 2024/10/30 - 16:47:16 | 200 |      61.084µs |      172.18.0.3 | GET      "/api/version"
2024-10-30 10:47:22 [GIN] 2024/10/30 - 16:47:22 | 200 |     223.334µs |      172.18.0.3 | GET      "/api/version"

I will try closing everything down and trying again.

Thanks for the advice.

Besides shutting everything down, are there any other knobs that could be turned or switches that could be flipped to get things to work?

Author
Owner

@rick-github commented on GitHub (Oct 30, 2024):

The only knob that will help here is OLLAMA_NUM_PARALLEL; setting it to 1 will reduce the amount of memory allocated for KV space. Other than that, you would need to look at Continue to reduce the size of the context window, or configure another, smaller model. I don't know if it has the appropriate training for Continue, but llama3.2 is quite a capable model and is smaller than mistral:

$ ollama ps
NAME                            ID              SIZE    PROCESSOR       UNTIL   
llama3.2:3b-instruct-q4_K_M     a80c4f17acd5    3.1 GB  100% GPU        Forever
mistral:7b-instruct-v0.3-q4_0   2ae6f6dd7a3d    5.1 GB  100% GPU        Forever

The qwen2.5 models also come in a range of sizes, some very small.
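
As a rough example (assuming the container is named ollama, as in the compose file above), pulling and trying one of the smaller models from inside the container could look like:

$ docker exec -it ollama ollama pull llama3.2:3b          # download the smaller model
$ docker exec -it ollama ollama run llama3.2:3b "hello"   # quick smoke test

Continue's model entry would then need to be switched to the new tag.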

Author
Owner

@nathan-hook commented on GitHub (Oct 30, 2024):

@rick-github

Thank you for your answers and your patience. Both are very much appreciated.

By downloading the llama3.2:3b model, editing my docker compose file, and setting OLLAMA_NUM_PARALLEL to 1, I was able to call Ollama from the Continue plugin. YES!

I was able to make my calls with all three models I have installed: llama3.2:3b, llama3.1:latest, and mistral:7b

All of them worked. Even better.

I then started futzing with OLLAMA_NUM_PARALLEL and setting it to higher and higher numbers, getting up to 8, and all three models kept working.

I then decided, well, let me just try to replicate the original error by commenting out OLLAMA_NUM_PARALLEL from my docker compose file.

And of course, it still worked. Gosh dang it. What could I have done to cause it to still work even when set back to the default value that was originally not working?

I have no idea what I might have changed. Here are the things I can remember doing between Continue not working and now, when Continue is working:

  • Closed my browser (since it was using lots of memory)
  • Opened my browser
  • Closed my Intellij
  • Opened my Intellij
  • I might have opened and closed my Docker Desktop.
  • Futzed with my docker compose file and changed the OLLAMA_NUM_PARALLEL values.

Here are all the same logs/information/stats as before.

Ollama Logs:

2024-10-30 12:33:08 2024/10/30 18:33:08 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.462Z level=INFO source=images.go:754 msg="total blobs: 15"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.462Z level=INFO source=images.go:761 msg="total unused blobs removed: 0"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.462Z level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.14)"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cuda_v11 cuda_v12]"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=gpu.go:94 msg="searching for GPU discovery libraries for NVIDIA"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcuda.so*
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths=[]
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=gpu.go:505 msg="Searching for GPU library" name=libcudart.so*
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=gpu.go:528 msg="gpu library search" globs="[/usr/lib/ollama/libcudart.so* /usr/local/nvidia/lib/libcudart.so* /usr/local/nvidia/lib64/libcudart.so* /usr/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so*]"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.463Z level=DEBUG source=gpu.go:562 msg="discovered GPU libraries" paths="[/usr/lib/ollama/libcudart.so.12.4.99 /usr/lib/ollama/libcudart.so.11.3.109]"
2024-10-30 12:33:08 cudaSetDevice err: 35
2024-10-30 12:33:08 time=2024-10-30T18:33:08.464Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.12.4.99: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 12:33:08 cudaSetDevice err: 35
2024-10-30 12:33:08 time=2024-10-30T18:33:08.465Z level=DEBUG source=gpu.go:578 msg="Unable to load cudart library /usr/lib/ollama/libcudart.so.11.3.109: your nvidia driver is too old or missing.  If you have a CUDA GPU please upgrade to run ollama"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.465Z level=DEBUG source=amd_linux.go:416 msg="amdgpu driver not detected /sys/module/amdgpu"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.465Z level=INFO source=gpu.go:384 msg="no compatible GPUs were discovered"
2024-10-30 12:33:08 time=2024-10-30T18:33:08.465Z level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant="no vector extensions" compute="" driver=0.0 name="" total="23.4 GiB" available="22.0 GiB"
2024-10-30 12:33:16 [GIN] 2024/10/30 - 18:33:16 | 200 |    1.464917ms |      172.18.0.3 | GET      "/api/tags"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.740Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="22.0 GiB" before.free_swap="3.9 GiB" now.total="23.4 GiB" now.free="21.5 GiB" now.free_swap="3.9 GiB"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.741Z level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x7f22c0 gpu_count=1
2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=DEBUG source=sched.go:211 msg="cpu mode with first model, loading"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="23.4 GiB" before.free="21.5 GiB" before.free_swap="3.9 GiB" now.total="23.4 GiB" now.free="21.5 GiB" now.free_swap="3.9 GiB"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=INFO source=server.go:105 msg="system memory" total="23.4 GiB" free="21.5 GiB" free_swap="3.9 GiB"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=DEBUG source=memory.go:103 msg=evaluating library=cpu gpu_count=1 available="[21.5 GiB]"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.775Z level=INFO source=memory.go:326 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[21.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.3 GiB" memory.required.partial="0 B" memory.required.kv="4.0 GiB" memory.required.allocations="[10.3 GiB]" memory.weights.total="7.6 GiB" memory.weights.repeating="7.2 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="2.1 GiB" memory.graph.partial="2.2 GiB"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.775Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
2024-10-30 12:33:33 time=2024-10-30T18:33:33.775Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
2024-10-30 12:33:33 time=2024-10-30T18:33:33.775Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
2024-10-30 12:33:33 time=2024-10-30T18:33:33.777Z level=DEBUG source=gpu.go:699 msg="no filter required for library cpu"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.777Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 32384 --batch-size 512 --embedding --verbose --threads 8 --no-mmap --parallel 4 --port 37131"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.777Z level=DEBUG source=server.go:405 msg=subprocess environment="[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/runners/cpu:/usr/local/nvidia/lib:/usr/local/nvidia/lib64]"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.777Z level=INFO source=sched.go:449 msg="loaded runners" count=1
2024-10-30 12:33:33 time=2024-10-30T18:33:33.777Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
2024-10-30 12:33:33 time=2024-10-30T18:33:33.778Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
2024-10-30 12:33:33 INFO [main] starting c++ runner | tid="281473522201664" timestamp=1730313213
2024-10-30 12:33:33 INFO [main] build info | build=10 commit="70b2d9b" tid="281473522201664" timestamp=1730313213
2024-10-30 12:33:33 INFO [main] system info | n_threads=8 n_threads_batch=8 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="281473522201664" timestamp=1730313213 total_threads=8
2024-10-30 12:33:33 INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="37131" tid="281473522201664" timestamp=1730313213
2024-10-30 12:33:33 llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
2024-10-30 12:33:33 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2024-10-30 12:33:33 llama_model_loader: - kv   0:                       general.architecture str              = llama
2024-10-30 12:33:33 llama_model_loader: - kv   1:                               general.type str              = model
2024-10-30 12:33:33 llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
2024-10-30 12:33:33 llama_model_loader: - kv   3:                           general.finetune str              = Instruct
2024-10-30 12:33:33 llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
2024-10-30 12:33:33 llama_model_loader: - kv   5:                         general.size_label str              = 8B
2024-10-30 12:33:33 llama_model_loader: - kv   6:                            general.license str              = llama3.1
2024-10-30 12:33:33 llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
2024-10-30 12:33:33 llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
2024-10-30 12:33:33 llama_model_loader: - kv   9:                          llama.block_count u32              = 32
2024-10-30 12:33:33 llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
2024-10-30 12:33:33 llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
2024-10-30 12:33:33 llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
2024-10-30 12:33:33 llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
2024-10-30 12:33:33 llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
2024-10-30 12:33:33 llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
2024-10-30 12:33:33 llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
2024-10-30 12:33:33 llama_model_loader: - kv  17:                          general.file_type u32              = 2
2024-10-30 12:33:33 llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
2024-10-30 12:33:33 llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
2024-10-30 12:33:33 llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
2024-10-30 12:33:33 llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
2024-10-30 12:33:33 llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
2024-10-30 12:33:33 llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2024-10-30 12:33:33 llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
2024-10-30 12:33:33 llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
2024-10-30 12:33:33 llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
2024-10-30 12:33:33 llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
2024-10-30 12:33:33 llama_model_loader: - kv  28:               general.quantization_version u32              = 2
2024-10-30 12:33:33 llama_model_loader: - type  f32:   66 tensors
2024-10-30 12:33:33 llama_model_loader: - type q4_0:  225 tensors
2024-10-30 12:33:33 llama_model_loader: - type q6_K:    1 tensors
2024-10-30 12:33:34 llm_load_vocab: special tokens cache size = 256
2024-10-30 12:33:34 time=2024-10-30T18:33:34.029Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
2024-10-30 12:33:34 llm_load_vocab: token to piece cache size = 0.7999 MB
2024-10-30 12:33:34 llm_load_print_meta: format           = GGUF V3 (latest)
2024-10-30 12:33:34 llm_load_print_meta: arch             = llama
2024-10-30 12:33:34 llm_load_print_meta: vocab type       = BPE
2024-10-30 12:33:34 llm_load_print_meta: n_vocab          = 128256
2024-10-30 12:33:34 llm_load_print_meta: n_merges         = 280147
2024-10-30 12:33:34 llm_load_print_meta: vocab_only       = 0
2024-10-30 12:33:34 llm_load_print_meta: n_ctx_train      = 131072
2024-10-30 12:33:34 llm_load_print_meta: n_embd           = 4096
2024-10-30 12:33:34 llm_load_print_meta: n_layer          = 32
2024-10-30 12:33:34 llm_load_print_meta: n_head           = 32
2024-10-30 12:33:34 llm_load_print_meta: n_head_kv        = 8
2024-10-30 12:33:34 llm_load_print_meta: n_rot            = 128
2024-10-30 12:33:34 llm_load_print_meta: n_swa            = 0
2024-10-30 12:33:34 llm_load_print_meta: n_embd_head_k    = 128
2024-10-30 12:33:34 llm_load_print_meta: n_embd_head_v    = 128
2024-10-30 12:33:34 llm_load_print_meta: n_gqa            = 4
2024-10-30 12:33:34 llm_load_print_meta: n_embd_k_gqa     = 1024
2024-10-30 12:33:34 llm_load_print_meta: n_embd_v_gqa     = 1024
2024-10-30 12:33:34 llm_load_print_meta: f_norm_eps       = 0.0e+00
2024-10-30 12:33:34 llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
2024-10-30 12:33:34 llm_load_print_meta: f_clamp_kqv      = 0.0e+00
2024-10-30 12:33:34 llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2024-10-30 12:33:34 llm_load_print_meta: f_logit_scale    = 0.0e+00
2024-10-30 12:33:34 llm_load_print_meta: n_ff             = 14336
2024-10-30 12:33:34 llm_load_print_meta: n_expert         = 0
2024-10-30 12:33:34 llm_load_print_meta: n_expert_used    = 0
2024-10-30 12:33:34 llm_load_print_meta: causal attn      = 1
2024-10-30 12:33:34 llm_load_print_meta: pooling type     = 0
2024-10-30 12:33:34 llm_load_print_meta: rope type        = 0
2024-10-30 12:33:34 llm_load_print_meta: rope scaling     = linear
2024-10-30 12:33:34 llm_load_print_meta: freq_base_train  = 500000.0
2024-10-30 12:33:34 llm_load_print_meta: freq_scale_train = 1
2024-10-30 12:33:34 llm_load_print_meta: n_ctx_orig_yarn  = 131072
2024-10-30 12:33:34 llm_load_print_meta: rope_finetuned   = unknown
2024-10-30 12:33:34 llm_load_print_meta: ssm_d_conv       = 0
2024-10-30 12:33:34 llm_load_print_meta: ssm_d_inner      = 0
2024-10-30 12:33:34 llm_load_print_meta: ssm_d_state      = 0
2024-10-30 12:33:34 llm_load_print_meta: ssm_dt_rank      = 0
2024-10-30 12:33:34 llm_load_print_meta: ssm_dt_b_c_rms   = 0
2024-10-30 12:33:34 llm_load_print_meta: model type       = 8B
2024-10-30 12:33:34 llm_load_print_meta: model ftype      = Q4_0
2024-10-30 12:33:34 llm_load_print_meta: model params     = 8.03 B
2024-10-30 12:33:34 llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
2024-10-30 12:33:34 llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
2024-10-30 12:33:34 llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
2024-10-30 12:33:34 llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
2024-10-30 12:33:34 llm_load_print_meta: LF token         = 128 'Ä'
2024-10-30 12:33:34 llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
2024-10-30 12:33:34 llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
2024-10-30 12:33:34 llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
2024-10-30 12:33:34 llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
2024-10-30 12:33:34 llm_load_print_meta: max token length = 256
2024-10-30 12:33:34 llm_load_tensors: ggml ctx size =    0.14 MiB
2024-10-30 12:33:34 llm_load_tensors:        CPU buffer size =  4437.81 MiB
2024-10-30 12:33:34 time=2024-10-30T18:33:34.531Z level=DEBUG source=server.go:632 msg="model load progress 0.06"
2024-10-30 12:33:34 time=2024-10-30T18:33:34.783Z level=DEBUG source=server.go:632 msg="model load progress 0.17"
2024-10-30 12:33:35 time=2024-10-30T18:33:35.034Z level=DEBUG source=server.go:632 msg="model load progress 0.25"
2024-10-30 12:33:35 time=2024-10-30T18:33:35.285Z level=DEBUG source=server.go:632 msg="model load progress 0.31"
2024-10-30 12:33:35 time=2024-10-30T18:33:35.538Z level=DEBUG source=server.go:632 msg="model load progress 0.35"
2024-10-30 12:33:35 time=2024-10-30T18:33:35.789Z level=DEBUG source=server.go:632 msg="model load progress 0.39"
2024-10-30 12:33:36 time=2024-10-30T18:33:36.040Z level=DEBUG source=server.go:632 msg="model load progress 0.43"
2024-10-30 12:33:36 time=2024-10-30T18:33:36.293Z level=DEBUG source=server.go:632 msg="model load progress 0.48"
2024-10-30 12:33:36 time=2024-10-30T18:33:36.545Z level=DEBUG source=server.go:632 msg="model load progress 0.53"
2024-10-30 12:33:36 time=2024-10-30T18:33:36.798Z level=DEBUG source=server.go:632 msg="model load progress 0.57"
2024-10-30 12:33:37 time=2024-10-30T18:33:37.048Z level=DEBUG source=server.go:632 msg="model load progress 0.62"
2024-10-30 12:33:37 time=2024-10-30T18:33:37.304Z level=DEBUG source=server.go:632 msg="model load progress 0.66"
2024-10-30 12:33:37 time=2024-10-30T18:33:37.558Z level=DEBUG source=server.go:632 msg="model load progress 0.71"
2024-10-30 12:33:37 time=2024-10-30T18:33:37.811Z level=DEBUG source=server.go:632 msg="model load progress 0.76"
2024-10-30 12:33:38 time=2024-10-30T18:33:38.065Z level=DEBUG source=server.go:632 msg="model load progress 0.81"
2024-10-30 12:33:38 time=2024-10-30T18:33:38.317Z level=DEBUG source=server.go:632 msg="model load progress 0.85"
2024-10-30 12:33:38 time=2024-10-30T18:33:38.569Z level=DEBUG source=server.go:632 msg="model load progress 0.90"
2024-10-30 12:33:38 time=2024-10-30T18:33:38.820Z level=DEBUG source=server.go:632 msg="model load progress 0.93"
2024-10-30 12:33:39 time=2024-10-30T18:33:39.075Z level=DEBUG source=server.go:632 msg="model load progress 0.99"
2024-10-30 12:33:39 llama_new_context_with_model: n_ctx      = 32384
2024-10-30 12:33:39 llama_new_context_with_model: n_batch    = 512
2024-10-30 12:33:39 llama_new_context_with_model: n_ubatch   = 512
2024-10-30 12:33:39 llama_new_context_with_model: flash_attn = 0
2024-10-30 12:33:39 llama_new_context_with_model: freq_base  = 500000.0
2024-10-30 12:33:39 llama_new_context_with_model: freq_scale = 1
2024-10-30 12:33:39 time=2024-10-30T18:33:39.326Z level=DEBUG source=server.go:632 msg="model load progress 1.00"
2024-10-30 12:33:39 time=2024-10-30T18:33:39.577Z level=DEBUG source=server.go:635 msg="model load completed, waiting for server to become available" status="llm server loading model"
2024-10-30 12:33:41 llama_kv_cache_init:        CPU KV buffer size =  4048.00 MiB
2024-10-30 12:33:41 llama_new_context_with_model: KV self size  = 4048.00 MiB, K (f16): 2024.00 MiB, V (f16): 2024.00 MiB
2024-10-30 12:33:41 llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
2024-10-30 12:33:41 llama_new_context_with_model:        CPU compute buffer size =  2119.26 MiB
2024-10-30 12:33:41 llama_new_context_with_model: graph nodes  = 1030
2024-10-30 12:33:41 llama_new_context_with_model: graph splits = 1
2024-10-30 12:33:43 DEBUG [initialize] initializing slots | n_slots=4 tid="281473522201664" timestamp=1730313223
2024-10-30 12:33:43 DEBUG [initialize] new slot | n_ctx_slot=8096 slot_id=0 tid="281473522201664" timestamp=1730313223
2024-10-30 12:33:43 DEBUG [initialize] new slot | n_ctx_slot=8096 slot_id=1 tid="281473522201664" timestamp=1730313223
2024-10-30 12:33:43 DEBUG [initialize] new slot | n_ctx_slot=8096 slot_id=2 tid="281473522201664" timestamp=1730313223
2024-10-30 12:33:43 DEBUG [initialize] new slot | n_ctx_slot=8096 slot_id=3 tid="281473522201664" timestamp=1730313223
2024-10-30 12:33:43 INFO [main] model loaded | tid="281473522201664" timestamp=1730313223
2024-10-30 12:33:43 DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="281473522201664" timestamp=1730313223
2024-10-30 12:33:44 DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=0 tid="281473522201664" timestamp=1730313224
2024-10-30 12:33:44 time=2024-10-30T18:33:44.118Z level=INFO source=server.go:626 msg="llama runner started in 10.34 seconds"
2024-10-30 12:33:44 time=2024-10-30T18:33:44.118Z level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
2024-10-30 12:33:44 time=2024-10-30T18:33:44.119Z level=DEBUG source=routes.go:1422 msg="chat request" images=0 prompt="<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nTesting if the llama model is working.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
2024-10-30 12:33:44 DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=1 tid="281473522201664" timestamp=1730313224
2024-10-30 12:33:44 DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=2 tid="281473522201664" timestamp=1730313224
2024-10-30 12:33:44 DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=19 slot_id=0 task_id=2 tid="281473522201664" timestamp=1730313224
2024-10-30 12:33:44 DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=2 tid="281473522201664" timestamp=1730313224
2024-10-30 12:34:19 DEBUG [print_timings] prompt eval time     =    1260.03 ms /    19 tokens (   66.32 ms per token,    15.08 tokens per second) | n_prompt_tokens_processed=19 n_tokens_second=15.079041957037429 slot_id=0 t_prompt_processing=1260.027 t_token=66.31721052631579 task_id=2 tid="281473522201664" timestamp=1730313259
2024-10-30 12:34:19 DEBUG [print_timings] generation eval time =   34017.42 ms /   370 runs   (   91.94 ms per token,    10.88 tokens per second) | n_decoded=370 n_tokens_second=10.876779551372236 slot_id=0 t_token=91.93897837837838 t_token_generation=34017.422 task_id=2 tid="281473522201664" timestamp=1730313259
2024-10-30 12:34:19 DEBUG [print_timings]           total time =   35277.45 ms | slot_id=0 t_prompt_processing=1260.027 t_token_generation=34017.422 t_total=35277.449 task_id=2 tid="281473522201664" timestamp=1730313259
2024-10-30 12:34:19 DEBUG [update_slots] slot released | n_cache_tokens=389 n_ctx=32384 n_past=388 n_system_tokens=0 slot_id=0 task_id=2 tid="281473522201664" timestamp=1730313259 truncated=false
2024-10-30 12:34:19 DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=36516 status=200 tid="281473420685536" timestamp=1730313259
2024-10-30 12:34:19 [GIN] 2024/10/30 - 18:34:19 | 200 | 45.725803813s |    192.168.65.1 | POST     "/api/chat"
2024-10-30 12:34:19 time=2024-10-30T18:34:19.440Z level=DEBUG source=sched.go:466 msg="context for request finished"
2024-10-30 12:34:19 time=2024-10-30T18:34:19.440Z level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe duration=30m0s
2024-10-30 12:34:19 time=2024-10-30T18:34:19.440Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe refCount=0

Here are my docker stats:

CONTAINER ID   NAME         CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O        PIDS
2e69f64b89af   open-webui   0.33%     521.5MiB / 23.44GiB   2.17%     16.5kB / 7.57kB   156MB / 41.5MB   22
5f46b8f57e44   ollama       0.00%     8.407GiB / 23.44GiB   35.87%    28.5kB / 80kB     3.44GB / 0B      20

Interestingly enough, the ollama MEM USAGE is way higher than it was before:
Previously: 22.76MiB / 23.44GiB 0.09%
Now: 8.407GiB / 23.44GiB 35.87%
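
This is consistent with the model simply staying resident after the request (the scheduler log above shows it going idle with a 30-minute timer). A quick sketch of how to check what is loaded and its footprint, assuming the port published in the compose file:

```
# List the models the Ollama server currently has loaded, with size and expiry time:
curl -s http://127.0.0.1:11434/api/ps
# Same information via the CLI inside the container:
docker exec ollama ollama ps
```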

My Docker Desktop has the following resource settings:
CPU Limit: 8
Memory Limit: 24 GB
Swap: 4 GB

Memory Statistics from my Mac:
Physical Memory: 32 GB
Memory Used: 27.05 GB
Cached Files: 4.92 GB
Swap Used: 3.17 GB

Any friendly suggestions on why things are working now and why they weren't before would be greatly appreciated! :)


@rick-github commented on GitHub (Oct 30, 2024):

2024-10-30 12:33:33 time=2024-10-30T18:33:33.774Z level=INFO source=server.go:105 msg="system memory" total="23.4 GiB" free="21.5 GiB" free_swap="3.9 GiB"

At the time the model was being loaded, there was 21.5G RAM and 3.9G swap, plenty of room for the 10.3G required for the model:

2024-10-30 12:33:33 time=2024-10-30T18:33:33.775Z level=INFO source=memory.go:326 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[21.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.3 GiB" memory.required.partial="0 B" memory.required.kv="4.0 GiB" memory.required.allocations="[10.3 GiB]" memory.weights.total="7.6 GiB" memory.weights.repeating="7.2 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="2.1 GiB" memory.graph.partial="2.2 GiB"

Since OLLAMA_NUM_PARALLEL is unset, ollama uses a default value of 4. Combined with a context size of 8K set by Continue, the total context size allocated is 32K (--ctx-size). This contributes to the size of the KV cache allocated, 4G (memory.required.kv, above). This is the allocation that varies depending on the number of parallel queries and the size of the context window. Setting OLLAMA_NUM_PARALLEL=1 will reduce this allocation to around 1G.
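
As a back-of-the-envelope check, the 4G KV figure (and the ~1G estimate with a single slot) can be reproduced from the values in the model-load log above; a sketch in shell arithmetic, where the leading 2 covers the K and V tensors and the second 2 is the f16 element size:

```
# n_layer = 32, n_embd_k_gqa = n_embd_v_gqa = 1024, f16 cache (2 bytes),
# ctx = 4 parallel slots x 8096 tokens = 32384
echo "$(( 2 * 32 * 1024 * 2 * 32384 / 1024 / 1024 )) MiB"   # 4048 MiB, matching llama_kv_cache_init
# With OLLAMA_NUM_PARALLEL=1 only a single ~8K slot is allocated:
echo "$(( 2 * 32 * 1024 * 2 * 8096 / 1024 / 1024 )) MiB"    # 1012 MiB, i.e. roughly 1G
```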

2024-10-30 12:33:33 time=2024-10-30T18:33:33.777Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 32384 --batch-size 512 --embedding --verbose --threads 8 --no-mmap --parallel 4 --port 37131"

My guess for the most likely culprit as the memory hog earlier would be the browser. Extensions and pages that present lots of visual effects are notorious for sucking up RAM.
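
For the containerized setup in this issue, applying the `OLLAMA_NUM_PARALLEL=1` suggestion amounts to adding that variable to the `environment:` list of the ollama service in the compose file and recreating the container; a minimal sketch, with the verification step purely illustrative:

```
# After adding OLLAMA_NUM_PARALLEL=1 under the ollama service's environment: list,
# recreate the container and confirm the variable is visible inside it:
docker compose up -d --force-recreate ollama
docker exec ollama env | grep OLLAMA_NUM_PARALLEL
```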


@ross-rosario commented on GitHub (Nov 13, 2024):

Same here. Issue started occurring with ollama v0.4, even without docker or continue dev.


@rick-github commented on GitHub (Nov 13, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.
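
For the Docker setup in this issue, one way to collect them from the running container (flags illustrative):

```
# Dump recent server output, including the DEBUG lines enabled by OLLAMA_DEBUG=1,
# into a file that can be attached to the issue:
docker logs --since 1h ollama > ollama-server.log 2>&1
```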


@nathan-hook commented on GitHub (Dec 3, 2024):

Thank you for closing this ticket and sorry for not replying before now...

Closing the browser and freeing up RAM was indeed the fix; the browser was the culprit. Once I closed it and kept only one window with a couple of tabs open, everything started to work as expected.

I have a terrible habit of keeping way too many browser tabs/windows open.

Thank you again for your help and insights.

Reference: github-starred/ollama#30481