[GH-ISSUE #7786] Can't find error log #67029

Closed
opened 2026-05-04 09:17:01 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @rolkey on GitHub (Nov 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7786

What is the issue?

  • docker-compose.yml
version: '1.0'

services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: always
    environment:
      OLLAMA_DEBUG: 1
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  ollama:
  • docker logs ollama
2024/11/21 18:07:01 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:http://192.168.137.1:7890 HTTP_PROXY:http://192.168.137.1:7890 NO_PROXY:127.0.0.1,localhost,192.168.0.0/16,172.100.0.0/16,registry.docker-cn.com OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy:http://192.168.137.1:7890 https_proxy:http://192.168.137.1:7890 no_proxy:127.0.0.1,localhost,192.168.0.0/16,172.100.0.0/16,registry.docker-cn.com]"
time=2024-11-21T18:07:01.000Z level=INFO source=images.go:755 msg="total blobs: 0"
time=2024-11-21T18:07:01.000Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-21T18:07:01.000Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.2)"
time=2024-11-21T18:07:01.001Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
time=2024-11-21T18:07:01.001Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-11-21T18:07:01.001Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-11-21T18:07:01.001Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
time=2024-11-21T18:07:01.001Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
time=2024-11-21T18:07:01.001Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 cpu]"
time=2024-11-21T18:07:01.001Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-11-21T18:07:01.001Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-11-21T18:07:01.001Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-21T18:07:01.003Z level=DEBUG source=gpu.go:94 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-11-21T18:07:01.003Z level=DEBUG source=gpu.go:509 msg="Searching for GPU library" name=libcuda.so*
time=2024-11-21T18:07:01.003Z level=DEBUG source=gpu.go:532 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-11-21T18:07:01.005Z level=DEBUG source=gpu.go:566 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.550.120]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.120
dlsym: cuInit - 0x714dde67cbc0
dlsym: cuDriverGetVersion - 0x714dde67cbe0
dlsym: cuDeviceGetCount - 0x714dde67cc20
dlsym: cuDeviceGet - 0x714dde67cc00
dlsym: cuDeviceGetAttribute - 0x714dde67cd00
dlsym: cuDeviceGetUuid - 0x714dde67cc60
dlsym: cuDeviceGetName - 0x714dde67cc40
dlsym: cuCtxCreate_v3 - 0x714dde67cee0
dlsym: cuMemGetInfo_v2 - 0x714dde686e20
dlsym: cuCtxDestroy - 0x714dde6e1850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-11-21T18:07:01.019Z level=DEBUG source=gpu.go:129 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.120
[GPU-429069ec-d6c9-40a8-58e2-489df32b2feb] CUDA totalMem 16076 mb
[GPU-429069ec-d6c9-40a8-58e2-489df32b2feb] CUDA freeMem 15934 mb
[GPU-429069ec-d6c9-40a8-58e2-489df32b2feb] Compute Capability 8.9
time=2024-11-21T18:07:01.075Z level=DEBUG source=amd_linux.go:416 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-11-21T18:07:01.075Z level=INFO source=types.go:123 msg="inference compute" id=GPU-429069ec-d6c9-40a8-58e2-489df32b2feb library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4060 Ti" total="15.7 GiB" available="15.6 GiB"
2024/11/22 01:11:38 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:http://192.168.137.1:7890 HTTP_PROXY:http://192.168.137.1:7890 NO_PROXY:127.0.0.1,localhost,192.168.0.0/16,172.100.0.0/16,registry.docker-cn.com OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy:http://192.168.137.1:7890 https_proxy:http://192.168.137.1:7890 no_proxy:127.0.0.1,localhost,192.168.0.0/16,172.100.0.0/16,registry.docker-cn.com]"
time=2024-11-22T01:11:38.136Z level=INFO source=images.go:755 msg="total blobs: 0"
time=2024-11-22T01:11:38.136Z level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-22T01:11:38.136Z level=INFO source=routes.go:1240 msg="Listening on [::]:11434 (version 0.4.2)"
time=2024-11-22T01:11:38.137Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
time=2024-11-22T01:11:38.137Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-11-22T01:11:38.137Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-11-22T01:11:38.137Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
time=2024-11-22T01:11:38.137Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
time=2024-11-22T01:11:38.137Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v11 cuda_v12 cpu cpu_avx cpu_avx2]"
time=2024-11-22T01:11:38.137Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-11-22T01:11:38.137Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-11-22T01:11:38.137Z level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-22T01:11:38.138Z level=DEBUG source=gpu.go:94 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-11-22T01:11:38.138Z level=DEBUG source=gpu.go:509 msg="Searching for GPU library" name=libcuda.so*
time=2024-11-22T01:11:38.138Z level=DEBUG source=gpu.go:532 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-11-22T01:11:38.139Z level=DEBUG source=gpu.go:566 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.550.120]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.120
dlsym: cuInit - 0x75db8e67cbc0
dlsym: cuDriverGetVersion - 0x75db8e67cbe0
dlsym: cuDeviceGetCount - 0x75db8e67cc20
dlsym: cuDeviceGet - 0x75db8e67cc00
dlsym: cuDeviceGetAttribute - 0x75db8e67cd00
dlsym: cuDeviceGetUuid - 0x75db8e67cc60
dlsym: cuDeviceGetName - 0x75db8e67cc40
dlsym: cuCtxCreate_v3 - 0x75db8e67cee0
dlsym: cuMemGetInfo_v2 - 0x75db8e686e20
dlsym: cuCtxDestroy - 0x75db8e6e1850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-11-22T01:11:38.161Z level=DEBUG source=gpu.go:129 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.120
[GPU-429069ec-d6c9-40a8-58e2-489df32b2feb] CUDA totalMem 16076 mb
[GPU-429069ec-d6c9-40a8-58e2-489df32b2feb] CUDA freeMem 15934 mb
[GPU-429069ec-d6c9-40a8-58e2-489df32b2feb] Compute Capability 8.9
time=2024-11-22T01:11:38.228Z level=DEBUG source=amd_linux.go:416 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-11-22T01:11:38.228Z level=INFO source=types.go:123 msg="inference compute" id=GPU-429069ec-d6c9-40a8-58e2-489df32b2feb library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4060 Ti" total="15.7 GiB" available="15.6 GiB"
  • docker exec -it ollama ollama list
Error: something went wrong, please see the ollama server logs for details

I can't find ollama/logs/error.log.
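For reference, the Docker image writes the server log to stdout/stderr rather than to a file, so the container log shown above is the error log. A couple of common invocations, assuming the `ollama` container name from the compose file:

```shell
# The server logs to stdout/stderr, so the container log is the error log:
docker logs ollama

# Follow recent output while reproducing the failure:
docker logs --tail 100 -f ollama
```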

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-05-04 09:17:01 -05:00
Author
Owner

@rolkey commented on GitHub (Nov 22, 2024):

  • docker exec -it ollama ollama --version
Warning: could not connect to a running Ollama instance
Warning: client version is 0.4.2
Author
Owner

@rick-github commented on GitHub (Nov 22, 2024):

docker logs ollama is the error log.

The ollama client can't connect to the ollama server inside the container. You have HTTP_PROXY set, which would cause this, but you also have NO_PROXY set, which would fix it. These environment variables are not in your docker-compose, so there's some disconnect between your config and your logs. I would suggest not setting HTTP_PROXY in the container.
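A minimal sketch of that suggestion, assuming the compose file from the issue: override the proxy variables for the `ollama` service so in-container clients are not routed through the proxy. The empty-string values here are an illustration, not the original config:

```yaml
services:
  ollama:
    environment:
      OLLAMA_DEBUG: 1
      # Override any proxy settings inherited from the host or the Docker
      # daemon so the CLI can reach the server at localhost:11434 directly:
      HTTP_PROXY: ""
      HTTPS_PROXY: ""
```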

Author
Owner

@rolkey commented on GitHub (Nov 23, 2024):

> docker logs ollama is the error log.
>
> The ollama client can't connect to the ollama server inside the container. You have HTTP_PROXY set, which would cause this, but you also have NO_PROXY set, which would fix it. These environment variables are not in your docker-compose, so there's some disconnect between your config and your logs. I would suggest not setting HTTP_PROXY in the container.

Thank you!! It works fine in the CLI:

curl http://localhost:11434/api/tags
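One way to confirm the proxy explanation, assuming the `ollama` container from the compose file and that `curl` is available in the image (it may not be), is to compare a request that bypasses the proxy with one that honors it:

```shell
# Direct request from inside the container, bypassing any proxy variables:
docker exec ollama curl -s --noproxy '*' http://localhost:11434/api/tags

# Same request honoring the container's proxy variables, to reproduce the failure:
docker exec ollama curl -s http://localhost:11434/api/tags
```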
Reference: github-starred/ollama#67029