[GH-ISSUE #4805] can not serve VL models #3031

Closed
opened 2026-04-12 13:26:44 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @techResearcher2021 on GitHub (Jun 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4805

What is the issue?

When I serve my VL (vision-language) models, they do not work correctly.
Here I tried MiniCPM-Llama3-V-2.5, converting it to GGUF format following the instructions in the official repository: https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md.
Then I use the model through open-webui.
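A rough sketch of the import step that would produce the model/projector blob pair visible in the log below. The file names, the model tag, and in particular the two-FROM Modelfile syntax for attaching the vision projector (mmproj) are assumptions, not details taken from this issue — check Ollama's import docs for the exact syntax:

```sh
# Hypothetical import sketch (file names and projector syntax are assumptions,
# not from the issue). After converting the HF checkpoint to GGUF with the
# OpenBMB llama.cpp fork linked above, register it with Ollama:
cat > Modelfile <<'EOF'
FROM ./minicpm-llama3-v-2_5-Q4_K_M.gguf
FROM ./mmproj-model-f16.gguf
EOF

ollama create minicpm-v2.5 -f Modelfile   # stores the two blobs referenced in the log
ollama run minicpm-v2.5 "hello"           # any request that loads the model hits the crash below
```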
The running log is shown below:

{"log":"2024/06/04 04:30:04 routes.go:1007: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"\n","stream":"stderr","time":"2024-06-04T04:30:04.87310074Z"}
{"log":"time=2024-06-04T04:30:04.874Z level=INFO source=images.go:729 msg="total blobs: 17"\n","stream":"stderr","time":"2024-06-04T04:30:04.874631858Z"}
{"log":"time=2024-06-04T04:30:04.876Z level=INFO source=images.go:736 msg="total unused blobs removed: 0"\n","stream":"stderr","time":"2024-06-04T04:30:04.876140538Z"}
{"log":"time=2024-06-04T04:30:04.877Z level=INFO source=routes.go:1053 msg="Listening on [::]:11434 (version 0.1.41)"\n","stream":"stderr","time":"2024-06-04T04:30:04.87753063Z"}
{"log":"time=2024-06-04T04:30:04.877Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1378338659/runners\n","stream":"stderr","time":"2024-06-04T04:30:04.878027708Z"}
{"log":"time=2024-06-04T04:30:08.540Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"\n","stream":"stderr","time":"2024-06-04T04:30:08.540929839Z"}
{"log":"time=2024-06-04T04:30:08.772Z level=INFO source=types.go:71 msg="inference compute" id=GPU-70127701-8921-747f-9194-ce6a8699d820 library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="21.6 GiB"\n","stream":"stderr","time":"2024-06-04T04:30:08.772983612Z"}
{"log":"time=2024-06-04T04:30:08.772Z level=INFO source=types.go:71 msg="inference compute" id=GPU-61837e28-1bfe-a560-ddd2-0a14a55cf642 library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="22.7 GiB"\n","stream":"stderr","time":"2024-06-04T04:30:08.773014492Z"}
{"log":"time=2024-06-04T04:31:24.775Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="22.7 GiB" memory.required.full="18.9 GiB" memory.required.partial="18.9 GiB" memory.required.kv="2.0 GiB" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"\n","stream":"stderr","time":"2024-06-04T04:31:24.776105102Z"}
{"log":"time=2024-06-04T04:31:24.782Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="22.7 GiB" memory.required.full="18.9 GiB" memory.required.partial="18.9 GiB" memory.required.kv="2.0 GiB" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"\n","stream":"stderr","time":"2024-06-04T04:31:24.782196984Z"}
{"log":"time=2024-06-04T04:31:24.782Z level=WARN source=server.go:230 msg="multimodal models don't support parallel requests yet"\n","stream":"stderr","time":"2024-06-04T04:31:24.782266894Z"}
{"log":"time=2024-06-04T04:31:24.783Z level=INFO source=server.go:341 msg="starting llama server" cmd="/tmp/ollama1378338659/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a7a6ce348ebc060ceb8aa973f3b0bad5d3007b7ced23228c0e1aeba59c1fb72f --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj /root/.ollama/models/blobs/sha256-64fdb7da947f450c745dae303caae7e186d84531cf4acdcddb791fb4503535b6 --flash-attn --parallel 1 --port 46139"\n","stream":"stderr","time":"2024-06-04T04:31:24.783576905Z"}
{"log":"time=2024-06-04T04:31:24.784Z level=INFO source=sched.go:338 msg="loaded runners" count=1\n","stream":"stderr","time":"2024-06-04T04:31:24.784619881Z"}
{"log":"time=2024-06-04T04:31:24.784Z level=INFO source=server.go:529 msg="waiting for llama runner to start responding"\n","stream":"stderr","time":"2024-06-04T04:31:24.784803776Z"}
{"log":"time=2024-06-04T04:31:24.785Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"\n","stream":"stderr","time":"2024-06-04T04:31:24.785306271Z"}
{"log":"INFO [main] build info | build=1 commit="5921b8f" tid="139656081428480" timestamp=1717475484\n","stream":"stdout","time":"2024-06-04T04:31:24.804439262Z"}
{"log":"INFO [main] system info | n_threads=40 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139656081428480" timestamp=1717475484 total_threads=80\n","stream":"stdout","time":"2024-06-04T04:31:24.804461692Z"}
{"log":"INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="79" port="46139" tid="139656081428480" timestamp=1717475484\n","stream":"stdout","time":"2024-06-04T04:31:24.804468913Z"}
{"log":"ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes\n","stream":"stderr","time":"2024-06-04T04:31:24.817467652Z"}
{"log":"ggml_cuda_init: CUDA_USE_TENSOR_CORES: no\n","stream":"stderr","time":"2024-06-04T04:31:24.817475275Z"}
{"log":"ggml_cuda_init: found 1 CUDA devices:\n","stream":"stderr","time":"2024-06-04T04:31:24.817478125Z"}
{"log":" Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes\n","stream":"stderr","time":"2024-06-04T04:31:24.817480583Z"}
{"log":"GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/examples/llava/clip.cpp:1024: new_clip-\u003ehas_llava_projector\n","stream":"stderr","time":"2024-06-04T04:31:24.817483029Z"}
{"log":"time=2024-06-04T04:31:25.036Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"\n","stream":"stderr","time":"2024-06-04T04:31:25.036981546Z"}
{"log":"time=2024-06-04T04:31:26.492Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server not responding"\n","stream":"stderr","time":"2024-06-04T04:31:26.492654151Z"}
{"log":"time=2024-06-04T04:31:27.230Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"\n","stream":"stderr","time":"2024-06-04T04:31:27.2308139Z"}
{"log":"time=2024-06-04T04:31:27.481Z level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) "\n","stream":"stderr","time":"2024-06-04T04:31:27.481522057Z"}
{"log":"[GIN] 2024/06/04 - 04:31:27 | 500 | 4.771330699s | 172.17.0.1 | POST "/api/chat"\n","stream":"stdout","time":"2024-06-04T04:31:27.481601072Z"}
{"log":"time=2024-06-04T04:31:31.627Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="22.7 GiB" memory.required.full="16.6 GiB" memory.required.partial="16.6 GiB" memory.required.kv="512.0 MiB" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="677.5 MiB"\n","stream":"stderr","time":"2024-06-04T04:31:31.627507969Z"}
{"log":"time=2024-06-04T04:31:31.633Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="22.7 GiB" memory.required.full="16.6 GiB" memory.required.partial="16.6 GiB" memory.required.kv="512.0 MiB" memory.weights.total="14.0 GiB" memory.weights.repeating="13.0 GiB" memory.weights.nonrepeating="1002.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="677.5 MiB"\n","stream":"stderr","time":"2024-06-04T04:31:31.633800635Z"}
{"log":"time=2024-06-04T04:31:31.633Z level=WARN source=server.go:230 msg="multimodal models don't support parallel requests yet"\n","stream":"stderr","time":"2024-06-04T04:31:31.633876464Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=server.go:341 msg="starting llama server" cmd="/tmp/ollama1378338659/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a7a6ce348ebc060ceb8aa973f3b0bad5d3007b7ced23228c0e1aeba59c1fb72f --ctx-size 4096 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --mmproj /root/.ollama/models/blobs/sha256-64fdb7da947f450c745dae303caae7e186d84531cf4acdcddb791fb4503535b6 --flash-attn --parallel 1 --port 40265"\n","stream":"stderr","time":"2024-06-04T04:31:31.634136257Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=sched.go:338 msg="loaded runners" count=1\n","stream":"stderr","time":"2024-06-04T04:31:31.63453042Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=server.go:529 msg="waiting for llama runner to start responding"\n","stream":"stderr","time":"2024-06-04T04:31:31.634547574Z"}
{"log":"time=2024-06-04T04:31:31.634Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"\n","stream":"stderr","time":"2024-06-04T04:31:31.634736673Z"}
{"log":"INFO [main] build info | build=1 commit="5921b8f" tid="140615737364480" timestamp=1717475491\n","stream":"stdout","time":"2024-06-04T04:31:31.652889203Z"}
{"log":"INFO [main] system info | n_threads=40 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140615737364480" timestamp=1717475491 total_threads=80\n","stream":"stdout","time":"2024-06-04T04:31:31.652907647Z"}
{"log":"INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="79" port="40265" tid="140615737364480" timestamp=1717475491\n","stream":"stdout","time":"2024-06-04T04:31:31.652991725Z"}
{"log":"ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes\n","stream":"stderr","time":"2024-06-04T04:31:31.66543453Z"}
{"log":"ggml_cuda_init: CUDA_USE_TENSOR_CORES: no\n","stream":"stderr","time":"2024-06-04T04:31:31.66545219Z"}
{"log":"ggml_cuda_init: found 1 CUDA devices:\n","stream":"stderr","time":"2024-06-04T04:31:31.665455804Z"}
{"log":" Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes\n","stream":"stderr","time":"2024-06-04T04:31:31.666761164Z"}
{"log":"GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/examples/llava/clip.cpp:1024: new_clip-\u003ehas_llava_projector\n","stream":"stderr","time":"2024-06-04T04:31:31.666775789Z"}
{"log":"time=2024-06-04T04:31:31.887Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"\n","stream":"stderr","time":"2024-06-04T04:31:31.887765132Z"}
{"log":"time=2024-06-04T04:31:32.525Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.043738778\n","stream":"stderr","time":"2024-06-04T04:31:32.525213094Z"}
{"log":"time=2024-06-04T04:31:32.739Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.257773327\n","stream":"stderr","time":"2024-06-04T04:31:32.739367248Z"}
{"log":"time=2024-06-04T04:31:32.990Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.50864897\n","stream":"stderr","time":"2024-06-04T04:31:32.990175753Z"}
{"log":"time=2024-06-04T04:31:33.092Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server not responding"\n","stream":"stderr","time":"2024-06-04T04:31:33.092506363Z"}
{"log":"time=2024-06-04T04:31:33.875Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"\n","stream":"stderr","time":"2024-06-04T04:31:33.875501116Z"}
{"log":"time=2024-06-04T04:31:34.126Z level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) "\n","stream":"stderr","time":"2024-06-04T04:31:34.126784317Z"}
{"log":"[GIN] 2024/06/04 - 04:31:34 | 500 | 4.781078255s | 172.17.0.1 | POST "/v1/chat/completions"\n","stream":"stdout","time":"2024-06-04T04:31:34.127032625Z"}
{"log":"time=2024-06-04T04:31:39.349Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.223073684\n","stream":"stderr","time":"2024-06-04T04:31:39.350100964Z"}
{"log":"time=2024-06-04T04:31:39.600Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.47392563\n","stream":"stderr","time":"2024-06-04T04:31:39.60084398Z"}
{"log":"time=2024-06-04T04:31:39.850Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.723634591\n","stream":"stderr","time":"2024-06-04T04:31:39.850455842Z"}

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.1.41
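For context, a sketch (assumed, not copied from the actual deployment) of a docker run matching the server config in the log above: flash attention enabled, OLLAMA_NUM_PARALLEL=2, OLLAMA_MAX_LOADED_MODELS=3, Ollama 0.1.41 with GPU access:

```sh
# Assumed reconstruction of the container setup implied by the logged env
# (GPU-enabled Docker, flash attention on, 2 parallel slots, 3 loadable models).
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  -e OLLAMA_FLASH_ATTENTION=1 \
  -e OLLAMA_NUM_PARALLEL=2 \
  -e OLLAMA_MAX_LOADED_MODELS=3 \
  --name ollama ollama/ollama:0.1.41
```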

GiteaMirror added the bug label 2026-04-12 13:26:45 -05:00
Author
Owner

@agdkgg commented on GitHub (Jun 4, 2024):

Windows 10 system, same error here as well, with the flash-attn option enabled.

<!-- gh-comment-id:2147177134 --> @agdkgg commented on GitHub (Jun 4, 2024): Windows 10 system, other same error, enable flash-attn function.
Author
Owner

@jmorganca commented on GitHub (Jun 9, 2024):

Hi there, merging this with https://github.com/ollama/ollama/issues/4900. Thanks for the issue!

<!-- gh-comment-id:2156703862 --> @jmorganca commented on GitHub (Jun 9, 2024): Hi there, merging this with https://github.com/ollama/ollama/issues/4900. Thanks for the issue!
Reference: github-starred/ollama#3031