[GH-ISSUE #6447] Ollama instance restarts when using Mistral Nemo (tried different Mistral Nemo models) #4055

Closed
opened 2026-04-12 14:57:00 -05:00 by GiteaMirror · 2 comments

Originally created by @Hyphaed on GitHub (Aug 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6447

What is the issue?

The Ollama instance restarts when using Mistral Nemo; I have tried different Mistral Nemo models.

```
INFO [local_instance.py | start] Starting Alpaca's Ollama instance...
2024/08/20 21:13:35 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ferran/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-20T21:13:35.135+02:00 level=INFO source=images.go:781 msg="total blobs: 10"
time=2024-08-20T21:13:35.136+02:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-20T21:13:35.136+02:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11435 (version 0.3.3)"
time=2024-08-20T21:13:35.136+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/home/ferran/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama2352795879/runners
INFO [local_instance.py | start] Started Alpaca's Ollama instance
INFO [local_instance.py | start] Ollama version: 0.3.3
time=2024-08-20T21:13:39.958+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
time=2024-08-20T21:13:39.958+02:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-20T21:13:40.312+02:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-d8759212-99fb-5816-f4d7-aa3b8079b843 library=cuda compute=8.6 driver=0.0 name="" total="7.7 GiB" available="6.9 GiB"
[GIN] 2024/08/20 - 21:13:40 | 200 | 556.264µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/08/20 - 21:13:40 | 200 | 314.3µs | 127.0.0.1 | GET "/api/tags"
time=2024-08-20T21:13:53.989+02:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=41 layers.offload=9 layers.split="" memory.available="[6.9 GiB]" memory.required.full="23.6 GiB" memory.required.partial="6.4 GiB" memory.required.kv="320.0 MiB" memory.required.allocations="[6.4 GiB]" memory.weights.total="20.6 GiB" memory.weights.repeating="19.4 GiB" memory.weights.nonrepeating="1.3 GiB" memory.graph.full="172.0 MiB" memory.graph.partial="801.0 MiB"
time=2024-08-20T21:13:53.990+02:00 level=INFO source=server.go:384 msg="starting llama server" cmd="/home/ferran/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama2352795879/runners/cuda_v11/ollama_llama_server --model /home/ferran/.var/app/com.jeffser.Alpaca/data/.ollama/models/blobs/sha256-7a9581ae7a87e5727aa1b0670f439ffe2a31a4bcb38ca201f9cd76ac975d31ae --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 9 --parallel 1 --port 46655"
time=2024-08-20T21:13:53.990+02:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-20T21:13:53.990+02:00 level=INFO source=server.go:584 msg="waiting for llama runner to start responding"
time=2024-08-20T21:13:53.990+02:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="6eeaeba" tid="139205684236288" timestamp=1724181234
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139205684236288" timestamp=1724181234 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="46655" tid="139205684236288" timestamp=1724181234
llama_model_loader: loaded meta data with 35 key-value pairs and 363 tensors from /home/ferran/.var/app/com.jeffser.Alpaca/data/.ollama/models/blobs/sha256-7a9581ae7a87e5727aa1b0670f439ffe2a31a4bcb38ca201f9cd76ac975d31ae (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Mistral Nemo Instruct 2407
llama_model_loader: - kv 3: general.version str = 2407
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = Mistral-Nemo
llama_model_loader: - kv 6: general.size_label str = 12B
llama_model_loader: - kv 7: general.license str = apache-2.0
llama_model_loader: - kv 8: general.languages arr[str,9] = ["en", "fr", "de", "es", "it", "pt", ...
llama_model_loader: - kv 9: llama.block_count u32 = 40
llama_model_loader: - kv 10: llama.context_length u32 = 1024000
llama_model_loader: - kv 11: llama.embedding_length u32 = 5120
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: llama.attention.key_length u32 = 128
llama_model_loader: - kv 18: llama.attention.value_length u32 = 128
llama_model_loader: - kv 19: general.file_type u32 = 1
llama_model_loader: - kv 20: llama.vocab_size u32 = 131072
llama_model_loader: - kv 21: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 22: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = tekken
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,131072] = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
time=2024-08-20T21:13:54.241+02:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server loading model"
Exception in thread Thread-4 (generate_chat_title):
Traceback (most recent call last):
File "/app/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
ERROR [window.py | connection_error] Connection error
^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/urllib3/connectionpool.py", line 537, in _make_request
INFO [local_instance.py | reset] Resetting Alpaca's Ollama instance
INFO [local_instance.py | stop] Stopping Alpaca's Ollama instance
response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/urllib3/connection.py", line 466, in getresponse
httplib_response = super().getresponse()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/http/client.py", line 1395, in getresponse
response.begin()
File "/usr/lib/python3.11/http/client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/http/client.py", line 294, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/app/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/urllib3/util/retry.py", line 470, in increment
raise reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/urllib3/util/util.py", line 38, in reraise
raise value.with_traceback(tb)
File "/app/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/urllib3/connectionpool.py", line 537, in _make_request
response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/urllib3/connection.py", line 466, in getresponse
httplib_response = super().getresponse()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/http/client.py", line 1395, in getresponse
response.begin()
File "/usr/lib/python3.11/http/client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/http/client.py", line 294, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/usr/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/app/share/Alpaca/alpaca/window.py", line 684, in generate_chat_title
response = connection_handler.simple_post(f"{connection_handler.URL}/api/generate", data=json.dumps(data))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/share/Alpaca/alpaca/connection_handler.py", line 23, in simple_post
return requests.post(connection_url, headers=get_headers(True), data=data, stream=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.11/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
INFO [local_instance.py | stop] Stopped Alpaca's Ollama instance
INFO [local_instance.py | start] Starting Alpaca's Ollama instance...
2024/08/20 21:13:55 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ferran/.var/app/com.jeffser.Alpaca/data/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-20T21:13:55.690+02:00 level=INFO source=images.go:781 msg="total blobs: 10"
time=2024-08-20T21:13:55.691+02:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-20T21:13:55.691+02:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11435 (version 0.3.3)"
time=2024-08-20T21:13:55.691+02:00 level=WARN source=assets.go:100 msg="unable to cleanup stale tmpdir" path=/home/ferran/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama2352795879 error="remove /home/ferran/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama2352795879: directory not empty"
time=2024-08-20T21:13:55.691+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/home/ferran/.var/app/com.jeffser.Alpaca/cache/tmp/ollama/ollama3510795719/runners
INFO [local_instance.py | start] Started Alpaca's Ollama instance
INFO [local_instance.py | start] Ollama version: 0.3.3
INFO [window.py | show_toast] There was an error with the local Ollama instance, so it has been reset
time=2024-08-20T21:14:00.641+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60102 cpu]"
time=2024-08-20T21:14:00.641+02:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-20T21:14:00.832+02:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-d8759212-99fb-5816-f4d7-aa3b8079b843 library=cuda compute=8.6 driver=0.0 name="" total="7.7 GiB" available="6.7 GiB"
INFO [main] model loaded | tid="139205684236288" timestamp=1724181245
```
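For context, the traceback in the log shows that Alpaca's `connection_handler.simple_post` issues a bare `requests.post` with no timeout, so when the runner process dies mid-load the client only surfaces a `RemoteDisconnected`. Below is a minimal sketch of a more defensive wrapper; the helper name, retry budget, and timeout values are illustrative assumptions, not Alpaca's actual code.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def simple_post_with_retry(url, data, headers=None, timeout=30.0):
    """POST with a bounded timeout and a small retry budget.

    Hypothetical variant of Alpaca's simple_post: retries only
    connection-level failures, never HTTP error statuses.
    """
    session = requests.Session()
    retries = Retry(total=3, backoff_factor=0.5,
                    allowed_methods=frozenset({"POST"}))
    session.mount("http://", HTTPAdapter(max_retries=retries))
    try:
        return session.post(url, headers=headers, data=data,
                            stream=False, timeout=timeout)
    except requests.exceptions.ConnectionError as err:
        # A dropped socket here matches this issue: the server closed
        # the connection without sending any response.
        raise RuntimeError(f"Ollama server dropped the connection: {err}") from err
```

Retrying a POST is only safe when the request can be repeated harmlessly; for a one-off title-generation call that trade-off seems acceptable, but it is a judgment call, not Alpaca's current behavior.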

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

client version is 0.3.6

GiteaMirror added the needs more info and bug labels 2026-04-12 14:57:00 -05:00

@pdevine commented on GitHub (Aug 30, 2024):

I just tried this on main and it's working correctly. I'm not sure what Alpaca is, but are you sure the issue is w/ Ollama? Can you test outside of Alpaca and update to the latest version?
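A minimal way to run that test, assuming a stock `ollama serve` on the default port 11434 (Alpaca's bundled instance listens on 11435, per the log above) and a locally pulled `mistral-nemo` tag:

```python
import json
import requests

# Hit the Ollama API directly, bypassing Alpaca entirely. If this call
# also dies with a dropped connection, the bug is on the Ollama side;
# if it succeeds, the problem is in the client integration.
payload = {"model": "mistral-nemo", "prompt": "Say hello.", "stream": False}
resp = requests.post("http://127.0.0.1:11434/api/generate",
                     data=json.dumps(payload), timeout=300)
resp.raise_for_status()
print(resp.json()["response"])
```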


@dhiltgen commented on GitHub (Sep 30, 2024):

If you're still having troubles, please update to the latest version of Ollama, and try to share a clean server log that isn't mixed with the python client. The logs you shared above don't appear to show any Ollama errors.
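One way to capture such a log, assuming a standalone install with `ollama serve` on the PATH (the `OLLAMA_DEBUG` variable appears in the server config dump above):

```python
import os
import subprocess

# Run a standalone server with debug logging and write its output to a
# file, so the server log is not interleaved with any client output.
env = dict(os.environ, OLLAMA_DEBUG="1")
with open("ollama-server.log", "wb") as log:
    subprocess.run(["ollama", "serve"], env=env,
                   stdout=log, stderr=subprocess.STDOUT)
```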
