[GH-ISSUE #13969] 'IndexError: list index out of range' error with Python 3.13 in Debian 13 #9137

Closed
opened 2026-04-12 21:59:24 -05:00 by GiteaMirror · 2 comments

Originally created by @Efenstor on GitHub (Jan 29, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13969

What is the issue?

In the Flatpak version, the local Ollama instance does not work with either Vulkan or CPU.

Relevant log output

INFO    [main.py | main] Alpaca version: 9.0.0
INFO    [ollama_instances.py | start] Starting Alpaca's Ollama instance...
INFO    [ollama_instances.py | start] Started Alpaca's Ollama instance
Couldn't find '/home/user/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 
time=2026-01-29T16:45:12.623+07:00 level=INFO source=routes.go:1631 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:1 HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/media/extra/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://127.0.0.1:11435 http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:1 http_proxy: https_proxy: no_proxy:]"

INFO    [ollama_instances.py | start] Ollama version is 0.15.2
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAZwkG4iVrQVtemhAMU8WsNOxBOJjNiHEx2jqM3pUi3A
time=2026-01-29T16:45:12.623+07:00 level=INFO source=images.go:473 msg="total blobs: 5"

time=2026-01-29T16:45:12.623+07:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-01-29T16:45:12.624+07:00 level=INFO source=routes.go:1684 msg="Listening on 127.0.0.1:11435 (version 0.15.2)"
time=2026-01-29T16:45:12.624+07:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-29T16:45:12.624+07:00 level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=0
time=2026-01-29T16:45:12.624+07:00 level=WARN source=runner.go:485 msg="user overrode visible devices" HIP_VISIBLE_DEVICES=1
time=2026-01-29T16:45:12.624+07:00 level=WARN source=runner.go:485 msg="user overrode visible devices" ROCR_VISIBLE_DEVICES=1
time=2026-01-29T16:45:12.624+07:00 level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-01-29T16:45:12.625+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --ollama-engine --port 35499"
time=2026-01-29T16:45:12.655+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --ollama-engine --port 33783"
time=2026-01-29T16:45:12.673+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --ollama-engine --port 46817"
time=2026-01-29T16:45:12.730+07:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-01-29T16:45:12.730+07:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.3 GiB" available="27.1 GiB"
time=2026-01-29T16:45:12.730+07:00 level=INFO source=routes.go:1725 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2026/01/29 - 16:45:12 | 200 |     357.618µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/01/29 - 16:45:12 | 200 |  234.040889ms |       127.0.0.1 | POST     "/api/show"
Exception in thread Thread-5 (generate_message):
Traceback (most recent call last):
  File "/usr/lib/python3.13/threading.py", line 1044, in _bootstrap_inner
    self.run()
    ~~~~~~~~^^
  File "/usr/lib/python3.13/threading.py", line 995, in run
    self._target(*self._args, **self._kwargs)
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/share/Alpaca/alpaca/widgets/instances/ollama_instances.py", line 81, in generate_message
    messages[-1].get('content'),
    ~~~~~~~~^^^^
IndexError: list index out of range
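
The crash itself is visible in the last frame: `generate_message` evaluates `messages[-1]` on an empty list. A minimal sketch of the failure mode and a defensive guard, assuming `messages` is a plain list of message dicts (illustrative only, not the actual Alpaca code or its eventual fix):

```python
# Hypothetical reproduction of the crash in the traceback above.
# `messages` and the guard are assumptions for illustration; the real
# Alpaca code at ollama_instances.py line 81 is not shown here.

def generate_message(messages):
    # messages[-1] raises IndexError when the chat history is empty,
    # which matches the exception in the log. Guarding first avoids it.
    if not messages:
        return None  # or surface a handled, user-facing error instead
    return messages[-1].get('content')

print(generate_message([]))                    # None instead of IndexError
print(generate_message([{'content': 'hi'}]))   # hi
```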

OS

Debian Linux 13 (trixie)

GPU

AMD Radeon 5600 XT (Vulkan)

CPU

AMD Ryzen 7 2700 Eight-Core Processor

Ollama version

0.15.2

GiteaMirror added the bug label 2026-04-12 21:59:24 -05:00

@rick-github commented on GitHub (Jan 29, 2026):

Alpaca GitHub issues are here: https://github.com/Jeffser/Alpaca/issues


@Efenstor commented on GitHub (Jan 29, 2026):

Oops, sorry.
