mirror of
https://github.com/moghtech/komodo.git
synced 2026-05-14 13:21:22 -05:00
[GH-ISSUE #1273] Make Komodo ZFS-Aware for disk pool mount reporting #7784
Originally created by @Crosis47 on GitHub (Mar 26, 2026).
Original GitHub issue: https://github.com/moghtech/komodo/issues/1273
I have attempted to add my ZFS pools to Komodo to track disk usage, but there is a flaw in Komodo when it is used with ZFS: the standard Linux commands for disk space stats do not work on ZFS pools. Here's a good explanation (generated by ChatGPT) of the actual issue:
Here is a command that would fix the issue, along with what its output looks like. This command could be enabled, possibly via an environment variable or an option in the included-mounts config, to tell Komodo that a mount is ZFS and to use the ZFS commands instead of df for that specific mount. I would like to see a new environment variable such as "PERIPHERY_INCLUDE_ZFS_POOLS":
zpool list

Variation to pull a specific pool's data
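The mirrored thread does not preserve the exact variation, but `zpool list` does accept `-H` (no headers) and `-p` (exact byte values) flags that make its output easy to consume programmatically. A minimal sketch of how a reporter could parse that output (the pool name `tank` and the byte values here are illustrative sample data piped in so the snippet runs anywhere; on a real system the input would come from `zpool list -Hp -o name,size,alloc,free`):

```shell
# Sample line standing in for: zpool list -Hp -o name,size,alloc,free
# Fields are tab-separated: name, size, allocated, free (all in bytes).
printf 'tank\t10737418240\t4294967296\t6442450944\n' |
while read -r name size alloc free; do
    # Integer percentage of the pool that is allocated.
    pct=$(( alloc * 100 / size ))
    echo "$name: ${alloc} of ${size} bytes used (${pct}%)"
done
# -> tank: 4294967296 of 10737418240 bytes used (40%)
```

Because `-p` emits raw bytes rather than human-readable suffixes, no unit conversion is needed before doing arithmetic.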

Unfortunately, without this implementation, Komodo's disk mount reporting on ZFS filesystems is essentially useless.
There is also a command to list datasets and the space each one uses, but I don't think that would be useful in the context of Komodo's stat reporting. If anyone disagrees, feel free to share the command.
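The dataset-level command referred to above is presumably `zfs list`, which with the same `-H`/`-p` flags is just as machine-parseable. A small sketch (the dataset names and byte values are illustrative sample input so the snippet runs anywhere; on a live system the data would come from `zfs list -Hp -o name,used,avail`):

```shell
# Sample lines standing in for: zfs list -Hp -o name,used,avail
# Fields are tab-separated: dataset name, used bytes, available bytes.
printf 'tank\t1000\t9000\ntank/media\t800\t9000\n' |
awk -F'\t' '{ printf "%s used=%s avail=%s\n", $1, $2, $3 }'
```

Note that dataset `used` values overlap (a parent dataset's `used` includes its children), which is likely why the author considers this less useful than pool-level stats for Komodo's reporting.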
@NeurekaSoftware commented on GitHub (Mar 26, 2026):
Same issue here! Both the disk and RAM issues need to be resolved within Komodo.
@Zeilar commented on GitHub (Mar 26, 2026):
It's weird; for me there is one "disk" at the path /etc/hostname, which happens to include my ZFS array, so it somehow accidentally reports the correct total amount. But since this path is nearly empty, Komodo of course says as much, and my storage is therefore reported as 99% empty.
There is an option to hide unwanted mounts, but I don't see a way to add any.
@Crosis47 commented on GitHub (Mar 26, 2026):
This is happening because /etc/hostname always exists on the boot disk of any Linux system, so that mount is showing you your boot pool. You can add other pools, though they will just show the currently available space as the total space, with 0 used.

EDIT: I stand corrected:

I did some testing, and /etc/hostname in a Docker stack is part of the actual Docker installation, so it reflects whichever ZFS pool your Docker environment is installed on. In my case that is my NVMe-based Apps pool, as shown here from my periphery container and adguard container.
It is very poorly documented, but to add disk mounts you have to mount them as volumes in the periphery container so that it can access them (read-only, to be safe), like below:
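The original compose snippet was not preserved in this mirror; a minimal sketch of what such a bind mount looks like, assuming a compose-managed periphery container and the /mnt/Apps and /mnt/Data paths used later in this comment:

```yaml
services:
  periphery:
    # ...existing image, ports, and environment config...
    volumes:
      # Bind each pool mountpoint read-only so periphery can stat it.
      - /mnt/Apps:/mnt/Apps:ro
      - /mnt/Data:/mnt/Data:ro
```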
Then you have to include them in the environment variables like so:
PERIPHERY_INCLUDE_DISK_MOUNTS: /mnt/Apps,/mnt/Data

When you mount them, you can mount them at any path in the container and use that path in the variable. Personally, I feel that using anything other than the original path would just cause confusion about what exactly you are looking at in the UI, but it is an option if you find it useful.
I would assume that a systemd periphery installed via the binary would just need the mounts added to the environment variable; since there is no container to bind the mounts into, it should already have access to them.
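For the systemd case, a hedged sketch of how that environment variable could be supplied via a drop-in (the unit name `periphery.service` and the drop-in path are assumptions, not confirmed by this thread):

```ini
# /etc/systemd/system/periphery.service.d/override.conf  (assumed unit name)
[Service]
Environment=PERIPHERY_INCLUDE_DISK_MOUNTS=/mnt/Apps,/mnt/Data
```

After adding a drop-in like this, `systemctl daemon-reload` followed by restarting the service would be needed for the variable to take effect.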