Posts by johnvick

    It happens with every reboot.


    Code
    root@omv:~# journalctl -u rrdcached
    -- Logs begin at Sun 2021-10-03 10:11:13 NZDT, end at Sun 2021-10-03 10:18:15 NZDT. --
    Oct 03 10:11:29 omv systemd[1]: Starting LSB: start or stop rrdcached...
    Oct 03 10:11:29 omv rrdcached[7222]: rrdcached started.
    Oct 03 10:11:29 omv systemd[1]: Started LSB: start or stop rrdcached.
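
    To confirm it comes up cleanly on each boot, the journal can be filtered to the current boot only - standard journalctl/systemctl usage, nothing OMV-specific:

    Code
    # show rrdcached messages from the current boot only
    journalctl -b -u rrdcached --no-pager
    # and check its current state
    systemctl status rrdcached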

    A few days ago I started to get error emails on restart as follows:

    Code
    The system monitoring needs your attention.
    Host: omv
    Date: Sat, 02 Oct 2021 15:57:35
    Service: rrdcached
    Event: Does not exist
    Description: process is not running
    This triggered the monitoring system to: restart

    Followed by a second:


    Code
    The system monitoring needs your attention.
    Host: omv
    Date: Sat, 02 Oct 2021 15:58:10
    Service: rrdcached
    Event: Exists
    Description: process is running with pid 7146
    This triggered the monitoring system to: alert
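
    The timing suggests monit polled during boot before rrdcached was up, then saw it running about 35 seconds later. Comparing monit's view against systemd's is easy enough (standard commands):

    Code
    # what monit currently thinks of its monitored services
    monit summary
    # systemd's view of the same unit
    systemctl is-active rrdcached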

    Is this anything to worry about?

    Thanks, I changed the policy to mfs, rebooted, and the new drive is now being written to. Still errors with mergerfs.balance, but there is less need to use it now.
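
    For the record, the policy can also be switched on a live pool through mergerfs's documented runtime interface - a sketch using the .mergerfs control file, with my pool's mount point:

    Code
    # set the create policy to 'most free space' at runtime
    setfattr -n user.mergerfs.category.create -v mfs /srv/144ab994-0e0f-4a42-a06b-f37e84454803/.mergerfs
    # confirm the active policy
    getfattr -n user.mergerfs.category.create /srv/144ab994-0e0f-4a42-a06b-f37e84454803/.mergerfs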


    Using the exclude option deals with the aquota* error, but now there are some new errors to investigate today.
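
    For anyone searching later, the invocation I mean is along these lines - the -e/--exclude regex flag is how I read the mergerfs-tools balance script, so treat this as a sketch:

    Code
    # skip the quota bookkeeping files while balancing
    mergerfs.balance -e 'aquota\..*' /srv/144ab994-0e0f-4a42-a06b-f37e84454803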

    Thanks for your input.


    I created two directories that already exist on the pool on the /srv/dev-disk-by-uuid-a017c3b4-20f1-4ab5-a0de-e39565cfbf07 drive (the new 3TB USB drive) and again ran:


    mergerfs.balance /srv/144ab994-0e0f-4a42-a06b-f37e84454803


    Same error message.
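
    To spell out the sequence I tried (the two directory names here are placeholders for mine):

    Code
    # recreate existing pool directories on the new branch so an
    # 'existing path' policy will consider it, then balance again
    mkdir -p /srv/dev-disk-by-uuid-a017c3b4-20f1-4ab5-a0de-e39565cfbf07/{Movies,Music}
    mergerfs.balance /srv/144ab994-0e0f-4a42-a06b-f37e84454803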



    Changed policy to most free space - same error message.


    Something isn't right with Disk3, but I can't work out what it is.

    I have a four-disk mergerfs setup:


    /srv/dev-disk-by-label-Disk1

    /srv/dev-disk-by-label-Disk2

    /srv/dev-disk-by-label-Disk3

    /srv/dev-disk-by-label-Disk4


    The array is /srv/144ab994-0e0f-4a42-a06b-f37e84454803


    I added a 3TB external USB drive:


    /srv/dev-disk-by-uuid-a017c3b4-20f1-4ab5-a0de-e39565cfbf07


    It shows as added in the file systems page.


    Policy is existing path, most free space.
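
    For reference, the pool boils down to an fstab entry roughly like this - a sketch of what OMV generates rather than a copy of mine, with standard mergerfs options ('existing path, most free space' is epmfs):

    Code
    /srv/dev-disk-by-label-Disk1:/srv/dev-disk-by-label-Disk2:/srv/dev-disk-by-label-Disk3:/srv/dev-disk-by-label-Disk4:/srv/dev-disk-by-uuid-a017c3b4-20f1-4ab5-a0de-e39565cfbf07 /srv/144ab994-0e0f-4a42-a06b-f37e84454803 fuse.mergerfs defaults,allow_other,category.create=epmfs 0 0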


    New files are not being written to the new drive, so I ran mergerfs.balance with the output below. Something is wrong with Disk3 - how can I fix it?
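
    One thing I can check myself is free space per branch, since an 'existing path' policy only writes to branches that already hold the path and then picks by free space - plain df:

    Code
    # free space on each branch of the pool
    df -h /srv/dev-disk-by-label-Disk1 /srv/dev-disk-by-label-Disk2 \
          /srv/dev-disk-by-label-Disk3 /srv/dev-disk-by-label-Disk4 \
          /srv/dev-disk-by-uuid-a017c3b4-20f1-4ab5-a0de-e39565cfbf07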


    Any help appreciated.



    I have several drives in the system and they are listed in /srv/ as:


    144ab994-0e0f-4a42-a06b-f37e84454803 (MergerFS)

    803befbe-2c10-4331-9b07-1691582f4de1 (Remote NFS mount)

    dev-disk-by-label-NVME (OS drive)

    dev-disk-by-label-Disk1 (WD 4TB drive)

    dev-disk-by-label-Disk2 (WD 4TB drive)

    dev-disk-by-label-Disk3 (WD 4TB drive)

    dev-disk-by-label-Disk4 (WD 4TB drive)

    dev-disk-by-label-Parity (5TB USB SnapRAID parity drive)


    I added a new USB 3TB drive today, added it to the MergerFS pool and SnapRAID.
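
    On the SnapRAID side that is just one more data line in snapraid.conf - the disk name d5 is a placeholder, the syntax is standard SnapRAID:

    Code
    # hypothetical entry for the new branch
    data d5 /srv/dev-disk-by-uuid-a017c3b4-20f1-4ab5-a0de-e39565cfbf07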


    It has appeared as follows:


    dev-disk-by-uuid-a017c3b4-20f1-4ab5-a0de-e39565cfbf07


    All drives have been added under OMV 5, i.e. no upgrading from OMV 4.


    Why is the new drive mounted by UUID when the older ones are mounted by label?
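
    My guess is that OMV falls back to the UUID when a filesystem carries no label; lsblk shows which is which (standard util-linux):

    Code
    # list label and UUID for every filesystem
    lsblk -o NAME,FSTYPE,LABEL,UUID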


    All working, just curious to know.


    John

    You can create some extra usable space on the parity drive as follows, quoting from the SnapRAID manual:


    In Linux, to get more space for the parity, it's recommended to format the parity file-system with the -m 0 -T largefile4 options. Like:

    Code
    mkfs.ext4 -m 0 -T largefile4 DEVICE

    On an 8 TB disk you can save about 400 GB. This is also expected to be as fast as the default, if not faster.
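
    If the parity filesystem already exists, the reserved-blocks part of that saving can be reclaimed in place with tune2fs (standard e2fsprogs; the device name below is a placeholder):

    Code
    # drop the 5% reserved-blocks allowance on an existing ext4 parity disk
    tune2fs -m 0 /dev/sdX1
    # confirm
    tune2fs -l /dev/sdX1 | grep 'Reserved block count'

    The -T largefile4 inode ratio, by contrast, can only be chosen at mkfs time.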

    My board has an M.2 slot so I may as well use it for the OS; NVMe drives are too expensive and too low in capacity to load up with movies etc.