Posts by ananas

    If I remember correctly ...
    You need to have "build-essential" installed.


    util-linux is available here: https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/
    Download, extract, ./configure, make, but do not "make install".
    I ran it straight from the directory where I had compiled it.
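
    In case it helps, these are roughly the steps I mean (version number, file name and exact URL are only an example; adjust them to whatever release you download):

    apt-get install build-essential        # build tools on Debian
    wget https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.32/util-linux-2.32.tar.xz
    tar xf util-linux-2.32.tar.xz
    cd util-linux-2.32
    ./configure
    make wipefs                            # builds ./wipefs inside the source directory
    ./wipefs -n /dev/sdX                   # run it from here, no "make install" needed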


    But as other users have reported, the wipefs that ships with Debian seems to have done the trick too.
    My issue with the "onboard" wipefs was that it reported just ONE zfs signature, whereas the one that I had compiled
    showed all zfs signatures.
    Maybe you can get along with version 2.29.2.
    Anyway, do "man wipefs" and read it carefully!


    Repeat
    1. "wipefs -n" to list the signatures
    2. "wipefs -o <offset reported by wipefs in step 1> -t zfs_member" to get rid of ONE zfs signature
    until there are no more zfs signatures listed.
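
    If you want to script that loop, here is an untested sketch (the device name /dev/md127 is only an example; it also assumes the newer wipefs output with DEVICE/OFFSET/TYPE columns):

    DEV=/dev/md127                                    # adjust to your device
    while true; do
        # offset of the first remaining zfs signature, empty if none is left
        OFFSET=$(wipefs -n "$DEV" | awk '$3 == "zfs_member" {print $2; exit}')
        [ -z "$OFFSET" ] && break
        wipefs -b -o "$OFFSET" -t zfs_member "$DEV"   # -b keeps a backup of the erased signature
    done
    wipefs -n "$DEV"                                  # should list no zfs signatures anymore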


    My filesystem got mounted after deleting the last zfs signature.


    Good luck,
    Thomas

    Are the physical disks still showing up under "Storage --> Disks"?
    Did you recently upgrade from 3.x to 4.x?
    Please post the output of "lsblk".
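
    Something like this shows the most useful columns (just a suggestion):

    lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,MOUNTPOINT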


    Cheers,
    Thomas

    Hi,
    to get rid of the ZFS signatures, run:


    (1)
    wipefs -b -o 0xe8e0d3f000 -t noext4 /dev/sdb
    wipefs -b -o 0xe8e0c3f000 -t noext4 /dev/sdb1


    (This will remove any signature BUT ext4 at the given offset on the given device)
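
    Side note: "-t" takes a comma-separated list of types, and a "no" prefix negates it, so "-t noext4" means "anything except ext4". If you want to double-check before writing anything, add "-n" for a dry run, e.g. with the offset from (1):

    wipefs -n -o 0xe8e0d3f000 -t noext4 /dev/sdb      # prints what would be erased, writes nothing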


    (2)
    After that run the following commands again:
    wipefs -n /dev/sdb
    wipefs -n /dev/sdb1


    If there are still ZFS signatures displayed, run the following commands:


    wipefs -b -o <newly-found-offset> -t noext4 /dev/sdb


    wipefs -b -o <newly-found-offset> -t noext4 /dev/sdb1


    Continue with (2) until there are no more ZFS signatures displayed.
    I needed 15 runs of "wipefs -b -o ..." (probably because I had been playing around with ZFS before going for mdadm/ext4).


    Good luck,
    Thomas

    I had a similar issue after upgrading from 3.x to 4.x.
    The issue was ZFS signatures on my 4 disks.
    Looks like the newer tools are a little bit more sensitive.
    Use "wipefs" (very carefully!) to remove the unwanted signatures from the disks.


    I had to download and compile "util-linux-2.32" as the onboard wipefs told me
    it had removed the ZFS signatures, but it hadn't.
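
    A quick way to compare the two is to run both against the same device (adjust the paths to wherever you unpacked the sources; /dev/sdX stands for your disk):

    /sbin/wipefs -n /dev/sdX                 # the wipefs shipped with the distro
    ~/util-linux-2.32/wipefs -n /dev/sdX     # the freshly compiled one, run from the build directory

    If the compiled one lists zfs_member signatures that the shipped one does not, use the compiled one for the wiping.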


    Good luck,
    Thomas

    Let me answer this myself.


    blkid did not list the md raid, but lsblk did.
    blkid -p /dev/md127 gave a hint:
    /dev/md127: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)


    man wipefs ...


    wipefs -n /dev/md127 gave
    offset type
    ----------------------------------------------------------------
    0x82fcbbefc00 zfs_member [filesystem]
    LABEL: pool1
    UUID: 17414797601597129307



    0x438 ext4 [filesystem]
    LABEL: data1
    UUID: 870cef82-b75f-4c78-a111-213389d87c3f

    I remember having fooled around with ZFS and a BSD-based system before going for openmediavault.


    wipefs -b -o 0x82fcbbefc00 -t noext4 /dev/md127
    said it had removed the zfs signature, but it hadn't.


    searching for known bugs in wipefs ...
    install build-essential, download and unpack util-linux-2.32, ./configure and "make wipefs"



    the newly compiled version of wipefs revealed:



    DEVICE  OFFSET         TYPE        UUID                                   LABEL
    md127   0x438          ext4        870cef82-b75f-4c78-a111-213389d87c3f   data1
    md127   0x82fcbbef000  zfs_member  17414797601597129307                   pool1
    ...
    wow, 15 zfs signatures (b*tch)
    ...
    md127   0x82fcbbe0c00  zfs_member  17414797601597129307                   pool1


    After 15 runs of wipefs with the listed offsets, only the ext4 signature was left and no more zfs signatures.
    blkid now lists the md array and the filesystem is visible in openmediavault.
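
    For the record, a rough way to re-check afterwards (device name as in my case):

    wipefs -n /dev/md127          # should list only the ext4 signature
    blkid -p /dev/md127           # no more "ambivalent result"
    ls -l /dev/disk/by-label/     # the label symlink should be back (a reboot or "udevadm trigger" may be needed)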



    This might be helpful for others whose filesystem is not shown by blkid and hence not in openmediavault.


    Cheers,
    Thomas

    Hi there,
    I just upgraded my OMV from version 3.x (latest) to version 4.x.
    After the final reboot the RAID5 array (md127) does not get mounted.
    /etc/fstab contains:
    ...
    /dev/disk/by-label/fs1 /srv/dev-disk-by-label-fs1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    ...


    But there is no directory "/dev/disk/by-label"



    I can see my RAID5 array in "/dev/disk/by-id/md-uuid-1f0917e1:8db686f6:f532b32f:76597ae5"


    After:
    mkdir /banane
    mount -r /dev/disk/by-id/md-uuid-1f0917e1\:8db686f6\:f532b32f\:76597ae5 /banane/ -t ext4


    The filesystem is mounted and all data is visible.
    (I unmounted it afterwards.)


    How can I get "/dev/disk/by-label" populated?
    OR
    How can I get OMV to use "/dev/disk/by-id/xxx"?



    Thanks,


    Thomas

    ... running a RAID-5 array with 4 * WD RED 3TB for 2 years now without any issue (always on).
    SMART status is fine.
    But meanwhile the 4 TB drives offer a better capacity/price ratio.


    Cheers,
    T.