Posts by davidknudsen

New drive arrived. 'zpool replace' worked without a hitch, even though the old drive was showing as FAULTED instead of OFFLINE.

    For reference, this is what worked for me:

    /etc/init.d/zfs-zed stop
    zpool replace data ata-ST12000NM0007-2A1101_ZJV2LDGN /dev/disk/by-id/ata-ST12000NM0007-2A1101_ZJV1T4YF
    /etc/init.d/zfs-zed start

    In summary: Just chill, everything will work out fine. :-)

    Shortly after setting up my zpool, one Seagate drive started showing bad sectors ... so it is being returned for a replacement.

The procedure for replacing the drive seems to be:

    • zpool offline <pool> <bad drive>
    • zpool replace <pool> <bad drive> <new drive>
    • wait for resilver
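    Spelled out as commands, the steps above would look something like this on my system (pool name 'data', old drive ZJV2LDGN, new drive ZJV1T4YF; adjust for your own setup):

    ```shell
    # Take the failing drive offline (may be a no-op if it is already FAULTED)
    zpool offline data ata-ST12000NM0007-2A1101_ZJV2LDGN

    # Replace it with the new drive, referenced by its stable by-id path
    zpool replace data ata-ST12000NM0007-2A1101_ZJV2LDGN \
        /dev/disk/by-id/ata-ST12000NM0007-2A1101_ZJV1T4YF

    # Check resilver progress until it completes
    zpool status data
    ```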

    Unfortunately, zpool offline does not have the expected result of changing the state to 'OFFLINE' -- the drive still shows up as 'FAULTED'.

    I have tried all the variations of 'zpool offline' I could think of:

    • zpool offline data ata-ST12000NM0007-2A1101_ZJV2LDGN
    • zpool offline data /dev/disk/by-id/ata-ST12000NM0007-2A1101_ZJV2LDGN
    • zpool offline data 3630560290011746901 (GUID)
    • zpool offline -f data 3630560290011746901 (GUID)

    There are no errors reported from the 'zpool offline' commands, but also no change in the state of the drive.
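    For anyone else chasing this: the vdev GUID I used above can be read straight from zpool status, no zdb needed (the -g flag prints GUIDs in place of device names, and -P prints full device paths):

    ```shell
    # Show vdev GUIDs instead of device names (ZoL 0.7+)
    zpool status -g data

    # Cross-check against the full by-id device paths
    zpool status -P data
    ```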

    Will this cause me trouble when the time comes to 'zpool replace' this drive with a new drive or should I just chill? :-)

    Thanks for any insights or shared experiences!

    Thank you for responding!

    After rebooting, the pool is still missing from the ZFS overview. Also, the message 'A mirror must contain at least 2 disks' immediately pops up when selecting ZFS. zpool status still seems OK.

    Is the ZFS GUI unable to handle a pool with several mirror vdevs? Maybe I need to use two RAIDZ2 vdevs instead (4x12 TB + 4x8 TB).

    Edit: Creating the pool by CLI and rebooting: Same result. No pool in ZFS GUI, error message pops up, zpool status looks OK.

    Fresh install of OMV5 with the Proxmox kernel (1) and openmediavault-zfs (2). Trying to create a ZFS pool with four mirrored vdevs (2x12 TB + 2x12 TB + 2x8 TB + 2x8 TB).

    I'm able to create the pool with the first mirror vdev, but expanding the pool with a second set of mirror drives fails with the error message: 'A mirror must contain at least 2 disks'. Yes, I am selecting two drives when expanding the pool :-)

    Confusingly, zpool status seems to be OK?

    root@omv-nas:~# zpool status
      pool: zfs-tank
     state: ONLINE
      scan: none requested
    config:

        NAME                                   STATE     READ WRITE CKSUM
        zfs-tank                               ONLINE       0     0     0
          mirror-0                             ONLINE       0     0     0
            ata-ST12000NM0007-2A1101_ZJV310R4  ONLINE       0     0     0
            ata-ST12000NM0007-2A1101_ZJV3B4T7  ONLINE       0     0     0
          mirror-1                             ONLINE       0     0     0
            ata-ST12000NM0007-2A1101_ZJV24C09  ONLINE       0     0     0
            ata-ST12000NM0007-2A1101_ZJV2LDGN  ONLINE       0     0     0

    After the attempt to expand with a second set of drives, the pool is missing from the ZFS overview. Also, the message 'A mirror must contain at least 2 disks' immediately pops up when selecting ZFS.

    Should I expand the pool using CLI instead of GUI, or what am I doing wrong here?
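    In case it helps others, the CLI equivalent of the GUI expansion would be 'zpool add' with a new mirror vdev (device names below are the mirror-1 members from my pool; the -n flag does a dry run first):

    ```shell
    # Dry run: show what the pool layout would become, without changing anything
    zpool add -n zfs-tank mirror \
        /dev/disk/by-id/ata-ST12000NM0007-2A1101_ZJV24C09 \
        /dev/disk/by-id/ata-ST12000NM0007-2A1101_ZJV2LDGN

    # The same command without -n actually attaches the new mirror vdev
    ```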

    Thanks for any pointers in the right direction!

    (1) Proxmox kernel installed because the 5.2 kernel in backports is currently uninstallable: linux-image-amd64 depends on linux-image-5.2.0-0.bpo.3-amd64, which is missing from the backports repository.

    (2) openmediavault-zfs installation initially failed because zfs module wasn't ready when setting up zfsutils-linux. Fixed with modprobe zfs; apt install.
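    That is, roughly (the second line rerunning the package setup that failed the first time):

    ```shell
    # Load the ZFS kernel module before zfsutils-linux is configured
    modprobe zfs

    # Retry the failed installation/configuration
    apt install openmediavault-zfs    # or: dpkg --configure -a
    ```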