Change / Upgrade drives / need double check

  • Hi,


    my disks are getting on in age (six years of runtime) and I would like to replace them with something new and larger. I need a double check on whether what I plan sounds sensible.


    I have an HP Gen 8 MicroServer running ESXi 6.7. ESXi boots from an internal USB stick, and the ESXi datastore is an SSD connected to the internal SATA adapter of the MicroServer. OMV5 runs as a VM with its boot disk on the ESXi datastore:

    • Four disks in the drive bays are connected to an LSI HBA flashed to IT mode (the disks are seen as four individual SATA disks)
    • The OMV boot disk is in the ESXi data store, the LSI HBA is passed through to OMV.
    • The four disks (3 TB each) are managed as a RAID5 array in OMV, giving 8 TB of effective space; about 40% of it is used.
    • Backups are stored on an external disk and a separate OMV server.
    • I need the redundancy for availability reasons, so I have to go with RAID or ZFS, and I came to the conclusion that ZFS is the way to go for future expandability.
    • The new drives will be 8TB disks.


    What I plan to do:

    1. create a snapshot of the OMV boot disk in ESXi
    2. install ZFS using omv-extras
    3. degrade the array by removing one disk in the OMV UI
    4. replace this disk with a new 8 TB disk
    5. create a single-disk vdev from the new disk
    6. create a ZFS pool with this single VDEV
    7. rsync the contents of the array to the ZFS pool
    8. shift the services from the array to the ZFS pool
    9. delete the array in OMV
    10. replace the disks
    11. attach a second 8 TB disk to the vdev, making it a mirror
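
    On the command line, steps 5-7 and 11 above might look roughly like this. This is only a sketch: the pool name (tank), device names, and the mount point /srv/raid5 are placeholders, not taken from the actual system.

```shell
# Steps 5/6: create a pool backed by a single-disk vdev
# (ashift=12 for 4K-sector drives; device name is a placeholder)
zpool create -o ashift=12 tank /dev/disk/by-id/ata-NEW8TB-1

# Step 7: copy the array contents, preserving hard links (-H),
# ACLs (-A) and extended attributes (-X)
rsync -aHAX /srv/raid5/ /tank/

# Step 11 (later, once the old array is gone and the bay is free):
# attach a second 8 TB disk; ZFS turns the single-disk vdev into a
# mirror and resilvers automatically
zpool attach tank /dev/disk/by-id/ata-NEW8TB-1 /dev/disk/by-id/ata-NEW8TB-2
```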


    Questions:

    1. Is ZFS the way to go for my needs?
    2. I have a lot of hard links (rsnapshot backups). Does ZFS cope with that?
    3. Is the plan feasible in your opinion, or does it need modification?


    Thanks in advance and happy new year to everybody.

    If you got help in the forum and want to give something back to the project click here (omv) or here (scroll down) (plugins) and write up your solution for others.

    • Official post

    After studying the available options I came to the same conclusion: ZFS seems like the right thing to do. The downside is that ZFS is not yet very flexible regarding expansion procedures. I read somewhere that they plan to release new functionality in that area this year; it is still in development.


    Regarding your plan: you will need to copy the data twice, first onto the first 8 TB disk and again when creating the mirror. Since you have a backup, wouldn't it be easier to delete the current RAID and create the mirror with the two 8 TB disks? You would only have to copy the data once, from your second server.

  • Thanks Chente,


    I can't afford the downtime, so I have to do all this while the system is running (except for some small steps during the last rsync).

    Regarding copying twice: wouldn't ZFS automatically resilver the vdev if I add a second disk to it?


    • Official post

    Regarding copying twice: wouldn't ZFS automatically resilver the vdev if I add a second disk to it?

    I don't know the exact process, but it is logical to think that ZFS will do it automatically, as you say. In any case it will end up being a second copy: the data must be transferred to that disk one way or another, since at the end of the day you are creating a mirror.
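
    For what it's worth, this is exactly what `zpool attach` does: attaching a disk to a non-redundant (single-disk) vdev converts it into a mirror and starts a resilver. A sketch with placeholder pool and disk names:

```shell
# Attach a second disk to the existing single-disk vdev; ZFS begins
# resilvering (the "second copy" mentioned above) automatically
zpool attach tank ata-NEW8TB-1 ata-NEW8TB-2

# Watch the resilver progress until it completes
zpool status tank
```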

    You are right, the data will be moved around quite a bit, and I expect it to take a very long time until the system reaches its final state. My major concerns are whether I have forgotten something, and avoiding downtime.


    • Official post

    If you have free bays on your second server you could do it another way:

    - Create the ZFS pool on the second server and copy the data.

    - Export the pool and remove the drives.

    - Remove the old HP Gen 8 drives.

    - Install the two 8TB drives in HP Gen 8.

    - Import the pool.

    - Reconfigure services.

    Downtime would be minimal.
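
    The export/import steps above might be sketched like this (the pool name `tank` is a placeholder):

```shell
# On the second server, once the copy has finished:
zpool export tank

# ...physically move the two 8 TB drives to the HP Gen 8, then on it:
zpool import tank    # scans attached disks for pool labels

# If the pool was not exported cleanly, the import must be forced:
# zpool import -f tank
```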

    OK, I've made one step forward: I freed up two bays on my other host and added two 8 TB drives.


    Next question: what properties need to be set on the new pool / filesystem before continuing?


    I did some reading about ZFS, but came to no conclusion about which flags are needed or good to have.


    • Official post

    If you are going to use ACL permissions you need this:

    acltype=posixacl

    xattr=sa

    Also look at the sector size of your disks; it can be 512 B or 4 KB. If memory serves me correctly, this is set at pool creation in ZFS as well (the ashift property).

    Compression is worth taking into account as well, but personally I prefer to set it per file system. Depending on the content, it may or may not be of interest to you.
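
    Putting those properties together, pool creation might look like this. Pool and disk names are placeholders, and lz4 compression is shown only as a common default choice; ashift must be chosen at creation time and cannot be changed later, while the dataset properties can also be set afterwards.

```shell
# Create a mirrored pool with 4K-sector alignment, POSIX ACLs,
# xattrs stored in inodes (faster for ACLs), and lz4 compression
zpool create -o ashift=12 \
  -O acltype=posixacl -O xattr=sa -O compression=lz4 \
  tank mirror ata-NEW8TB-1 ata-NEW8TB-2

# The dataset properties can also be set on an existing filesystem:
zfs set acltype=posixacl tank
zfs set xattr=sa tank
```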

    Thanks chente. ashift is set to 12 (4K sectors), which is correct for the drives.

    I will set the POSIX ACL attributes.


    • Official post

    The downside is that ZFS is not yet very flexible regarding expansion procedures.

    Not true my friend, at least in my opinion. ZFS can easily be expanded by adding another vdev, which can be a single disk.

    I believe it's better to add mirrors to a pool, but ZFS will allow any kind of Vdev, a mirror, RAIDZ1, a single disk, etc., to be added to an existing pool.

    In the following, the pool is made up of a mirror (RAID1 equivalent), a RAIDZ1 (RAID5 equivalent), and, at the bottom, a basic volume (a single disk). I did have to use the force option to add the single disk.

    This was done in three separate steps, in the GUI:
    Create a mirror, then expand by adding a RAIDZ1, then expand again by adding a basic volume.



    What ZFS won't do is allow any kind of pool contraction or vdev removal. (For this reason, using a single disk as a vdev is a really bad idea.)

    BTRFS allows an array to be downsized, as in actually shrinking the filesystem and removing devices, but I've never tested it.
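
    Reproduced on the command line, those three GUI steps might look like this (device names are placeholders; depending on the ZFS version, mixing vdev types may also require the force flag for the RAIDZ1 step):

```shell
# 1. Create the pool with a mirror vdev
zpool create tank mirror sda sdb

# 2. Expand by adding a RAIDZ1 vdev
zpool add tank raidz1 sdc sdd sde

# 3. Expand again with a basic (single-disk) vdev; -f is needed
#    because the replication level doesn't match the existing vdevs
zpool add -f tank sdf

# Note: none of these top-level vdevs can be removed from the pool later
```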

    _____________________________________________________________________________


    Questions:
    1. Is ZFS the way to go for my needs?
    2. I have a lot of hard links (rsnapshot backups). Does ZFS cope with that?
    3. Is the plan feasible in your opinion, or does it need modification?


    1. I think you can do it with one modification.
    - If you have the room in the case (or could temporarily cable in two drives lying beside the case) I would add a second pool, using the larger 8 TB disks in a mirror.
    - Use rsync to recreate the entire structure of the existing array on the new 8 TB mirror.
    - You wouldn't have to reconfigure OMV. Just redirect the shared folder (music, for example) to the recreated music folder / device on the new array. The stuff layered onto the shared folder, like a Samba share, will follow.

    Once the rsync job is finished, the shared folders are redirected, and the Samba shares are tested, the older disks can be removed without consequence. After that, the old disks could be wiped, used as a backup pool, or used in another device.

    2. I believe it will. If these hard links exist in the current array, the new array should be fine. ZFS did have issues with overlayfs (used by Dockers) a few years ago, but I believe that was solved in later pool versions.

    3. You could give your "single disk to a mirror" upgrade a try. It worked fine for me, but it's a command-line operation.

    I did the following in a VM:

    zpool attach poolname existinghdd blankhdd

    zpool attach ZFS1 ata-VBOX_HARDDISK_VB5ee42a80-ac4d8f53 /dev/sdd


    (Even if it's a bit risky, as shown above, a plain device name will work as well. :) )

    The above upgraded the single drive to a mirror.
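
    The result can be checked afterwards; after the attach, the pool should show a single mirror-0 vdev containing both disks, and the status output also reports resilver progress or completion:

```shell
# Pool name taken from the example above
zpool status ZFS1
```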

    • Official post

    Not true my friend, at least in my opinion. ZFS can easily be expanded by adding another Vdev which can be a single disk.

    I believe it's better to add mirrors to a pool, but ZFS will allow any kind of Vdev, a mirror, RAIDZ1, a single disk, etc., to be added to an existing pool.

    You are right, you can add a vdev to an existing pool. But in my opinion this is not a real expansion; you could call it an extension, perhaps. What I consider an expansion is adding a disk to a mirror vdev and turning it into a 3-disk RAIDZ1 vdev, or adding a disk to a 3-disk RAIDZ1 vdev and converting it to a 4-disk RAIDZ1 vdev. Currently ZFS cannot do this. This is what I mean when I say that this functionality is in development; it is expected to be released this year (2022), or so I read a long time ago. ZFS is a developing technology. mdadm can do this, if I'm not mistaken. I don't know whether BTRFS can, but I lost interest in BTRFS when I saw the problems it has with the equivalent of mdadm's RAID5 or ZFS's RAIDZ1.

    On the other hand, it is not recommended in ZFS to add a RAIDZ1 vdev to a pool that already has a mirror vdev; the recommendation is to use vdevs of the same type within each pool. It can be done, but it is not the best option. I couldn't say the reasons; I also read this a long time ago and don't remember, though I suppose Google will have the answer, as always. I suspect that the resulting pool is limited as a whole by the worst characteristics of each individual vdev. It's something I wouldn't do; in this case I would prefer to create a second pool.

    That is why I said at the beginning that ZFS is still a bit rigid in its expandability. In Zoki's case there will be a mirror in a 4-bay box. He simply won't be able to add a single disk to his existing pool in the future. His best option will be to add another two disks and create another mirror vdev, which in this case could be added to the existing pool, giving two mirrors within it. Unless the real expansion method has been released by then and he can turn his mirror into a RAIDZ1; then yes, he could add a single disk.

    - If you have the room in the case (or could temporarily cable in two drives laying beside the case) I would add a second pool, using the larger 8TB disks in a mirror.
    - Use rsync to recreate the entire structure of the existing ZFS array to the new 8TB mirror.
    - You wouldn't have to reconfigure OMV. Just redirect the shared folder (music for example) to the recreated music folder / device on the new array. The stuff layered onto the shared folder, like a samba share, will follow.

    Zoki cannot add two ZFS mirrored disks to his current box. In his first post Zoki said he has a box with 4 bays and all 4 are occupied. Zoki's initial plan was to degrade the existing RAID to free up a bay, add a disk as a ZFS vdev, copy the data, remove the 3 remaining disks, add another disk, and mirror the vdev. Read the first post. That is why I suggested creating the pool on another server, then exporting and importing. It seemed like an easier way.

    • Official post

    Zoki cannot add two ZFS mirrored disks to his current box. In his first post Zoki said he has a box with 4 bays and all 4 are occupied.

    If he has two additional SATA ports (or even one) on the motherboard, one or two drives can be cabled to the motherboard, lying or even hanging out of the side of the case. One has to be careful not to bump the drives while power is applied, but it works fine for temporary use. I've done it before.

    The second choice, degrading the array to open up a slot, is possible, combined with the option of upgrading the single disk to a mirror afterwards. The command-line VM exercise was the first time I tried that; it worked without any issues.
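
    For reference, degrading the array outside the OMV UI would look roughly like this with mdadm (array and device names are placeholders; the array keeps running, but with no redundancy until the migration is done):

```shell
# Mark one member as failed, then remove it from the array
mdadm --manage /dev/md0 --fail /dev/sdd
mdadm --manage /dev/md0 --remove /dev/sdd

# Verify: the array should now report itself as clean, degraded
cat /proc/mdstat
```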

    ____________________________________________________________________

    As far as ZFS expansion is concerned, I suppose it's how you look at it. Upgrading ZFS is a bit more rigid than traditional RAID or LVM, but I believe the benefits are worth it. It's just a matter of opinion; really, there's no right or wrong.

    Thanks, I tried what chente proposed, but reality struck back. The rsync process for the whole drive takes forever to complete, even the second time with almost no files changed.

    So I have to do the migration share by share, which means all the disks have to be connected to one server.


    I will check for an additional port in the server, but I will need an additional SAS to 4× SATA cable.


    The last resort will be degrading the array and keeping my fingers crossed.


    Btw, I grew up with the command line, no problem with that.


    Yesterday I got my new SAS 8087 to 4× SATA cable and plugged it into my MicroServer, which is open now and has a stack of drives lying next to it. Now I can start the migration the easy way: installing ZFS, then rsyncing and migrating each share step by step.


    Less than 10 minutes of downtime for plugging in power and the SAS-SATA cable and rebooting the server.


    Thanks to all.


    • Official post

    Yesterday I got my new SAS 8087 to 4× SATA cable and plugged it into my MicroServer, which is open now and has a stack of drives lying next to it

    That's cheating. :)

  • That's cheating. :)

    It feels a bit like that, but there are more than 20 devices/servers continuously writing data and backups to it, and a few users who complain.


    Now I need to find a decent case for the drives instead of keeping them in a stack.

