ZFS Raid-Z2 Expansion

  • Hi there.


    I currently have (4) equal-capacity drives set up as two mirrored vdevs in a single pool. I want to expand my storage so I purchased (2) more drives of the same capacity.


    I could simply create another mirrored vdev and expand the pool, but with (6) drives now available, I am thinking of transitioning to RAID-Z2 to gain ~30% more capacity. I read that newer versions of ZFS allow expansion of RAID-Z vdevs, so I was considering backing up my data to the two new drives, creating a (4)-disk RAID-Z2 vdev from the old drives, copying all the old data onto the new Z2 vdev, and then expanding that vdev with the (2) remaining drives.
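    Roughly, that plan in commands. Pool names and /dev paths below are made up, so adjust them to your actual disks; note the temporary striped pool has no redundancy, so the old pool is the only other copy during the move:

```shell
# 1. Temporary backup pool on the two new drives (striped, no redundancy!)
zpool create temppool /dev/sde /dev/sdf
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive temppool/backup

# 2. Destroy the old mirrored pool and build a 4-disk RAID-Z2 from those drives
zpool destroy oldpool
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# 3. Restore the data, then free the two temp drives for the expansion
zfs send -R temppool/backup@migrate | zfs receive tank/data
zpool destroy temppool
zpool attach tank raidz2-0 /dev/sde
zpool attach tank raidz2-0 /dev/sdf   # only after the first expansion completes
```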


    Is the expand vdev function available in openmediavault-zfs 7.1.4? I'm definitely a beginner in all this so any help/guidance is appreciated. Thanks.

  • It looks like the openmediavault-zfs 7.1.4 plugin is running zfs 2.3.1 (at least on my setup).


    I'll give the operation a try. Thank you, Krisbee.

  • It's not the plugin version that determines the OpenZFS version. But anyway, don't cut corners: attach the new drives one at a time to your RAID-Z2 pool once it's loaded with data, and perform a scrub after each attach. It can be a lengthy process depending on your drive size and used pool capacity.
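    A sketch of that attach-and-scrub loop, with assumed pool/vdev/device names (take the raidz2 vdev name from zpool status):

```shell
# Add one new disk to the existing raidz2 vdev
zpool attach tank raidz2-0 /dev/sde
# Block until the expansion finishes (or poll with `zpool status`)
zpool wait -t raidz_expand tank
# Verify everything before adding the next disk
zpool scrub tank
zpool status -v tank
```

    Repeat for the second drive only after the first expansion and scrub complete cleanly.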


    I should point out that there is no re-balancing during a raidz expansion, and there are issues with capacity reporting post-expansion: the old data + parity is still spread across 4 drives, whereas newly added data + parity is spread across 6 drives.


    There's a long thread about that here and a more recent thread here. OpenZFS 2.4.0 introduces a "zfs rewrite" command which partly resolves the issue; in the meantime there is also this script: https://github.com/markusressel/zfs-inplace-rebalancing (use at your own risk).
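    If you do end up on OpenZFS 2.4.0 later, my understanding is the rewrite pass is a one-liner over the mounted dataset; verify the flags against the man page first, as this is only a sketch:

```shell
# Re-write existing blocks in place so old data picks up the new 6-wide stripe
# (-r recurses into the directory tree; requires OpenZFS >= 2.4.0)
zfs rewrite -r /tank
```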

    Oh, thank you for the clarification. I suppose that's why zfs -V is the way to check.


    I will attach one drive at a time to the Z2 vdev. I certainly don't want to cut any corners, as data loss during this transition would be a real bummer.


    I haven't performed a scrub before, at least not manually - can you ELI5 why that's important?

  • Just double check that you start from a healthy pool when you do the expansion with zpool status -x.
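    For example (the pool name "tank" is a placeholder):

```shell
# Prints "all pools are healthy" and says nothing more when all is well
zpool status -x
# Full per-device detail for one pool, including read/write/checksum errors
zpool status -v tank
```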


    Actually, a separate scrub after each expansion is not necessary; I checked, and a scrub is automatically run following an expansion.



    If you want to dry-run your attach commands, just use the "-n" option. As you're destroying and re-creating a brand-new 4-drive RAID-Z2 to begin with, it should automatically have the new raidz-expansion feature. You can check that with: zpool get all | grep expansion
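    Putting those two checks together (pool and device names are placeholders):

```shell
# Dry-run the attach: shows what would happen without modifying the pool
zpool attach -n tank raidz2-0 /dev/sde
# Confirm the raidz expansion feature on the pool (should be enabled or active)
zpool get feature@raidz_expansion tank
```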

  • This is great, thank you.


    I have created a temporary basic pool with the two new drives and am currently copying everything from the existing mirrored pool to the temp pool. Should finish up in a couple days, then I'll move on to creating a new Z2 vdev with the old drives.


    Is there any way of confirming the data integrity of this initial backup to the temp pool? I can open a few things to see if it appears to be working, but I was curious if there's a more technical approach. Is that even something I should be concerned about?
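    Beyond spot-checking, a scrub on the temp pool (zpool scrub temppool) verifies every block against its ZFS checksum, and you can also compare the two trees file-by-file. A rough sketch; the pool paths in the usage comment are examples:

```shell
# Hash every regular file under a directory, keyed by its relative path,
# so two trees can be compared with a plain diff.
hash_tree() {
  (cd "$1" && find . -type f -print0 | sort -z | xargs -0 sha256sum)
}

# Usage against the real mountpoints (example paths):
#   hash_tree /oldpool/data    > /tmp/src.sha
#   hash_tree /temppool/backup > /tmp/dst.sha
#   diff /tmp/src.sha /tmp/dst.sha && echo "trees match"
```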

  • Update:


    After reading about the capacity reporting issues with raidz expansion, I decided to take the easy way out and just add another mirrored vdev to the pool. It seems like the feature needs more time in the oven to be flawless, and I want a setup that reports capacity correctly.


    Again, I really appreciate you taking the time to assist. It was still a fun process along the way.
