ext4 16TiB limit

  • I'm getting the following error when I try to resize my ext4 filesystem after growing a four-disk RAID 6 array (5.41TiB after parity) to eight disks (16.4TiB after parity):


    Quote

    "Failed to grow the filesystem '/dev/md0': resize2fs 1.42.5 (29-Jul-2012) resize2fs: New size too large to be expressed in 32 bits"


    This appears to be an issue where the ext4 filesystem was created as a 32-bit filesystem and I'm trying to expand it beyond what can be addressed with a 32-bit filesystem. I'm not the first to have come across this issue, which appeared to have been fixed back in OMV 1.8. After doing some additional reading, it seems that while ext4 itself has supported volumes larger than 16TiB for some time, the e2fsprogs tools didn't until version 1.42. I have version 1.42.5 of the tools, however. (And for what it's worth, switching to the backported 3.16 kernel didn't help.)
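
    For what it's worth, here's how to check whether the 64bit feature flag is actually set on the filesystem (assuming /dev/md0 is the device, as in my case):


    Code

    # Print just the superblock header and look at the feature list;
    # a filesystem that can grow past 16TiB must list "64bit" here.
    dumpe2fs -h /dev/md0 | grep -i 'filesystem features'


    If 64bit is missing from that list, the filesystem was created 32-bit, which fits the error above.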


    Some information about my system:


    I've got a very recent install of 64-bit OpenMediaVault 2.1.19 running on ESXi 6.0. I'm passing through an LSI SAS 9207-8i, which is connected to eight 3TB disks. I created the filesystem after completely updating OMV, so all of the versions and settings below have been in place from the start:


    Quote

    root@omv:~# uname -a
    Linux omv 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u6 x86_64 GNU/Linux


    Quote

    root@omv:~# apt-cache show e2fsprogs
    Package: e2fsprogs
    Version: 1.42.5-1.1+deb7u1
    [...]



    Ultimately, my question is this: is there any hope of migrating my existing 32-bit filesystem to a 64-bit filesystem (ext4 or XFS or whatever), or am I going to need to rebuild the filesystem from scratch and re-copy the 5TiB of data back over? While the latter is a possibility, it's one I'm loath to do because the hardware limitations in my system make it extremely difficult; they're the primary reason I started with four disks and expanded to eight in the first place.


    Any help would be greatly appreciated.

    • Official post

    You can reduce the array, but not from the web interface, and it is risky. I think you could convert to btrfs to beat the 16TiB limit.


  • I'm not afraid of the command line, so no worries there. I figured any solution to this problem (if any) would be fairly complex and take place outside of the OMV admin UI.


    Are there any major disadvantages to using btrfs in OMV? I know its on-disk format is stable at this point, so I'm not concerned about stability or anything like that. But one of the things that drew me to OMV was its appliance-like nature. Would managing shared folders and all of OMV's other features continue to work without issue? While I generally wouldn't expect a filesystem format to interfere with such things, I know that btrfs has its fingers in more pies than your average filesystem, and I don't know quite enough about it to say for sure one way or the other.

  • I started to reply to this this morning, but then the forum went down and now it seems some posts might be missing...


    Anyway, if I understand right, you plan to create a btrfs filesystem on top of an existing RAID device (either mdadm or hardware from your controller card)?
    This will work just fine; in fact, since the RAID device presents as a single block device, you won't have any problems with incorrect sizes or whatever in the WebGUI (those issues are largely just cosmetic anyway).
    I think you will also lose the ability to manage user quotas via the WebGUI, as this is not supported for btrfs.


    Yes, a 3.16 kernel and 3.17 btrfs-tools would be best (better still a 4.2 kernel), but you shouldn't have any problems even with the base 3.2 kernel, as you will only be using the single data profile.
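
    If you want to double-check the allocation profiles once the converted filesystem is mounted, something like this should do it (the mountpoint is just a placeholder):


    Code

    # Shows space usage per allocation profile; after an ext4 conversion
    # you should see "Data, single" plus DUP (or similar) for metadata.
    btrfs filesystem df /media/<your-uuid>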


    The only other thing to watch out for is subvolumes: if you create these manually from the command line and then use the path when creating a shared folder in the WebGUI, you won't be able to delete that shared folder's directory from the WebGUI later, as it will try to use rmdir, whereas subvolumes can only be deleted via the btrfs utility.
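
    To illustrate the difference (paths are just placeholders):


    Code

    # List any subvolumes below the mountpoint...
    btrfs subvolume list /media/<your-uuid>
    # ...and delete one; rmdir would fail on it.
    btrfs subvolume delete /media/<your-uuid>/<subvolume>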


    However, using btrfs on top of regular RAID is not the best use of btrfs, as you will be forgoing some of the benefits it can offer when the RAID is done within btrfs, such as recovery from checksum errors (this is one of the features that works even though no other btrfs control is surfaced in the WebGUI).
    Having said that, RAID5/6 in btrfs requires kernel 3.19 or later, I think, and is possibly still not production stable.
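
    For reference, the checksum checking I mean is scrub, which you can still run manually even though the WebGUI doesn't surface it (mountpoint is a placeholder):


    Code

    # Read every allocated block and verify checksums;
    # -B runs in the foreground instead of backgrounding.
    btrfs scrub start -B /media/<your-uuid>
    # Check progress/results of a backgrounded scrub:
    btrfs scrub status /media/<your-uuid>


    Note that on a single device over mdadm, scrub can detect data corruption but has no second copy to repair it from; that's the benefit you give up versus btrfs-native RAID.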

  • I started to reply to this this morning, but then the forum went down and now it seems some posts might be missing...


    Yeah, weird. But before the posts disappeared, I did manage to see that @subzero79 had expressed interest in the outcome of this process, so I'll be posting the results right after this.



    Anyway, if I understand right, you plan to create a btrfs filesystem on top of an existing RAID device (either mdadm or hardware from your controller card)?


    Correct. This is an mdadm RAID 6 array running on top of an LSI 9207-8i HBA. I'm converting an existing ext4 filesystem that either didn't have the "64bit" flag set or that the e2fsprogs tools didn't want to treat as 64-bit. Either way, it wouldn't let me resize the filesystem to fill the newly grown 16.37TiB partition.



    This will work just fine; in fact, since the RAID device presents as a single block device, you won't have any problems with incorrect sizes or whatever in the WebGUI (those issues are largely just cosmetic anyway).


    Indeed it has! Details to follow...



    I think you will also lose the ability to manage user quotas via the WebGUI, as this is not supported for btrfs.


    Lucky for me, then, that I wasn't ever planning to use quotas.



    The only other thing to watch out for is subvolumes: if you create these manually from the command line and then use the path when creating a shared folder in the WebGUI, you won't be able to delete that shared folder's directory from the WebGUI later, as it will try to use rmdir, whereas subvolumes can only be deleted via the btrfs utility.


    I won't be using subvolumes. The only subvolume on the volume is the one left over from the conversion from ext4. I'll be removing that as soon as I'm satisfied that everything is where I expect it to be.
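
    When the time comes, dropping it should be a one-liner (path matches my mountpoint):


    Code

    # The ext2_saved subvolume holds the original ext4 image; deleting
    # it makes the conversion permanent and frees the space it pins.
    btrfs subvolume delete /media/b53b4a31-3d47-445c-a882-bd36390d9653/ext2_saved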


    I plan to use plain old folders for shared folders. I just want a big, dumb block of storage on which to dump files in meticulously organized folders. Which leads into your next point:



    However, using btrfs on top of regular RAID is not the best use of btrfs, as you will be forgoing some of the benefits it can offer when the RAID is done within btrfs, such as recovery from checksum errors (this is one of the features that works even though no other btrfs control is surfaced in the WebGUI).


    You're absolutely right. I'm using a nuke to kill a gnat. I mostly chose the Btrfs route because it could convert from ext4 in-place and could be resized without worrying about 32-bit constraints. It's absolutely overkill, but it also solves the problem in the least painful way of the options I have available to me.



    Having said that, RAID5/6 in btrfs requires kernel 3.19 or later, I think, and is possibly still not production stable.


    Everything I've read about Btrfs' RAID56 says that it's absolutely not ready for "production" use, even if "production" in this case mostly means storing hundreds of rips from my Blu-ray collection. That's why I opted for mdadm and (initially) ext4. When Btrfs RAID and its associated tools become "stable," I'll likely switch over. But from what I can tell, we're probably still a year or more from that point and, well, I need more storage today. :P

  • Here's a brief (for me) recap of the process and the results. First I went through the grueling process of removing shared folders and properly unmounting the filesystem through the OMV UI because a normal umount didn't work. Next, converting the filesystem:


    Quote

    btrfs-convert /dev/md0 &


    Next, wait 10-12 hours. ;)
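
    For anyone repeating this, the verification step can be as simple as the following (the exact command name depends on your btrfs-tools version):


    Code

    # Check the still-unmounted filesystem ("btrfs check" on newer
    # tools, "btrfsck" on older ones like mine):
    btrfsck /dev/md0
    # Then mount read-only somewhere temporary and eyeball the data:
    mount -o ro /dev/md0 /mnt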


    After verifying that the conversion worked, I modified /etc/fstab so I could re-mount the partition with some Btrfs-specific options. But because OMV doesn't seem to track the mount status of anything outside of the

    Quote

    # >>> [openmediavault]
    # <<< [openmediavault]

    section, it wouldn't let me choose the filesystem through the OMV UI for creating shared folders or anything else. So I decided to try a different tack: mount the filesystem with OMV's UI knowing it will use the wrong mount options, then manually modify the managed entry in /etc/fstab and remount the filesystem.


    For the sake of comparison, this is what OMV created in /etc/fstab:


    Quote

    UUID=b53b4a31-3d47-445c-a882-bd36390d9653 /media/b53b4a31-3d47-445c-a882-bd36390d9653 btrfs defaults,nofail 0 2


    And here's what I changed it to:


    Quote

    UUID=b53b4a31-3d47-445c-a882-bd36390d9653 /media/b53b4a31-3d47-445c-a882-bd36390d9653 btrfs autodefrag,compress=lzo,noatime,nodiratime 0 0


    Remounting the filesystem was nothing special:


    Quote

    mount -a


    Worked like a charm. The filesystem is now mounted with the appropriate settings, and I can select it as a shared folder target in the OMV UI.
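
    If you want to confirm the active options took, this prints exactly what the kernel mounted with:


    Code

    # Print only the mount options for the given mountpoint.
    findmnt -n -o OPTIONS /media/b53b4a31-3d47-445c-a882-bd36390d9653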


    The only thing I have to remember, though, is that if I unmount the filesystem in OMV's UI in the future, I'll need to redo this modification when I re-mount it. Considering how unlikely I am to do that once everything is set back up, I'm not too concerned about such a small thing. (And I put a comment in /etc/fstab to remind myself what to do because I will inevitably forget the specifics.)


    And now that I've got the filesystem properly re-mounted, time to resize:


    Quote

    btrfs filesystem resize max /media/b53b4a31-3d47-445c-a882-bd36390d9653


    Two seconds later, and it's ready:


    Quote

    root@omv:~# df -h /dev/md0
    Filesystem Size Used Avail Use% Mounted on
    /dev/md0 17T 4.6T 12T 29% /media/b53b4a31-3d47-445c-a882-bd36390d9653


    OMV's UI properly shows 16.37TiB total in the filesystem with 11.53TiB free.


    I'll run a few tests to make sure everything is still good, but after that I'll drop the ext2_saved subvolume and set all of my shared folders back up. It may have been massive overkill to switch to Btrfs just for the sake of expanding a filesystem, but hey, it seems to have worked and I'm happy with the result. I call this a win! :D

    • Official post

    I definitely call this a win. Any other method would have taken a lot more time :)


  • Yes. Use env vars for that. OMV rewrites fstab even with an NFS share.
    Type locate globals.inc in a terminal and you'll find the file with the filesystem-type variable names for use in /etc/default/openmediavault.


    Rule #1: never modify a file that OMV has control…


    Is this file something that will get overwritten when I update OMV? I'm not getting the vibe that this file is intended for users to modify to suit their needs. Anyway, I found this line in config.inc, which makes it obvious enough what to do:


    Code
    $GLOBALS['OMV_FSTAB_MNTOPS_BTRFS'] = "defaults,nofail";


    But there doesn't seem to be any option for the "pass" value in /etc/fstab. Given that Btrfs doesn't have an fsck utility right now, it's important to set that value to 0, which the config.inc file doesn't seem to be able to account for.


    I had also taken a look through the /etc/openmediavault/config.xml file that @igrnt referenced and found this section:


    Code
    <mntent>
      <uuid>e7c462cb-70ec-415c-96be-0c521a74e011</uuid>
      <fsname>b53b4a31-3d47-445c-a882-bd36390d9653</fsname>
      <dir>/media/b53b4a31-3d47-445c-a882-bd36390d9653</dir>
      <type>btrfs</type>
      <opts>defaults,nofail</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
    </mntent>


    So it seems to have everything I would need to make this change. But is it kosher to modify this file by hand, or does it fall under the rule of not modifying files that OMV controls?


    Sorry for all the questions, but I can't find any documentation on this, and I'd like to do things The Right Way.
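
    For reference, I'm guessing the hand-edited entry would end up looking something like this, with the opts from my fstab edit above and the passno forced to 0:


    Code

    <mntent>
      <uuid>e7c462cb-70ec-415c-96be-0c521a74e011</uuid>
      <fsname>b53b4a31-3d47-445c-a882-bd36390d9653</fsname>
      <dir>/media/b53b4a31-3d47-445c-a882-bd36390d9653</dir>
      <type>btrfs</type>
      <opts>autodefrag,compress=lzo,noatime,nodiratime</opts>
      <freq>0</freq>
      <passno>0</passno>
      <hidden>0</hidden>
    </mntent>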

    • Official post

    For mntent, modifying the globals doesn't help existing entries, so you need to modify those by hand; future entries will not need manual editing. The OMV defaults file will survive upgrades. I generally advise against modifying the config.xml file, but if you make a backup and know what you are doing, it should be OK in this instance.
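
    For example, assuming the variable name from config.inc carries over, the override in /etc/default/openmediavault would be a single line like this (future mounts then pick it up automatically):


    Code

    # /etc/default/openmediavault
    OMV_FSTAB_MNTOPS_BTRFS="autodefrag,compress=lzo,noatime,nodiratime,nofail"


    I would keep nofail in there since OMV includes it by default.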


    • Official post

    The forum hosting company was(is) having problems with the database server. I know a couple of my posts disappeared.

