Increasing Raid1 filesystem size

  • So I had an asymmetrical RAID1 as a basic file server. One drive was 0.5 TB and the other was 2 TB. I just replaced the 0.5 TB drive with another 2 TB one, and I'm trying to resize the filesystem to match the new size.


    My issue is functionally identical to the one in this thread, so I followed the same instructions, i.e. forcing mdadm to resync to the maximum capacity. However, now that it's completed, the web client is still showing the former size, and the thread essentially stops after that command. What do I need to do to get it to the right size? My guess was to SSH into the system and run parted, but I figured there's a better way to do this.
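
    For reference, the resync-to-max step from that thread is usually something along these lines (the array name /dev/md0 is an assumption; check /proc/mdstat for yours):

        # grow the array to use all space on the new, larger members
        sudo mdadm --grow /dev/md0 --size=max
        # watch the resync progress until it finishes
        cat /proc/mdstat

    Note this only grows the md array itself; anything layered on top (LVM, LUKS, the filesystem) still has to be grown separately afterwards.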


    EDIT: After looking around on the forum, it looks like using RAID1 was a bad idea from the get-go (or at least should have been migrated at some point over the past couple of years). Since I have the old drive and no new data has been added, could I create a btrfs pool with the two 2 TB drives, plug the old drive into my desktop, and copy everything over?

  • For the filesystem, and thanks for the reply. I just tried that, but when I click the resize button, it shows the warning, refreshes the list of filesystems a couple of times, and nothing changes.

    Assuming it is ext4, you could try resize2fs from the command line.
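
    If you're not sure what the filesystem actually is, you can check before picking a resize tool (the device name /dev/dm-0 is an assumption; match it to your setup):

        # show filesystem types for all block devices
        lsblk -f
        # or query one device directly
        sudo blkid /dev/dm-0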

    omv 5.5.9 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.3.6
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • Yeah, it is ext4, and here's the output:
    $ sudo umount /dev/dm-0
    [sudo] password for laptop:
    $ fsck -f /dev/dm-0
    -dash: 6: fsck: not found
    $ resize2fs /dev/dm-0
    -dash: 7: resize2fs: not found
    $ sudo resize2fs /dev/dm-0
    resize2fs 1.43.3 (04-Sep-2016)
    Please run 'e2fsck -f /dev/dm-0' first.

    $ sudo e2fsck -f /dev/dm-0
    e2fsck 1.43.3 (04-Sep-2016)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    Clementine: 222434/30523392 files (0.8% non-contiguous), 119426708/122063360 blocks
    $ sudo resize2fs /dev/dm-0
    resize2fs 1.43.3 (04-Sep-2016)
    The filesystem is already 122063360 (4k) blocks long. Nothing to do!

  • I should have noticed you were working with a device-mapper device, which meant it was either encrypted or using LVM. You have resized the array and the device, but now need to resize the LUKS container with cryptsetup resize. Then you can resize the filesystem.
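
    The rough order of operations, assuming the open LUKS mapping is /dev/dm-0 sitting on top of /dev/md0 (adjust names to your setup):

        # 1. the md array has already been grown:
        #    sudo mdadm --grow /dev/md0 --size=max
        # 2. grow the open LUKS mapping to fill the resized array
        sudo cryptsetup resize /dev/dm-0
        # 3. check and then grow the ext4 filesystem inside it
        sudo e2fsck -f /dev/dm-0
        sudo resize2fs /dev/dm-0

    Each layer only sees the size of the layer beneath it, which is why resize2fs reported "Nothing to do" before the LUKS container was grown.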


  • Alright, I ran cryptsetup resize /dev/dm-0, and after it completed, I went to the web GUI and ran the resize command. It's now showing the server at full size.
    As for the other question about converting to a btrfs system, should I post in a new thread?

  • As for the other question about converting to a btrfs system, should I post in a new thread?

    You don't have to. It would be easy to convert the filesystem to btrfs, but to actually use btrfs for pooling (replacing mdadm), I am pretty sure you would have to wipe the drives.
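
    If you do wipe the drives, creating a native btrfs mirror is a one-liner (the device names below are assumptions, and this destroys everything on both drives, so double-check them first):

        # stop and remove the old md array first
        sudo mdadm --stop /dev/md0
        # create a btrfs RAID1 across both 2 TB drives
        # (-d raid1 mirrors data, -m raid1 mirrors metadata)
        sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc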


  • I haven't added anything new to the drives, so the old drive should have all of my data. Can I just copy from it?


    Sure. And 4 GB of RAM is sufficient to serve hundreds of users regardless of the filesystem chosen (RAM in file servers is mostly used for filesystem buffers and caches, which speed things up in some environments but usually not at home). You only need to worry about memory constraints once you want to use deduplication, which is something I would not do with btrfs and OMV3 anyway, since all the btrfs code lives inside the kernel and you are most probably running an outdated kernel there.

  • Is there any configuration I'd need to do for the bulk of the data preservation benefits?


    I would do the following


    • if you stay on OMV3 (why not upgrade to OMV4?), then at least install the backports kernel if you're still on kernel 3.x (btrfs code lives inside the kernel, so the higher the kernel version, the better)
    • I would then transfer the data after creating the btrfs mirror, simply using rsync -a
    • Since you'll then have plenty of free disk space and most probably only do normal things (NAS use case), you don't need to worry about anything else. With a full filesystem and tons of small files added in a very short time, you can run into the ugly situation that btrfs cannot allocate more metadata space (something I ran into recently when doing performance tests). If you want to fill your filesystem completely, or constantly take snapshots without deleting older ones, better read this already now.
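
    The rsync transfer mentioned above could look like this (the mount points are assumptions; substitute wherever the old drive and the new mirror are mounted):

        # copy everything, preserving permissions, ownership, timestamps and symlinks
        # (the trailing slash on the source copies its contents, not the directory itself)
        sudo rsync -a /mnt/olddisk/ /mnt/btrfsmirror/

    Adding --progress or -v lets you watch it work, and re-running the same command after an interruption only copies what is missing.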
    • why not upgrade to OMV4?

    I haven't needed to until now, and while I know dist-upgrade is pretty stable, I've had bad experiences with it in the past, and I wanted to make sure I had deleted everything I needed to before upgrading. I'll most likely upgrade after moving everything around.


    Otherwise, thanks for all of the info. I'll poke around and try to get this set up when I have a spare minute.

  • How do I go about backing up? Just an rsync or is there a dedicated tool?

    Can't help here since I don't use OMV on x86 hardware; on all my ARM boxes it's just shutting the server down and then cloning the SD card or USB pendrive the server runs from with ddrescue (in fact I do incremental backups on my own system, but that gets too complicated to cover here).
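
    For what it's worth, a ddrescue clone of the boot medium along the lines described above would be roughly this (device and file names are assumptions; the source must not be mounted while cloning):

        # clone the boot medium to an image file;
        # the log file makes the copy resumable if it is interrupted
        sudo ddrescue -d /dev/sdX backup.img backup.log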
