Extending the filesystem to newly added space on RAID 5

  • Hello all,


    I need some help in extending a RAID 5 btrfs filesystem.

    I had a RAID 5 setup on 4 X 2TB disks.

    I added two additional 2TB disks, wiped them, and using the GUI I grew the RAID 5.

    Original size was 8TB and now the size shows as 13.64TB.
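
    For reference, I believe the GUI grow did something roughly equivalent to the commands below (a sketch only; I think the two new disks ended up as /dev/sdb and /dev/sdc, and I have not checked exactly what OMV runs under the hood):

    Code
    mdadm /dev/md0 --add /dev/sdb /dev/sdc    # add the two new disks to the array
    mdadm --grow /dev/md0 --raid-devices=6    # reshape the RAID 5 from 4 to 6 devices
    cat /proc/mdstat                          # watch the reshape progress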

    I have read many posts on extending the filesystem from 8TB to 13TB, but without success. Here is the mdadm --detail output for the array:


    Version : 1.2

    Creation Time : Tue Mar 16 10:52:51 2021

    Raid Level : raid5

    Array Size : 14650682880 (13971.98 GiB 15002.30 GB)

    Used Dev Size : 2930136576 (2794.40 GiB 3000.46 GB)

    Raid Devices : 6

    Total Devices : 6

    Persistence : Superblock is persistent


    Intent Bitmap : Internal


    Update Time : Sun Apr 18 07:37:12 2021

    State : clean

    Active Devices : 6

    Working Devices : 6

    Failed Devices : 0

    Spare Devices : 0


    Layout : left-symmetric

    Chunk Size : 512K


    Consistency Policy : bitmap


    Name : openmediavault:MyRAID5

    UUID : 1134c51d:9501000a:e6dd1b1a:bfab5bb9

    Events : 50349


    Number Major Minor RaidDevice State

    0 8 48 0 active sync /dev/sdd

    1 8 64 1 active sync /dev/sde

    2 8 80 2 active sync /dev/sdf

    3 8 96 3 active sync /dev/sdg

    5 8 32 4 active sync /dev/sdc

    6 8 16 5 active sync /dev/sdb


    I have tried:


    Code
    mdadm --grow /dev/md0 --bitmap none
    mdadm --grow /dev/md0 --size=max


    Nothing seems to work and the filesystem is still at 8TB.


    Code
    /dev/md0 8790402048 4701954984 4087948824 54% /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5
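
    The mismatch is easy to see by comparing what the array reports with what the filesystem reports (the mountpoint is the one from the df output above):

    Code
    mdadm --detail /dev/md0 | grep 'Array Size'                 # size of the grown array
    btrfs filesystem show /dev/md0                               # size btrfs thinks it has
    df -h /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5     # what is actually usable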


    Any help appreciated.


    Sridhar

  • That is because mdadm is not 'aware' of the btrfs file system; do a Google search on how to expand a btrfs filesystem.

    I know that one of the file system options when using mdadm is btrfs. This is what

    Code
    /dev/md0       8790402048 4701954984 4087948824  54% /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5

    indicates to me. Using the command line, btrfs is simple to expand/change when mdadm is not used. I am not sure whether that applies when OMV puts btrfs on top of mdadm.
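
    For comparison, on a native btrfs volume (no mdadm underneath), adding space would look roughly like this (a sketch; /dev/sdX and the mountpoint are placeholders):

    Code
    btrfs device add /dev/sdX /srv/my-btrfs-mount    # hand the new disk directly to btrfs
    btrfs balance start /srv/my-btrfs-mount          # optional: spread existing data onto it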


    If btrfs on mdadm really does complicate things, I would personally start over and not use mdadm at all, but I have 3 backups of all of my data, so for me it is less of a risk.

    • Official post

    Using the command line, btrfs is simple to expand/change when mdadm is not used. I am not sure whether that applies when OMV puts btrfs on top of mdadm.

    It is. If you were to create a btrfs raid it would have to be done from the CLI, but it would appear in OMV's raid management. So your command mdadm --grow /dev/md0 --size=max will not work, as that is the command you would use with ext4, and as I said mdadm is not aware of the file system.

    In the last 6 months I have dealt with 2 issues relating to mdadm and file systems other than ext4: one was btrfs and the other xfs, and both required commands specific to that file system.
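
    Roughly, the grow step per file system looks like this (a sketch, assuming the md array itself has already been grown; the mountpoints are placeholders):

    Code
    resize2fs /dev/md0                                   # ext4: can be run against the device
    xfs_growfs /srv/your-xfs-mount                       # xfs: run against the mounted path
    btrfs filesystem resize max /srv/your-btrfs-mount    # btrfs: also run against the mounted path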


    If you look at the likes of Synology and, I think, Qnap, they use btrfs on mdadm but they add an LVM layer in between; it works, but it's confusing.
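
    If I understand that layout right, growing such a stack needs an extra LVM step in the middle, something like this (a sketch; the VG/LV names and mountpoint are made up):

    Code
    pvresize /dev/md0                        # let the physical volume see the grown array
    lvextend -l +100%FREE /dev/vg0/lv0       # grow the logical volume
    btrfs filesystem resize max /mnt/data    # then grow the filesystem itself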

  • I have successfully extended the btrfs file system to use the max available disk space.

    I mounted /dev/md0 on /mnt

    Ran the command:

    Code
    btrfs filesystem resize max /mnt

    Then unmounted /mnt.
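
    So for anyone else hitting this, the whole fix boils down to (using /mnt as a temporary mountpoint):

    Code
    mount /dev/md0 /mnt                 # mount the array temporarily
    btrfs filesystem resize max /mnt    # grow btrfs to fill the grown array
    umount /mnt                         # unmount again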


    Now I see the following:


    Code
    /dev/md0 14650682880 4703136740 9947049212 33% /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5


    Previously it was:


    Code
    /dev/md0 8790402048 4701954984 4087948824 54% /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5


    The GUI also shows the increased space on the file system.
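
    The new size can also be confirmed from the command line (using the mountpoint from the df output above):

    Code
    btrfs filesystem usage /srv/dev-disk-by-id-md-name-openmediavault-MyRAID5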


    Thank you. :)
