Unable to grow EXT3 FS after RAID 5 Expansion

    • OMV 1.0
    • Resolved
    • Unable to grow EXT3 FS after RAID 5 Expansion

      Hi folks

      I had a 4 disk RAID 5 array which had a single EXT3 file system on it that housed all my shared folders.

      I recently added a 5th drive to the array, and successfully reshaped the array overnight.
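
      For context, a reshape like this typically boils down to the following (a sketch only; /dev/sdf was the new disk here):

      Source Code

      mdadm --add /dev/md127 /dev/sdf              # add the new disk as a spare
      mdadm --grow /dev/md127 --raid-devices=5     # reshape onto 5 devices (older metadata may need a --backup-file)
      cat /proc/mdstat                             # watch the reshape progress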

      When I try to resize the file system through the GUI, nothing happens after I click through the "do you really want to..." confirmation dialog. The file system remains the same size.

      I am comfortable on the CLI and have a reasonable, if basic, understanding of Linux systems and of RAID in particular.

      I did a tonne of searching around (before posting here) and found a number of things to try, but made no progress.

      I am not sure what system info to post, so I will include what I most commonly saw asked for. I would appreciate any guidance or help.

      System Info:

      Source Code

      ================================================================================
      = OS/Debian information
      ================================================================================
      Distributor ID: debian
      Description: Debian GNU/Linux 7 (wheezy)
      Release: 7.8
      Codename: wheezy
      ================================================================================
      = OpenMediaVault information
      ================================================================================
      Release: 1.16
      Codename: Kralizec


      RAID Info:

      Source Code

      ================================================================================
      = Linux Software RAID
      ================================================================================
      Personalities : [raid6] [raid5] [raid4]
      md127 : active raid5 sdb1[0] sdf[4] sdd1[3] sdc1[2] sde1[1]
            3907035648 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      unused devices: <none>
      --------------------------------------------------------------------------------
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md127 metadata=0.91 UUID=52a29072:c70466de:b369d280:f2f1a372


      Block Devices

      Source Code

      ================================================================================
      = Block device attributes
      ================================================================================
      /dev/sda1: UUID="2af18760-96e3-4d1f-9815-3d5cbe8d3fa3" TYPE="ext4"
      /dev/sda5: UUID="17ba6c11-3a43-4b18-9131-d235c467cb8b" TYPE="swap"
      /dev/sdb1: UUID="52a29072-c704-66de-b369-d280f2f1a372" TYPE="linux_raid_member"
      /dev/sdc1: UUID="52a29072-c704-66de-b369-d280f2f1a372" TYPE="linux_raid_member"
      /dev/sde1: UUID="52a29072-c704-66de-b369-d280f2f1a372" TYPE="linux_raid_member"
      /dev/sdd1: UUID="52a29072-c704-66de-b369-d280f2f1a372" TYPE="linux_raid_member"
      /dev/md127: UUID="XmuRHu-h8Yb-wPSq-5yx8-3oMB-iiY2-j3K8S4" TYPE="LVM2_member"
      /dev/mapper/1tb_sata_raid5-samba_share: UUID="21a9613f-746f-4492-b5cd-5e4f3d2b4341" SEC_TYPE="ext2" TYPE="ext3"
      /dev/sdf: UUID="52a29072-c704-66de-b369-d280f2f1a372" TYPE="linux_raid_member"


      File System

      Source Code

      ================================================================================
      = File system disk space usage
      ================================================================================
      Filesystem Type 1024-blocks Used Available Capacity Mounted on
      rootfs rootfs 476868980 2494684 450150688 1% /
      udev devtmpfs 10240 0 10240 0% /dev
      tmpfs tmpfs 809112 924 808188 1% /run
      /dev/disk/by-uuid/2af18760-96e3-4d1f-9815-3d5cbe8d3fa3 ext4 476868980 2494684 450150688 1% /
      tmpfs tmpfs 5120 0 5120 0% /run/lock
      tmpfs tmpfs 2400560 0 2400560 0% /run/shm
      tmpfs tmpfs 4045540 0 4045540 0% /tmp
      /dev/mapper/1tb_sata_raid5-samba_share ext3 2781053936 2562812520 77005224 98% /media/21a9613f-746f-4492-b5cd-5e4f3d2b4341
      /dev/mapper/1tb_sata_raid5-samba_share ext3 2781053936 2562812520 77005224 98% /export/blah1
      /dev/mapper/1tb_sata_raid5-samba_share ext3 2781053936 2562812520 77005224 98% /export/blah2
      /dev/mapper/1tb_sata_raid5-samba_share ext3 2781053936 2562812520 77005224 98% /export/blah3
      /dev/mapper/1tb_sata_raid5-samba_share ext3 2781053936 2562812520 77005224 98% /export/blah4
      ================================================================================
      = Partitions
      ================================================================================
      major minor #blocks name
      8 0 488386584 sda
      8 1 484472173 sda1
      8 2 1 sda2
      8 5 3911796 sda5
      8 16 976762584 sdb
      8 17 976758991 sdb1
      8 32 976762584 sdc
      8 33 976758991 sdc1
      8 48 976762584 sdd
      8 49 976758991 sdd1
      8 64 976762584 sde
      8 65 976758991 sde1
      8 80 976762584 sdf
      9 127 3907035648 md127
      253 0 2825388032 dm-0
    • You set up your array in the beginning, then put LVM on top of it, and you don't know what you did?

      How is this possible? I don't personally use LVM, so I would have to check in a virtual machine, but I am pretty sure it has an LVM grow button.

      LVM is a layer that sits between the block device and the file system. It lets you resize volumes logically, avoiding the restrictions of typical hard drive partition boundaries and the need to move data around to allocate space.

      Your blkid output shows an LVM signature there, so you probably need to install the LVM plugin to expand the volume group and logical volume.
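
      If you are comfortable on the CLI, you can also check whether LVM already sees the extra space and, if not, grow the physical volume first. A rough sketch, assuming the PV sits directly on /dev/md127 as your blkid output suggests:

      Source Code

      pvs                   # physical volumes and their sizes
      vgs                   # volume groups; VFree shows unallocated space
      pvresize /dev/md127   # grow the PV to match the reshaped array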
    • Subzero79 is right.

      You need to grow your volume in LVM first, and then you can grow the ext3 file system.

      Command line:

      Source Code

      [root@tng3-1 ~]# lvextend -l +100%FREE /dev/mapper/1tb_sata_raid5-samba_share


      After that you need to resize your ext3 with:

      Source Code

      resize2fs /dev/mapper/1tb_sata_raid5-samba_share


      That should do the job.
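
      If you want to double-check afterwards, lvs and df should both show the new size (the mount point is taken from your df output above):

      Source Code

      lvs                                                 # logical volume size should now match the grown volume group
      df -h /media/21a9613f-746f-4492-b5cd-5e4f3d2b4341   # mounted file system should show the new size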
    • Well, a HUGE thank you to you guys for your help - awesome support.

      Installed the LVM plugin, expanded the physical and logical volumes, and then the FS expand worked perfectly.
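
      For anyone who finds this later and prefers the CLI, those steps correspond roughly to this sequence (a sketch; device and volume paths taken from my output earlier in the thread):

      Source Code

      pvresize /dev/md127                                              # let the physical volume use the grown array
      lvextend -l +100%FREE /dev/mapper/1tb_sata_raid5-samba_share     # grow the logical volume
      resize2fs /dev/mapper/1tb_sata_raid5-samba_share                 # grow the ext3 file system to fill the LV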

      All this reminded me that this was in fact a RAID array originally created in OpenFiler and imported when I moved to OMV a long, long time ago. That would explain why the setup isn't typical.