Wrong ext4 filesystem size on RAID 5 array after expand/extend (HP Smart Array P410i)

  • Hi,


    This is a new and puzzling situation for me, on one of my NAS boxes.


    The NAS concerned:


    OMV 3.0.99, OS on an SSD, HP Smart Array P410i card with 5 x 4TB HDDs, plus 3 HDDs on the motherboard SATA ports, each used as a single partition.


    RAID 5 created and managed by the HP Smart Array P410i card with 1GB FBWC; OMV sees the ~14.5TB logical volume as device /dev/sdb1, formatted ext4 by OMV 3.


    Everything works fine: ssh, ftp, samba, omv-extras, the HWraid plugin, HPraid info, shared folders.


    Over ssh, df -h shows /dev/sda1 15T 12T 3,2T 79% /srv/dev-disk-by-label-NAS...., and I see the same values in the OMV web GUI and from a Windows 10 file browser.



    New step:
    A few days ago I installed a new 4TB HDD in the array. With HP ACU I ran the successive steps Expand, Transform and Parity Progression, then extended the logical volume to 18.2TB. Parity is 100% complete.


    With the HP tools (ssacli, hpacucli) I can see this:

    Code
    Smart Array P410 in Slot 1
       Array A
          Logical Drive: 1
             Size: 18.2 TB


    I thought it was finished.


    I restarted the system.


    OMV displays the same physical disk "Logical Volume" with an 18.19TB size in the left menu "Storage/Physical Disks".
    OMV displays the previous partition size in the left menu "Storage/File Systems": /dev/sda1 NAS_genna Ext4 14.44 TB 3,14 TB 11.29 TB Mounted...
    Beforehand I deleted every reference so I could unmount, in case that was the right thing to do, but it wasn't enough.


    Resizing does not work for this. I mounted/unmounted and deleted the shared folders (without their contents); I can unmount it because nothing references a shared folder any more. I shut down and went into HP ACU: the size is OK and the RAID 5 status is OK.


    I thought the HP utility would grow the volume by itself without any further command, and it did. But OMV doesn't use this new space.


    As I don't use mdadm for this RAID, I can't use it here.


    I have another NAS with a 19.3 TB volume formatted by OMV, also managed by a P410i, so I don't think there is some odd limit.


    I don't want to lose any data with an inappropriate command.


    I don't know whether I have to run resize2fs or not. Is it just a single partition resize that's needed?


    partprobe reports that I am not using the full capacity.
    fdisk -l displays:


    Code
    Disk /dev/sda: 18,2 TiB, 20003765968896 bytes, 39069855408 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    
    
    Device     Start         End     Sectors  Size Type
    /dev/sda1   2048 31255884430 31255882383 14,6T Linux filesystem
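
    A quick sanity check on the fdisk numbers above (plain arithmetic on the sector counts copied from that output) confirms the gap between disk and partition:

```shell
# Sector counts copied verbatim from the fdisk -l output above
disk_sectors=39069855408    # whole /dev/sda
part_sectors=31255882383    # /dev/sda1

# 512-byte sectors; 2^40/512 sectors per TiB
awk -v d="$disk_sectors" -v p="$part_sectors" 'BEGIN {
    t = 2^40 / 512
    printf "disk: %.2f TiB, partition: %.2f TiB, unused: %.2f TiB\n",
           d/t, p/t, (d-p)/t
}'
```

    This prints disk: 18.19 TiB, partition: 14.55 TiB, unused: 3.64 TiB — almost exactly the one 4TB drive's worth of space the array gained.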

    Can you help me? It seems I need another command to extend the partition, like gparted can do on a single HDD.


    thanks.


    edit:


    I searched the forum for how to grow ext4 and ran this:
    resize2fs -p /dev/sda1
    The result told me I first needed to run: e2fsck -f /dev/sda1
    I ran resize2fs -p /dev/sda1 again and got this:


    Code
    resize2fs 1.43.3 (04-Sep-2016)
    
    
    The filesystem is already 3906985297 (4k) blocks long.  Nothing to do!

    Nothing has changed. Is there a limit, or a link with the -O 64bit option from when I created the first partition (size under 16TB)?
    Do I have to re-create the volume?

    • Official post

    The problem is that your filesystem is on a partition. The RAID card grows the "disk" that Linux sees; now you need to grow the partition. This can't be done while the filesystem is mounted. parted can resize it after it is unmounted. Then resize2fs, or the resize button in the web interface, will work.
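
    The fsck-then-resize2fs part of this sequence can be rehearsed safely on a scratch image file instead of the real array. A minimal sketch with hypothetical filenames (on the real disk there is also the partition step, e.g. parted /dev/sda resizepart 1 ... while unmounted, before running e2fsck/resize2fs on /dev/sda1):

```shell
set -e
# Simulate the controller-grown disk: an ext4 image that starts
# at 100 MiB, then gains extra space "underneath" the filesystem.
truncate -s 100M scratch.img
mkfs.ext4 -F -q scratch.img

truncate -s 200M scratch.img   # the "RAID card" doubles the device

e2fsck -f -p scratch.img       # resize2fs insists on a fresh clean fsck first
resize2fs scratch.img          # grow the filesystem to fill the new space
dumpe2fs -h scratch.img 2>/dev/null | grep 'Block count'
```

    The reported block count doubles along with the image, which is exactly what resize2fs refused to do above: from its point of view the partition had not grown, so there was "nothing to do".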


    The resize button in the web interface is really there for LVM and mdadm raid.


    If you know you are going to be adding disks, having the filesystem on top of LVM would work much better.


    The 64bit flag is automatically specified if you created the filesystem with OMV on a 64 bit machine.
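
    This matters for the 16TB question above: without the 64bit feature, ext4 with 4k blocks cannot grow past 16 TiB. Whether a given filesystem has it can be checked with tune2fs; a sketch demonstrated on a scratch image file (hypothetical name — on the real system the argument would be /dev/sda1):

```shell
set -e
truncate -s 64M feat.img
mkfs.ext4 -F -q -O 64bit feat.img    # force the 64bit feature for the demo
tune2fs -l feat.img | grep 'Filesystem features'
```

    The feature list printed should include 64bit. If it is missing on an existing filesystem, recent e2fsprogs (1.43 and later) can convert it offline with resize2fs -b while unmounted.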


  • Code
    (parted) resizepart
    Partition number? 1
    Warning: Partition /dev/sda1 is being used. Are you sure you want to continue?
    Yes/No? yes
    End?  [16,0TB]? 16
    Warning: Shrinking a partition can cause data loss, are you sure you want to continue?

    I don't know if parted works like gparted (I suppose so, but I have no experience with it on the CLI) and I'm afraid of doing something silly.
    As you can see, I can't seem to go above 16TB.


    What I remember from my other NAS (19TB): I created the full RAID array with all 8 HDDs at 19TB, then formatted it with OMV.


    And yes, in the near future I will add one or two disks, so I'm wondering whether I should wait until I have all the HDDs.

  • You'd better shut the server down and boot the systemrescuecd ISO to launch gparted and proceed with the partition resize.

    OK, thanks subzero79. I have done a lot of gparted operations on HDDs, but never on a RAID 5 partition. If there were no data on it I would try all sorts of things, but in this case I am on thin ice :)


    It's safer to back up all the data before doing anything else. After that, gparted on boot :)

  • hi,


    In progress. First I ran a check of the partition, to be sure; all flags were green.
    Then, resizing...


    with subzero79's systemrescuecd ISO, systemrescuecd-x86-5.2.2.iso
    thanks for the tip :)


    edit: And then...


    It took 17 minutes... I expected longer, but nice job :)


    After setting the shares and users back up:


    I learned new things from this experience. Thank you all :)

    What funny timing!


    I've just received a new 4TB HDD... Now, with your help, I have the right method to do what I want and can grow my partition (again)... until the next time (7 of the 8 ports are used; the last RAID card port is still free).

    • ACU => expand (use available unit)
    • ACU => parity check
    • ACU => extend (logical volume)
    • Gparted => check and resize filesystem.
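
    The capacities seen throughout this thread follow directly from the RAID 5 formula (one disk's worth of space goes to parity) plus the decimal-TB vs binary-TiB difference; a small check:

```shell
# RAID 5 usable space with n disks: (n-1) data disks' worth.
# Drives are sold in decimal TB (1000^4 bytes); fdisk and OMV
# report binary TiB (2^40 bytes), hence "4TB" disks look smaller.
for n in 5 6 7; do
    awk -v n="$n" 'BEGIN {
        printf "%d x 4TB RAID 5 -> %.2f TiB usable\n", n, (n-1)*4*1000^4/2^40
    }'
done
```

    This gives ~14.55 TiB for the original 5 disks and ~18.19 TiB for 6 — matching both df and fdisk above — and predicts ~21.83 TiB once this seventh disk is absorbed.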
