Unable to resize

    • OMV 1.0


      I have been using OMV for a little over a year. I started out with 3x4TB drives in a software RAID5 with an XFS filesystem, and eventually grew to 6x4TB drives in the same configuration.

      I chose software RAID because I wasn't an expert at setting up the hardware controller (a RocketRaid 2720), and I didn't want to pull the system off the shelf and connect a keyboard and monitor every time I added a drive. However, four or five times during the last year the RAID failed with a missing drive. With help from the forum users here, I was able to put it back together from the CLI without data loss, so I didn't worry too much. But last week it happened again with two missing drives, and I lost all of the data. Luckily, I had the important data on a backup.

      I decided to take the time to get the RAID controller drivers and WebUI working before storing data again, hoping this will be more reliable. I was able to install the latest drivers and WebUI. I used the WebUI to create a RAID5 with 3 drives, then set up the OMV apps that I normally use for testing. After that, I added another drive to the RAID via the WebUI. It took about 2 days, but it finally completed this morning; I even received an email from the app saying the expansion was complete.

      In OMV, I had to reboot the system before it detected the larger disk size. Shouldn't it have detected it when I clicked on Scan?

      But, when I click on Resize, nothing happens.

      What should I do now?

      Thanks, Eddie
      • highpoint.png (28.85 kB, 653×260)
      • physical.png (10.53 kB, 539×65)
      • filesystem.png (20.2 kB, 898×104)
    • I am still trying to figure this out. I think it has to do with the partition not being resized.

      The disk now shows 16TB, but the partition is only 12TB. How do I get them to match?

      Source Code

      Model: HPT DISK_4_0 (scsi)
      Disk /dev/sdb: 16.0TB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt

      Number  Start   End     Size    File system  Name  Flags
       1      1049kB  12.0TB  12.0TB  xfs
    • Ok, I finally figured out how to do this. I want to post the steps here in case they help someone else.

      First you have to disable SMB and shut down any services that may be using the drive (e.g. SABnzbd, Deluge). I just used the CLI to stop them, e.g. 'service sabnzbd stop'.
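      That step can be sketched as a small loop. The service names below are only examples; substitute whatever is actually writing to your array:

```shell
# Example service names only; list the services that touch the array.
services="smbd sabnzbd deluged"
for svc in $services; do
    # 'echo' keeps this a dry run; remove it to actually stop the services.
    echo service "$svc" stop
done
```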

      Then use the CLI to unmount the drive.

      Source Code

      umount /media/omvuuid

      Once the drive is unmounted, use the parted command to delete and recreate the partition.

      Source Code

      parted /dev/sdb

      Change /dev/sdb to your drive. Use 'print' to see the current configuration. I changed the units to TB with 'unit TB'; I'm not sure whether that was needed.

      As you can see from my earlier post today, I was going from a 12TB to a 16TB partition. While in parted, I issued the following commands.

      Source Code

      rm 1
      mkpart 1 0 16.0TB

      I used print again to confirm it worked. I then quit parted and rebooted my system.

      Once I logged back into OMV I could see my drive was connected and no data was lost. Under File Systems, there was no change. But after I clicked on Resize, it did show the correct increase in size. I then re-enabled SMB.
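      For anyone repeating this: the delete-and-recreate trick only preserves data because the new partition starts at the same offset as the old one; if the start moves, the filesystem is destroyed. A hedged sketch of the safer sequence follows, where the device name and the 2048s start sector are placeholders and the echoes keep it a dry run:

```shell
DEV=/dev/sdb    # placeholder; use your device
# Dry run: each command is printed, not executed. Remove 'echo' to run.
echo parted "$DEV" unit s print   # note the exact Start sector of partition 1
echo parted "$DEV" rm 1
# 2048s below is a stand-in; the new partition MUST start at the sector
# recorded above, or the data will be lost.
echo parted "$DEV" mkpart 1 2048s 100%
```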
    • wow ... that was dangerous ...

      The right way would have been with parted:

      Source Code

      parted /dev/sdb
      print

      now note down the start of your partition

      Source Code

      resize 1 <start> 16.0TB

      BTW: you do not have RAID protection on your drive. With RAID5 you should only have 12TB; you obviously have RAID0, which leads to total data loss if one of your 4 drives fails.
      Everything is possible, sometimes it requires Google to find out how.
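      The capacity arithmetic behind that remark can be sketched quickly, using the 4TB drive size from the thread:

```shell
# Usable capacity with identical 4TB drives:
size_tb=4
raid5_4drv=$(( (4 - 1) * size_tb ))   # RAID5 spends one drive on parity
raid0_4drv=$(( 4 * size_tb ))         # RAID0 has no redundancy
raid5_5drv=$(( (5 - 1) * size_tb ))   # a 5-drive RAID5 also yields 16TB
echo "4-drive RAID5: ${raid5_4drv}TB  4-drive RAID0: ${raid0_4drv}TB  5-drive RAID5: ${raid5_5drv}TB"
```

      Note that 16TB is consistent with either a 4-drive RAID0 or a 5-drive RAID5, which is why the observed size alone is ambiguous here.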
    • Thank you for the reply. I'm sorry I didn't thank you sooner, but I just saw your reply while I was searching my posts.

      SerErris wrote:

      BTW: you do not have Raid protection on your drive - you should only have 12 TB with Raid5 ... You obviously have Raid0 which leads to a total data loss in case one of your 4 drives fail.

      This was/is RAID5. It showed 16TB in that post because I had added another drive since the previous one.

      Now I need to add another 4TB drive, for a total of 20TB. I tried your suggestion, "resize 1 0.00TB 20.0TB", but I'm getting the following error.

      Source Code

      WARNING: you are attempting to use parted to operate on (resize) a file system.
      parted's file system manipulation code is not as robust as what you'll find in
      dedicated, file-system-specific packages like e2fsprogs. We recommend
      you use parted only to manipulate partition tables, whenever possible.
      Support for performing most operations on most types of file systems
      will be removed in an upcoming release.
      No Implementation: Support for opening xfs file systems is not implemented yet.

      So it appears the way I did it before is the only way. I was able to grow it to 20TB using that method.