Replaced smaller HDD in RAID5 with larger HDD

    • OMV 1.0
    • Resolved
    • Replaced smaller HDD in RAID5 with larger HDD

      Hello, all! Long-time lurker and first-time poster. I currently have a RAID5 setup as follows: 2 x 500 GB HDDs and 1 x 1 TB HDD. I was initially running a 320 GB drive in tandem with the 2 x 500 GB HDDs in RAID5, so the most storage available to me was about 600 GB across the three drives; I know that RAID5 capacity is limited by the smallest member. The 320 GB HDD was recently replaced with a 1 TB HDD because I was running out of storage. I powered everything down (although my mobo does support hot swap) and the array rebuilt fine. My only remaining issue is figuring out how to grow the RAID5 setup now that the 320 GB HDD is no longer the weakest link; I should be seeing around 1 TB of storage. Can anyone walk me through this? I have proper backups of all my folders ready to go in case I need to rebuild the whole thing. Thanks!

      Edit: I am running the latest version of OMV, 1.4.
    • No need to rebuild: the first step should be Grow under RAID Management, then Resize under File Systems.
      Homebox: Bitfenix Prodigy Case, ASUS E45M1-I DELUXE ITX, 8GB RAM, 5x 4TB HGST Raid-5 Data, 1x 320GB 2,5" WD Bootdrive via eSATA from the backside
      Companybox 1: Standard Midi-Tower, Intel S3420 MoBo, Xeon 3450 CPU, 16GB RAM, 5x 2TB Seagate Data, 1x 80GB Samsung Bootdrive - testing for iSCSI to ESXi-Hosts
      Companybox 2: 19" Rackservercase 4HE, Intel S975XBX2 MoBo, C2D@2200MHz, 8GB RAM, HP P212 Raidcontroller, 4x 1TB WD Raid-0 Data, 80GB Samsung Bootdrive, Intel 1000Pro DualPort (Bonded in a VLAN) - Temp-NFS-storage for ESXi-Hosts
    • Thanks for the reply, guys! Yes, I have tried to grow the array in the GUI and tried to grow the file share. However, the only option the GUI offers for growing the RAID array is adding another disk that is not already in the array. Since the array doesn't realize that the smaller HDD is no longer the weakest link, it's looking like I'm going to have to rebuild.
    • OK, then you should grow the RAID from the command line.
      What's the output of mdadm --detail /dev/md127?


    • Datadigger, the output from that command is:

      Source Code

      root@NAS:/# mdadm --detail /dev/md127
      /dev/md127:
              Version : 1.2
        Creation Time : Sat Nov 15 13:39:02 2014
           Raid Level : raid5
           Array Size : 624879616 (595.93 GiB 639.88 GB)
        Used Dev Size : 312439808 (297.97 GiB 319.94 GB)
         Raid Devices : 3
        Total Devices : 3
          Persistence : Superblock is persistent

          Update Time : Sun Dec  7 03:00:19 2014
                State : clean
       Active Devices : 3
      Working Devices : 3
       Failed Devices : 0
        Spare Devices : 0

               Layout : left-symmetric
           Chunk Size : 512K

                 Name : AsrockOMV:RaidShare
                 UUID : 5f73024b:b13e554f:0fe163f1:42238228
               Events : 344

          Number   Major   Minor   RaidDevice State
             0       8       16        0      active sync   /dev/sdb
             3       8       32        1      active sync   /dev/sdc
             2       8       48        2      active sync   /dev/sdd


      I am running all of this over SSH; is that adequate for these operations, or do I need to be interfacing directly with the machine? Thank you for your input!
    • If you can trust your network quality, then it is OK. I do all the management stuff over SSH, because none of the machines in my footer have a monitor or keyboard, and some of them are far away from my desk.

      OK, your RAID actually looks good. Now let's do some final checks.
      Please post the output of:
      df -h
      fdisk -l
      egrep 'ata[0-9]\.|SATA link up' /var/log/dmesg
      so we can check whether the HDDs are correctly recognized.
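
      As an extra sanity check, you can also ask the md layer directly what it sees. A rough sketch (optional, and assuming the members are /dev/sdb, /dev/sdc and /dev/sdd, as in your mdadm output above):

      Source Code

      # Member disks, sync status and any degradation at a glance
      cat /proc/mdstat

      # Per-member superblock info; 'Avail Dev Size' shows how much space
      # mdadm can see on each disk, so the new 1 TB member should stand out
      mdadm --examine /dev/sdb /dev/sdc /dev/sdd | grep -E 'dev/|Dev Size'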
    • Datadigger, it looks like everything is sound as far as HDD recognition goes. The SMART-enabled tasks also seem to see everything correctly. Here is the output from those commands:

      Source Code

      root@NAS:/# df -h
      Filesystem                                              Size  Used Avail Use% Mounted on
      rootfs                                                   28G  2.1G   25G   8% /
      udev                                                     10M     0   10M   0% /dev
      tmpfs                                                   773M  5.1M  768M   1% /run
      /dev/disk/by-uuid/1586cda7-5767-4819-bd44-b0fd1583fd57   28G  2.1G   25G   8% /
      tmpfs                                                   5.0M     0  5.0M   0% /run/lock
      tmpfs                                                   1.8G     0  1.8G   0% /run/shm
      tmpfs                                                   3.8G  1.5M  3.8G   1% /tmp
      /dev/md127                                              587G  399G  188G  68% /media/7b8aff7b-ad6a-4b83-b93e-44050f01241e
      root@NAS:/# fdisk -l

      Disk /dev/sdb: 500.1 GB, 500107861504 bytes
      255 heads, 63 sectors/track, 60801 cylinders, total 976773167 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdd: 500.1 GB, 500107861504 bytes
      255 heads, 63 sectors/track, 60801 cylinders, total 976773167 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x10121012

      Disk /dev/sdd doesn't contain a valid partition table

      Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sda: 32.0 GB, 32017047552 bytes
      255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0004b9b2

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048    59895807    29946880   83  Linux
      /dev/sda2        59897854    62531583     1316865    5  Extended
      /dev/sda5        59897856    62531583     1316864   82  Linux swap / Solaris

      Disk /dev/md127: 639.9 GB, 639876726784 bytes
      2 heads, 4 sectors/track, 156219904 cylinders, total 1249759232 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
      Disk identifier: 0x00000000

      Disk /dev/md127 doesn't contain a valid partition table

      root@NAS:/# egrep 'ata[0-9]\. |SATA link up' /var/log/dmesg
      [    1.090837] ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [    1.090865] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [    1.094770] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      [    1.098786] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)


      There is a 32 GB SSD that is connected to the SATA3 port that hosts the OS. Then I have 2 x 500 GB Hitachi drives in tandem with 1 x 1 TB WD Red on the SATA2 ports. So everything seems to be in order.
      Edited for accuracy and posterity!


    • Datagravedigger, please. :)

      Yes, that looks OK. So let's start growing. Do you have a complete backup of the data partition handy? Even the makers of mdadm cannot and will not guarantee that everything goes well, and every system has its peculiarities.

      From the command line, enter: mdadm --grow /dev/md127 --size=max
      If you look in parallel at RAID Management in the web GUI, you should see that the resyncing process starts immediately. Also look at the capacity value; it should already show the grown disk space, roughly 1 TB (500 x 3 - 500). Let it finish.

      If you enter df -h after the resyncing process is complete, you will see no difference yet: the filesystem has to be grown, too. This can be done from the web GUI; click on the RAID partition and then on the Resize button. This should be fast, and the new size will be shown. Enter df -h again and it should now report the new size.
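
      For reference, the whole procedure can also be driven from the shell. A rough sketch, assuming the filesystem on /dev/md127 is ext4 (OMV's default, though not stated explicitly in this thread), so that resize2fs stands in for the GUI's Resize button:

      Source Code

      # Grow the md device to use all available space on its members
      mdadm --grow /dev/md127 --size=max

      # Watch the resync and wait until it finishes
      watch cat /proc/mdstat

      # Grow the ext4 filesystem online to fill the enlarged device
      resize2fs /dev/md127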
    • johndoe86x wrote:

      Awesome, I'll give that a shot when I get home from the office. When you say a complete backup of the data partition, do you mean the files on the Samba share?

      Exactly. You are about to alter the volume underneath the Samba shares; the system drive won't be affected (fingers crossed...).

      johndoe86x wrote:

      Or is there something more technical I need to backup?

      Well, it is always a good idea to have a full backup of the system for rainy days.
      There are two plugins available for backup: openmediavault-backup 1.0.6 (delivered by omv-extras) for the system, and openmediavault-usbbackup 1.1 for the data shares.
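
      If you'd rather not rely on a plugin, a plain rsync run works for the data as well. A rough sketch, where the source path is the RAID mount point from your df -h output and /media/backup is just a placeholder for wherever the backup disk is mounted:

      Source Code

      # One-way mirror of the data share onto an already-mounted backup disk
      rsync -aHv --delete \
          /media/7b8aff7b-ad6a-4b83-b93e-44050f01241e/ \
          /media/backup/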

      johndoe86x wrote:

      I apologize for the name misspelling! 8|

      Nevermind. Just kidding. ;)
    • Well, I decided to do some final prep work before running those commands you gave me. I ran the update manager from the GUI, and I am now on version 1.5. Then I heard an odd "crunching" noise coming from one of the HDDs in the NAS. I rebooted from the GUI, and it all seemed silent afterwards. I decided to add the backup plugin (I already had USB Backup) just for one more layer of safety. It doesn't seem like the backup plugin ever finished installing, and CPU usage is hovering around 57% on a quad-core i5 2500k. It's been like this for about 10 minutes. I'm going to leave it alone for now until I get some feedback. Datadigger, what would you say to do?

      Edit: Seems to be back up and running. Away we go! I'll report back in a moment!
      Edit: Just entered the first command to grow the array from the CLI. Everything looks great so far! I will probably expand the share in the morning and report back. Thanks so much again, Datadigger!

      Update: Before I grew the array, the "crunching" turned into the HDD refusing to spin up at all. As of right now, I have the NAS turned off while I wait for the replacement drive. When I shut it down, the array was in a degraded state, yet it already showed 1 TB. Thanks again, Datadigger!
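
      The plan once the replacement arrives, if I've understood the standard mdadm workflow correctly (the device name /dev/sde is a guess until the new disk is actually plugged in):

      Source Code

      # Confirm which member is missing and that the array is degraded
      mdadm --detail /dev/md127

      # Add the replacement disk; md starts rebuilding onto it immediately
      mdadm --manage /dev/md127 --add /dev/sde

      # Watch the rebuild until it completes
      watch cat /proc/mdstat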
