Replace defective hard disk with a larger one (RAID 1)

    • Official Post
    1. Make sure your backups are up to date.
    2. Replace the drives.
    3. Reconfigure OMV to use the new drives instead of the old.
    4. Restore the backup to the new filesystem.


    Alternatively, if you have no backups, consider the good 4TB drive to be the backup, and restore/copy data from that in step 4.


    It is faster to do a local restore/copy over SATA rather than over GbE.
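
    As a rough sketch, such a local restore/copy could look something like this from the command line (assuming both filesystems are mounted under /srv, with hypothetical labels old4tb and new8tb; adjust the paths to your actual mount points):

        # copy everything from the old data disk to the new filesystem,
        # preserving permissions, ownership and timestamps
        rsync -avh --progress /srv/dev-disk-by-label-old4tb/ /srv/dev-disk-by-label-new8tb/

    The trailing slash on the source matters: with it, rsync copies the contents of the directory rather than the directory itself.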

  • Hi Adoby,


    I don't know if I understood you right:


    Take all 4TB drives out of the NAS and put the two 8TB drives in it.

    Configure the new ones as a RAID 1 and set up the mounts/shares?

    Put back the old working 4TB drive and restore from it to the new RAID/shares?


    Does "restore" mean a special function in OMV, or just copying via a computer?



    1. Make sure your backups are up to date.
    2. Replace the drives.
    3. Reconfigure OMV to use the new drives instead of the old.
    4. Restore the backup to the new filesystem.


    Alternatively, if you have no backups, consider the good 4TB drive to be the backup, and restore/copy data from that in step 4.


    It is faster to do a local restore/copy over SATA rather than over GbE.

    • Official Post

    You say that you want to replace the two 4TB drives with two 8TB drives.


    Then do that.


    Please note that my point 1 is critical. Please, please don't skip that. Never skip backups. If you skip backups you will lose your data, sooner or later.


    So I assume that you have good recent backups. If not, make sure to fix that before doing anything else. Otherwise you may lose all your data. Not having good backups is a way to ensure that you are likely to lose data.


    Only then replace the 4TB drives with the 8TB drives.


    Afterwards copy the contents of your most recent backups back to the new filesystem. How you do that depends on how you did the backups, external hard drive or over the network. Do the restore the same way, but in the other direction.
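
    For example, if the backup lives on another machine, pulling it back over the network could be sketched like this (user, backuphost and the paths are placeholders, not your actual setup):

        # pull the backup from a remote machine back onto the NAS over SSH
        rsync -avh --progress user@backuphost:/backups/nas/ /srv/dev-disk-by-label-new8tb/

    If the backup sits on a locally attached external drive, the same rsync form works with two local paths, which is the faster SATA route mentioned earlier.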


    Whether you continue to use RAID1 or not is up to you. I assume that you have very good reasons if you do? Using RAID1 as a substitute for backups is a really, really bad one.

  • I'm having the same idea....


    One of the two 2TB disks in my running RAID 1 died recently, so I bought two new 8TB drives.


    I replaced the failing 2TB disk with one of the new 8TB disks.

    Then I did a rebuild of the RAID 1.

    Next I removed the second (still working) 2TB disk and replaced it with the second 8TB disk.

    Then I did a rebuild of the RAID 1 again.
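
    For reference, the same swap could be sketched with mdadm on the command line (assuming the array is /dev/md0 and the old/new disks are /dev/sdb and /dev/sdc; the device names on your system will differ):

        # mark the old disk as failed and remove it from the mirror
        mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
        # add the new disk; the rebuild/resync starts automatically
        mdadm /dev/md0 --add /dev/sdc
        # watch the rebuild progress
        cat /proc/mdstat

    The same steps are then repeated for the second disk once the first rebuild has finished.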


    Now I have two 8TB drives containing all my data and everything is working fine, except that the RAID 1 is still 2TB instead of 8TB in size.

    Is there any way to grow an already running and working RAID 1? Like a partition resize or something?



    I thought I could do it here:



    But clicking on resize didn't do anything.....



    Thanks

  • Ok, I managed it the way you told me. I thought using RAID 1 is the best way of having a backup because of the mirroring?


    The last thing is that I need to increase the size, but I saw stefan's and geaves' topics and will try them...

    Whether you continue to use RAID1 or not is up to you. I assume that you have very good reasons if you do? Using RAID1 as a substitute for backups is a really, really bad one.

    • Official Post

    Ok, I managed it the way you told me. I thought using RAID 1 is the best way of having a backup because of the mirroring

    Nope, that's the first mistake users make, and with 8TB drives an even bigger one.


    The best way to do this is to have one drive for data and a second one running rsync or rsnapshot to create a copy of the first; the sync can be done overnight through scheduled jobs and run as regularly as you want. If you delete a file on your RAID it's gone; if you delete a file on your data drive there's a chance it will still be on the other.
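
    A minimal sketch of such a scheduled job, assuming the data and backup drives are mounted under /srv with hypothetical labels data and backup (in OMV the rsync line would go into a scheduled job, or into a cron entry like this):

        # nightly mirror of the data drive onto the backup drive at 02:00,
        # removing files on the backup that were deleted on the source
        0 2 * * * rsync -a --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-backup/

    Dropping --delete, or using rsnapshot instead, keeps older copies around, which is what gives you a chance to get an accidentally deleted file back.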

  • Nope, that's the first mistake users make, and with 8TB drives an even bigger one.


    The best way to do this is to have one drive for data and a second one running rsync or rsnapshot to create a copy of the first; the sync can be done overnight through scheduled jobs and run as regularly as you want. If you delete a file on your RAID it's gone; if you delete a file on your data drive there's a chance it will still be on the other.

    In my case the RAID/NAS is already a backup of my data... I have now managed to recover my 4TB onto the new 8TB drives, but the array is still 4TB.

    Clicking "Grow" in the RAID panel shows me a new window, but there are no drives available to add - both drives are already in the RAID array...

    Clicking "Resize" in File Systems does nothing...

    • Official Post

    In my case the RAID/NAS is already a backup of my data

    Still not a reason to use a RAID setup; doing what I suggested would back up the backup.


    SSH into OMV and execute mdadm --grow /dev/mdX --size=max, where X is your RAID reference, i.e. 0, 127, etc., then resize the file system; that you should be able to do from the WebUI.
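
    Roughly, assuming the array is /dev/md0 and carries an ext4 filesystem (adjust the device name, and skip the second command if you do the filesystem step from the WebUI):

        # grow the array to use the full capacity of the new disks
        mdadm --grow /dev/md0 --size=max
        # then grow the ext4 filesystem on it to fill the array
        resize2fs /dev/md0

    For other filesystems the second step differs (e.g. xfs_growfs for XFS).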

    • Official Post

    After I resized the RAID, it is permanently resyncing, and as soon as it's done it starts again

    I've not had experience of this, but it would suggest there's a drive issue or a controller problem. I have done a search, and one option would be to run mdadm --monitor /dev/mdX, where X is the RAID reference.
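
    A few commands that may help narrow this down (md0, sda and sdb are placeholders for your array and its member disks; smartctl comes from the smartmontools package):

        # watch the resync status and progress
        cat /proc/mdstat
        # detailed state of the array, including failed or spare devices
        mdadm --detail /dev/md0
        # check the SMART health of each member disk
        smartctl -a /dev/sda
        smartctl -a /dev/sdb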

  • I've not had experience of this, but it would suggest there's a drive issue or a controller problem. I have done a search, and one option would be to run mdadm --monitor /dev/mdX, where X is the RAID reference.

    Hi,

    Unfortunately I was busy with other things in the last few days. Yesterday I entered the command via PuTTY; it seems to have been successful, but there was no feedback in SSH. Since last night the RAID is clean, but now another problem has appeared: the disks do not go into standby anymore. I had set 1 (spindown) with a time of 10 minutes (I use the server occasionally and then continuously), but unfortunately the spindown does not happen. The SMART value check is switched off, and the disks can be put into standby manually with hdparm -y.
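
    For reference, the equivalent from the command line would look roughly like this (sdX is a placeholder for a member disk; -S 120 means 120 x 5 s = 10 minutes):

        # set the drive's standby (spindown) timeout to 10 minutes
        hdparm -S 120 /dev/sdX
        # send the disk into standby immediately
        hdparm -y /dev/sdX
        # check the current power state without spinning the disk up
        hdparm -C /dev/sdX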

    • Official Post

    Since last night the RAID is clean, but now another problem has appeared: the disks do not go into standby anymore.

    TBH I would not say that is a success; if the resyncing continued to restart there could be an underlying issue, and you want to spin the drives down :huh: I don't do this, never have, but there is a guide here using hd-idle; read the whole thread as there have been some changes.
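
    As a rough sketch only, since the guide in that thread is the authoritative source: with the classic hd-idle Debian package the timeouts typically go into /etc/default/hd-idle, along these lines (exact options depend on the hd-idle version):

        # spin down sda and sdb after 600 seconds of inactivity,
        # and disable the default timer for all other disks
        START_HD_IDLE=true
        HD_IDLE_OPTS="-i 0 -a sda -i 600 -a sdb -i 600"

    hd-idle watches the kernel's disk I/O statistics itself, which is why it often works on drives that ignore the hdparm -S timer.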
