Migration from Rsync to RAID 1

    • Migration from Rsync to RAID 1

      Hello guys, I have an HP ProLiant G8 with three disks: (1) an SSD for the system, (2) a primary 3 TB WD Red and (3) a 2 TB WD Green for the backup. I use Rsync every night to copy the important data from the primary disk to the backup disk.
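
      (As a rough illustration only: whether that job is set up in OMV's Rsync jobs page or in cron, it boils down to something like the line below. The mount-point paths are placeholders, not the actual ones.)
      # illustrative nightly copy from the primary data disk to the backup disk
      rsync -av --delete /media/<primary-uuid>/ /media/<backup-uuid>/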

      I've been seeing some disk crashes on my friends' NAS boxes and I'm thinking about adding more redundancy to my system. If my primary disk fails, I will have to replace it, copy the data from the backup to the primary, rebuild the shared folders, the plexmediaserver folder, etc... it's quite annoying. With a RAID, if a disk crashes, you just need to unplug the old one, plug in the new one and rebuild the RAID.

      So, would you recommend buying another 3 TB WD Red and creating a RAID 1 with my current primary WD Red? Or maybe using another brand to avoid some "brand defects"?
      Will I lose some read/write performance with RAID 1?

      I know it's gonna take some time to rebuild my shares with the new UUID, etc.


      Thanks for your advice, guys!


    • Actually, if a disk failed, all you would have to do is change the filesystem uuid of the drive still working to the filesystem uuid of the failed drive and reboot.

      To convert to raid1, you will have to take the data off the two drives (unless you want to risk creating a degraded array from command line). Create the raid 1 array and create the filesystem. Depending on how much command line work you want to do, you could actually change the uuid of the filesystem on the array to the filesystem uuid of the original single drive. OMV wouldn't know the difference after a reboot (or remounting).
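
      (For illustration only, in case that ever has to be done: the swap is just reading the failed filesystem's uuid and stamping it onto the surviving ext4 filesystem with tune2fs. The device name below is a placeholder.)
      # look up the uuid the failed drive was mounted by
      grep UUID /etc/fstab
      # unmount the surviving drive, then give its filesystem that uuid (ext4 assumed)
      umount /dev/sdXY
      tune2fs -U <uuid-of-failed-drive> /dev/sdXY
      # after a reboot (or remount), OMV treats it as the same filesystem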

      The raid1 array will use a bit more cpu but probably not noticeable on most systems.
    • ryecoaaron wrote:

      Actually, if a disk failed, all you would have to do is change the filesystem uuid of the drive still working to the filesystem uuid of the failed drive and reboot.

      To convert to raid1, you will have to take the data off the two drives (unless you want to risk creating a degraded array from command line). Create the raid 1 array and create the filesystem. Depending on how much command line work you want to do, you could actually change the uuid of the filesystem on the array to the filesystem uuid of the original single drive. OMV wouldn't know the difference after a reboot (or remounting).

      The raid1 array will use a bit more cpu but probably not noticeable on most systems.


      Hi @ryecoaaron and thank you for your answer.
      I want to be sure that I understood it the right way.

      1) Are you telling me that with my current configuration (only one active primary disk and a backup disk for rsync), if my primary WD Red fails, I just need to copy its UUID onto the new disk that I would buy?
      How do I do that? I don't see this option in the OMV GUI.

      2) In order to have a clean installation, I would format my primary WD Red and create a RAID with it and a new disk.
      Can I then take the UUID of my WD Red and put it on the RAID 1 array so I don't have to change any OMV settings for the shared folders, etc.?
      Again, how?

      3) Concerning the brand of this new disk, would you recommend buying another 3 TB WD Red and creating a RAID 1 with my current primary WD Red? Or maybe using another brand to avoid some "brand defects"?
    • 1) Most of what I recommend can't be done in the web interface. BUT, you should only have to do it IF the drive fails. It is complicated but easier than changing everything in the web interface. Why don't we wait until it actually happens before worrying about a bunch of commands that would need to be adapted to the exact situation anyway.

      2) Yes, you can take the uuid of the RED and change the array to use it. Once again, why don't we wait until the actual time you are going to do this.

      3) I have identical drives in my arrays.
    • ryecoaaron wrote:

      1) Most of what I recommend can't be done in the web interface. BUT, you should only have to do it IF the drive fails. It is complicated but easier than changing everything in the web interface. Why don't we wait until it actually happens before worrying about a bunch of commands that would need to be adapted to the exact situation anyway.

      2) Yes, you can take the uuid of the RED and change the array to use it. Once again, why don't we wait until the actual time you are going to do this.

      3) I have identical drives in my arrays.


      Thanks @ryecoaaron, very clear answers.
      So, I'm going to order this new 3 TB WD Red, probably today. So, I think we need to get into this asap :)

      1) I will start by copying everything from my primary disk to the backup disk so it is up to date.
      2) I will format both WD Red disks with ext4 as the filesystem.
      3) I will create the RAID 1.

      4) Change the UUID of the array to the old UUID of my primary WD Red disk.
    • Need to make some changes... You don't put filesystems on the individual disks for a raid array.

      1 - Definitely backup all files to the backup disk.
      2 - Put the new drive in
      3 - boot

      The rest will have some command line stuff...
      4 - Get uuid from old WD Red from /etc/fstab or blkid
      5 - umount old WD Red - umount /dev/sdXY (will replace sdXY when you have the system running)
      6 - Wipe the old WD Red with dd if=/dev/zero of=/dev/sdX bs=512 count=100000 (will replace sdX when you have the system running)
      7 - Create raid 1 array in web interface
      8 - Create a filesystem on the array from command line - mkfs.ext4 /dev/mdX (will replace mdX when you have the array running)
      9 - Change uuid - tune2fs /dev/mdX -U long_uuid
      10 - reboot
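
      (Two optional checks, assuming the array shows up as /dev/md0 - that name is a guess and may differ on your system.)
      # watch the raid1 build/resync progress
      cat /proc/mdstat
      # after step 9, confirm the array's filesystem now carries the old uuid
      blkid /dev/md0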
    • Migration from Rsync to RAID 1

      Thanks @ryecoaaron
      Clear.

      1) Could I choose a 4 TB disk for the RAID with my current 3 TB one? I found a 4 TB at a good price.
      I know my RAID 1 would only be 3 TB, but if I need more capacity in the future, I would just need to upgrade one disk.

      2) Can I move the current backup disk from slot 2 to slot 3 so I have the two RAID disks together in slots 1 & 2? Or do you think it's a useless thing that would just be extra work for nothing? Ahaha

      Thanks !
    • 1 - yep, you can do that.

      2 - The slots don't matter. OMV/Debian uses the uuid of the drive.
    • Migration from Rsync to RAID 1

      ryecoaaron wrote:

      Yep. sdX isn't used. uuid is :)


      Great!
      I'm ordering the disk next week. I'll keep you updated!

      Is it better to create the RAID 1 and then recreate the shared folders, etc., from scratch?

      Or is the UUID-changing manipulation clean enough?

      How does the UUID work? When installing a new disk, does OMV check the existing UUIDs in the system to make sure it doesn't pick one that already exists?
    • No, you don't need to recreate the shared folders.

      Changing uuid is easy.

      It isn't OMV using the uuid. It is /etc/fstab. It just looks for a drive with the specific uuid and mounts it at the mount point in fstab. Once you get your drive we will change the uuids of the new array and the old drive. Let me know when you get the new drive. I wouldn't start doing anything with it until I let you know what I would do.
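
      (For illustration, an /etc/fstab data-disk entry looks roughly like the line below; the uuid is a placeholder and the mount options OMV writes may differ.)
      # illustrative fstab entry: mount whatever filesystem carries this uuid at /media/<uuid>
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ext4 defaults,nofail,user_xattr 0 2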
    • ryecoaaron wrote:

      No, you don't need to recreate the shared folders.

      Changing uuid is easy.

      It isn't OMV using the uuid. It is /etc/fstab. It just looks for a drive with the specific uuid and mounts it at the mount point in fstab. Once you get your drive we will change the uuids of the new array and the old drive. Let me know when you get the new drive. I wouldn't start doing anything with it until I let you know what I would do.


      Hi @ryecoaaron,

      I have ordered the new disk; it's coming in two days. So, here are the steps, to be confirmed by you:

      1 - Definitely backup all files to the backup disk. --> DONE
      2 - Put the new drive in --> TO BE DONE ON RECEPTION
      3 - boot --> TO BE DONE ON RECEPTION


      4 - Get uuid from old WD Red from /etc/fstab or blkid
      root@microserver:/# blkid
      /dev/sda1: UUID="33e26213-1515-44ce-b796-10264a059db6" TYPE="ext4"
      /dev/sda5: UUID="0a41d525-eb82-4c06-b49d-33b931e91bf5" TYPE="swap"
      /dev/sdb1: LABEL="Master" UUID="97f93447-ec94-467a-821c-9745c9a46684" TYPE="ext4"
      /dev/sdc1: LABEL="Backup" UUID="8c1e0c56-6589-487b-baaa-0bd650cd28ef" TYPE="ext4"


      5 - umount old WD Red - umount /dev/sdXY (will replace sdXY when you have the system running) --> TO BE DONE ON RECEPTION

      --> umount /dev/sdb1

      6 - Wipe the old WD Red with dd if=/dev/zero of=/dev/sdX bs=512 count=100000 (will replace sdX when you have the system running) --> TO BE DONE ON RECEPTION

      --> dd if=/dev/zero of=/dev/sdb bs=512 count=100000


      7 - Create raid 1 array in web interface

      --> TO BE DONE ON RECEPTION

      8 - Create a filesystem on the array from command line - mkfs.ext4 /dev/mdX (will replace mdX when you have the array running)

      --> TO BE DONE ON RECEPTION
      --> I guess I will get this information with the "blkid" command to find out the "mdX" of the new array, right? Then:
      mkfs.ext4 /dev/mdX


      9 - Change uuid - tune2fs /dev/mdX -U long_uuid --> TO BE DONE ON RECEPTION
      10 - reboot --> TO BE DONE ON RECEPTION

      --> tune2fs /dev/mdX -U 97f93447-ec94-467a-821c-9745c9a46684
      --> reboot


      11 - copy the files back to the new array.

      Did I understand it well?
      Did I miss something?

      If that's OK, I'll follow the steps as soon as I receive it.


      Thanks.
    • That looks good :) You can get the array number (mdX) from the web interface or cat /proc/mdstat. blkid won't show the array until it has a filesystem.
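
      (Illustrative only - the md number, device letters and sizes below are made up; what matters is spotting the mdX name on the left and the [UU] / resync status.)
      # example output of: cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdd[1] sdb[0]
            2930134016 blocks super 1.2 [2/2] [UU]
            [>....................]  resync =  2.5% (73253504/2930134016) finish=240.1min speed=198000K/sec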

      If you get an error or any odd messages at any point, stop. This really should go smoothly (I've done it before) but you never know...
    • What is the output of "lsof | grep /media"? That will tell you what files are in use on the data drives.
    • ryecoaaron wrote:

      What is the output of "lsof | grep /media"? That will tell you what files are in use on the data drives.


      OK, it was a huge list, but once I stopped plexmediaserver it got shorter.


      root@microserver:/media# lsof | grep /media

      python 3678 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3830 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3830 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3831 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3831 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3832 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3832 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3833 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3833 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3834 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3834 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3835 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3835 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3836 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3836 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3837 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3837 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3865 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3865 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3866 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3866 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 3875 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 3875 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      python 3678 4400 plex cwd DIR 8,17 4096 142475306 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Data/com.plexapp.system

      python 3678 4400 plex 3w REG 8,17 31348 142475476 /media/97f93447-ec94-467a-821c-9745c9a46684/plexmediaserver/Library/Application Support/Plex Media Server/Logs/PMS Plugin Logs/com.plexapp.system.log

      bash 5964 siulman cwd DIR 8,17 4096 2 /media/97f93447-ec94-467a-821c-9745c9a46684

      sudo 6015 root cwd DIR 8,17 4096 2 /media/97f93447-ec94-467a-821c-9745c9a46684

      su 6019 root cwd DIR 8,17 4096 2 /media/97f93447-ec94-467a-821c-9745c9a46684

      bash 6020 root cwd DIR 8,1 4096 2621441 /media

      lsof 13259 root cwd DIR 8,1 4096 2621441 /media

      grep 13260 root cwd DIR 8,1 4096 2621441 /media

      lsof 13261 root cwd DIR 8,1 4096 2621441 /media

      root@microserver:/media#
    • Can you umount it after disabling Plex? Make sure no users are in that directory as well. Looks like you ran sudo/su from that directory.
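
      (A possible sequence - plexmediaserver is the usual service name but may differ depending on how the plugin installed it, and /dev/sdb1 is the Master disk from the blkid output above.)
      # stop Plex so it releases its open files on the data disk
      systemctl stop plexmediaserver    # or: service plexmediaserver stop
      # leave the mount point first - a shell sitting in /media/<uuid> keeps it busy
      cd /
      # re-check that nothing still holds files under /media, then unmount
      lsof | grep /media
      umount /dev/sdb1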