New Boot SSD, Reconnect Old HDDs

    • Official Post

    If I can make a suggestion, I would not put the content file on the parity disk, especially if your parity disk and data disks are all the same size (rather than the parity disk being larger). My reasoning is that the content file occupies a relatively small amount of space on each data disk that actual data cannot use, which leaves the parity disk with slightly more available space than the data disks and ensures that you can never "max out" the parity drive.


    However, if you write that content file to all drives, including parity, you could potentially max out the available space of the parity drive. Maybe it's inconsequential with the way SnapRAID is designed; I don't know how the program would react if there were no more space available for the parity file, but it sounds like an easy problem to avoid. Also, with three data disks, you'll probably have enough copies of the content file that a fourth is unlikely to be useful for rebuilding lost data.

    I believe my content file is about 105 MB. Since I'm actually running only one protected disk (I simulated two disks as an example for @curious1), I had to put a content file on the parity disk to have redundancy. 105 MB on a 2.7 TB disk is not much of a hit and, in my case, the parity disk is much larger than the disk being protected.


    But, for those who are stretching the limits of their storage, your point is legitimate and well taken.

    • Official Post

    I do wonder why we would need content on the parity drive. If you could expand on that thought, FLMAXEY, I would appreciate hearing your take.

    As noted in the above post, you caught me. :)
    (I was trying to show you something in the short HOW-TO that made sense for your situation.)
    The content file doesn't have to be on the parity drive, but there need to be at least two copies of it so that losing the only copy to a failed drive is not a possibility. As previously noted, if your single content file is on a failed disk, there's no recovery.
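
    For illustration only, a bare-bones snapraid.conf along those lines might look like the following. The disk labels are placeholders, not anyone's actual setup; the point is simply that content appears on at least two data disks and not on the parity disk:

    # /etc/snapraid.conf (sketch - substitute your own disk labels)
    parity  /srv/dev-disk-by-label-parity1/snapraid.parity
    content /srv/dev-disk-by-label-data1/snapraid.content
    content /srv/dev-disk-by-label-data2/snapraid.content
    disk d1 /srv/dev-disk-by-label-data1/
    disk d2 /srv/dev-disk-by-label-data2/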


    1. Also, there is a Directory Service under Access Rights Management. Will that do anything to help?


    2. Yes, next is backup. Wonder what I'll run into there, LOL. Think I'll get a 6TB drive ... they're cheap enough.

    1. I don't have Directory Service under Access Rights Management. Did you install the LDAP plugin? If you did, unless you're running a small business, have an enormous LAN, or have lots of users, you probably don't need it.


    2. For backing up to an external disk, consider Rsync. The following command line is what I use to replicate one data disk to another. The source is on the left, the destination is on the right. The switches preserve permissions and delete files in the destination that do not exist in the source - in essence, it creates a mirror. (This is why it's important to get the source and destination right. You would not want to mirror an empty disk, deleting all your working files in the process.)


    This is the command line, and it's run as a scheduled job in the GUI, every day, at 05:00 AM:
    rsync -av --delete /srv/dev-disk-by-label-EXT401/ /srv/dev-disk-by-label-EXT402/
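
    Before the first real run, a cautious extra step is Rsync's dry-run switch (-n). It prints what would be copied or deleted without changing anything, which is a cheap way to confirm the source and destination are the right way around (same labels as the example above):

    rsync -avn --delete /srv/dev-disk-by-label-EXT401/ /srv/dev-disk-by-label-EXT402/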



    With the fictitious SNAPRAID example set aside, this is actually how I'm using my backup device:


    I replicate all of my online shares from my main server, over the network, to the first drive of this backup server. The first drive is protected, with checksummed files, and is regularly scrubbed using SnapRAID and a parity drive. Then the protected first disk is fully mirrored to the second disk once a day, using Rsync. (I may increase that interval to allow some time for disaster intervention.)
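
    For reference, the SnapRAID side of that routine comes down to two commands, usually run on a schedule. The scrub percentage below is only an example value, not a recommendation:

    snapraid sync        # update parity and the content files after data changes
    snapraid scrub -p 5  # re-check roughly 5% of the array against its checksums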


    That creates two complete additional copies of my data, provided by a single low-powered device (an Atom processor and 4GB RAM), that could also function as a backup server if needed. That's decent protection. (And I have an ARM backup device as well.)

    • Official Post

    Just a quick note before I'm out of here for the week:


    If I didn't state it before: with the later versions of OMV3 or OMV4, you'll have to customize the Rsync command line example for the names of your source and destination disks. (Replace yourdisk1 and yourdisk2 with your own disk labels.)


    rsync -av --delete /srv/dev-disk-by-label-yourdisk1/ /srv/dev-disk-by-label-yourdisk2/


    The label (yourdisk1) will be what you named the disk when you formatted it. You can find the names for your installed drives under /srv.
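
    For example, listing that directory will show one entry per mounted data drive. The output below is only illustrative; your labels will differ:

    ls /srv
    # dev-disk-by-label-yourdisk1  dev-disk-by-label-yourdisk2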


    If you're not sure how to navigate the command line, take a look at this guide. It has detailed instructions for installing WinSCP on a Windows PC. (See the TOC for WinSCP.) WinSCP will allow you to navigate your server visually, in a manner that is somewhat similar to Windows Explorer.

  • Hi FLMAXEY. Turns out I have a lot of time away from my desk today, but I am taking a break and have a couple of items here.


    • Yes, I had installed the LDAP plug-in so that's probably where the Directory Service came from. I probably thought that might help, but you're right ... after reading the description more carefully, it doesn't help me at all. I'll remove it.
    • I have removed the SnapRAID from the unionfs (I think I already told you that), removed content from the parity disk, and assigned content on all three data drives. If I understand it right, this should give me the protection needed. The only niggling doubt I have is around the discussion that each data disk has unique files, even though the directory structures are identical on all three data disks. So, what I think I'm hearing is that the content file on each disk actually provides content info for all three?
    • For Rsync: I have a 4TB Seagate with a USB connection. Since my entire system is only 4TB, this should suffice. So, once I attach the backup drive via USB, I assume I'll have a drive label. Seems like I would need to format it first? Then your command, "rsync -av --delete /srv/dev-disk-by-label-EXT401/ /srv/dev-disk-by-label-EXT402/", with the proper labels, would work the same way?
    • Because I will be using a USB-connected device, wouldn't it be wise to disconnect it after an Rsync mirror, to protect it from any kind of power surge or other disaster? If that's the right way to do it, isn't there an option for a manual schedule?
    • Just to be sure, is Rsync the best recommendation for me? I assume so, since you recommended it :) .

    That's all I can think of right now.

    • Official Post

    Keyed to the above:


    1. That makes sense.
    2. A single content file is a compilation of ALL files and folders, on all disks, in the SnapRAID array. (Note that in any array, disk content is unique, except with RAID1 and variants of it.) You need at least two copies of the SnapRAID content file - they will be identical, each with a detailed list of the content on all disks - for redundancy in case one copy is lost with a failed disk. Having an extra content file (three in all) won't hurt anything.
    3. Yes, a single 4TB disk should be fine for backup. However, in your case, the command line example would be something like:


    rsync -av --delete /srv/name-of-your-merged-uniondrive/ /srv/dev-disk-by-label-your4TBdrivename/


    This command line will copy the merged contents of your union (the source) to the USB-connected 4TB drive (the destination). Note that you'll have to find out the name and location of the union (simulated) drive. I believe the unionfs plugin mounts it under /srv, but I'm not sure about that.
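
    One way to track it down, assuming the plugin uses a mergerfs-based union (this may differ between OMV versions), is to list /srv and check the mount table:

    ls /srv                   # look for an entry matching the union's name
    findmnt -t fuse.mergerfs  # shows the union's mount point, if mergerfs is in use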
    4. You could do that if you like. "Cold storage" would help protect against power surges and, if you back up every two weeks to one month or so, the external drive should last a long time. I have cold backup storage on a Raspberry Pi with a USB-connected 4TB WD portable drive. I bring it up once every week or two, replicate the shares on the main server, and shut it down.
    In your case, you could set your scheduled job for once a year, just to save the command line in the GUI, and use the "Run" button for a manually triggered job.
    Note that your drive might not be recognized automatically when it's powered on, so you'd have to shut down the server, turn the drive on, boot up, and run the job. It would be wise to shut down the server before turning the drive off as well. That's the downside of doing it this way.
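
    If you do run the mirror manually, a small wrapper script can add one safety check: confirm the backup drive is actually mounted before Rsync runs, so Rsync can't accidentally fill the system drive by copying into an empty mount point. The paths below are only the placeholders carried over from the examples above:

    #!/bin/sh
    # Abort unless the backup drive is really mounted at its expected path.
    DEST=/srv/dev-disk-by-label-your4TBdrivename
    if mountpoint -q "$DEST"; then
        rsync -av --delete /srv/name-of-your-merged-uniondrive/ "$DEST/"
    else
        echo "Backup drive not mounted at $DEST - aborting." >&2
        exit 1
    fi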
    5. There really is no "best way" to do anything - only what one is comfortable with. Rsnapshot might be a better backup method because it provides options to go back in time to previous file states, but it gets into more complexity (a rough sketch follows below). I use ZFS snapshots on my main server, but if you're not familiar with the concepts, snapshots might seem to be a bit much to take in on short notice.
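
    If you ever want to explore Rsnapshot, a bare-bones configuration looks roughly like the following. This is only a sketch: the fields must be separated by tabs, and the paths and retention counts are placeholders to adapt, not tested values:

    # /etc/rsnapshot.conf (sketch - fields are TAB separated)
    config_version    1.2
    cmd_rsync         /usr/bin/rsync
    snapshot_root     /srv/dev-disk-by-label-your4TBdrivename/snapshots/
    retain    daily    7
    retain    weekly   4
    backup    /srv/name-of-your-merged-uniondrive/    localhost/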


    What you're doing with SnapRAID, combined with a full backup to a separate disk, is a solid approach. (And if you decide to ditch the union at some point in the future, in favor of a larger drive, your backup drive will have the consolidated contents of the union on one drive.) The bottom line is that you'll have options and a recovery path in the event of a failure.


    Me? I prefer simplicity, automation, and multiple layers of backup, and it's served me well. I haven't lost anything to a data disaster since coming over to IBM compatibles in the early '90s.
