Posts by ITfactotum

    Hi,

    I have been searching for a while for guides on what you mention that are specific to using SnapRAID and UnionFS in OpenMediaVault.

    Lots of resources, even the SnapRAID documentation of course, only cover use in a generic Linux environment and sadly don't translate well to OpenMediaVault and its GUI etc.


    What I don't want to do is start trying things I find and mangle my OpenMediaVault config so badly that I'm left with no choice but to start again.

    I have salvaged a USB 3.0/SATA adapter from a shucked drive and have a 4TB disk connected to the system, but am uncertain how to do the following:

    Clone the disk, grow the filesystem, and make any preparations in the OMV GUI before physically replacing the disks.


    I have read contradictory things and am rather apprehensive about screwing it up.

    I'm looking into how to use dd as you said, but I also noticed that re-labelling the disk seems to be advisable? Something to do with a newer version mounting by label rather than UUID? I may have confused myself further!
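
    From what I've read so far, the dd route looks roughly like this; a sketch only, where the device names (/dev/sdX, /dev/sdY) and the partition number are placeholders, not my actual disks:

        # Identify the disks first -- getting if= and of= backwards destroys data!
        lsblk -o NAME,SIZE,LABEL,MOUNTPOINT

        # Clone the old disk onto the (larger) new one, byte for byte.
        dd if=/dev/sdX of=/dev/sdY bs=64M status=progress conv=noerror,sync

        # If the filesystem sits in a partition, grow the partition to fill
        # the new disk, then grow ext4 into it (run while unmounted).
        parted /dev/sdY resizepart 1 100%
        e2fsck -f /dev/sdY1
        resize2fs /dev/sdY1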

    I have finally resolved part of the above issue by replacing the 2 parity drives with 8TB drives.

    I now need to go through and replace 2 of the 2TB drives with 4TB drives.

    Then, in future, my intention is to replace one 4TB drive at a time with an 8TB drive.


    Given the speed with which I have filled 18TB of capacity, I don't foresee needing more than that before I can afford to replace most of the setup entirely.


    My issue is that I do not know the ideal order/procedure to replace these drives without making excess work for myself.

    I know that it is viable to back up everything, remove the shares, back up the configs (SnapRAID, UnionFS, everything), build the filesystems from scratch again with the new drives, and copy the files back from backup.

    But is it possible to copy the data from the disk being replaced over to a replacement disk, and then swap that disk in so that the system only has to update the device ID and free space etc., rather than having to rebuild the disk from parity?
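
    In other words, something like this, if copying rather than rebuilding is viable (mount points are made up here, based on OMV's usual /srv/dev-disk-by-label-* layout):

        # Mount the new, freshly formatted disk somewhere temporary.
        mkdir -p /mnt/newdisk
        mount /dev/sdY1 /mnt/newdisk

        # Copy everything across, preserving permissions, owners, timestamps,
        # hard links and extended attributes; the trailing slashes matter.
        rsync -aHAX --info=progress2 /srv/dev-disk-by-label-data3/ /mnt/newdisk/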


    Just want to know from people with experience what the ideal method is.


    Kind Regards

    Ok, thanks.

    I'll check with that forum on the drives used for parity before I make any changes.

    ..... NVM, found it straight away on the forum. Oddly, the plugin in OMV doesn't complain if you try to use 2 different-sized drives for parity. It seems I must have read up or realised it wasn't right to use mismatched drives for parity, and never set it up that way!!


    I have 2x 4TB drives in parity, and 4x 4TB plus 2x 2TB in data, for 17.9TB total in the merge.


    As much as I don't really want to use these 8TB drives for parity (they are shucked Seagate Desktops, so presumably SMR drives), I guess it's tough!

    I'll gain 4TB in the datastore and 2 fresh 8TB parity drives, meaning I can replace any of the 4TB drives with 6TB or 8TB ones when I can afford more.


    Thanks for the assist. I assume it should be as simple as removing one of the drives from SnapRAID, turning off SMART monitoring for that drive, then pulling it. Provision the new drive, add it to SnapRAID, and sync.

    Then repeat for the second one?
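
    Roughly this at the command line, if I understand the manual right; a sketch only, and I'd check the entry names in /etc/snapraid.conf before running anything:

        # After mounting the new 8TB disk and pointing the parity entry at it,
        # a sync recomputes parity from the data disks onto the fresh drive.
        snapraid sync

        # Optionally read back a portion of the array to confirm it is consistent.
        snapraid scrub -p 10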

    Hi,


    Short answer. No idea.
    All I knew when I started was that SnapRAID could be used with any size and drive combo, and that a parity drive should be as big as or bigger than the largest data drive in the array. I knew multiple parity drives meant more resilience, and nothing I could see said parity drives had to be the same size. So, as I had 8 bays and started with 6x 4TB drives and 2x 2TB, I thought it was best to use one of the 2TB drives for data and one for parity, to have 2 parity drives?


    Please let me know if not, and I'll reconfigure the whole array and restore from backup.

    What would be the best setup for 6x 4TB + 2x 8TB?


    I have a 6TB WD Blue in my desktop that I'm tempted to drop into this server too, so that would give me:

    What would be the best setup for 5x 4TB + 2x 8TB + 1x 6TB?


    Thanks

    Hi All,


    Just wanted to get some advice from people who have done these things many more times than I have.


    Current config is an R510 with 8x 3.5" drives formatted in ext4.
    - SnapRAID configured through the GUI as:

    5x 4TB & 1x 2TB as Content/Data disks

    1x 4TB & 1x 2TB as Parity disks

    - Data disks are joined into a single datastore with the Union Filesystems plugin, using the Most Free Space policy.
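
    For reference, I believe the plugin generates an /etc/snapraid.conf along these lines (the labels and disk names below are guesses, not my exact mounts):

        # Two parity disks, one per parity level.
        parity   /srv/dev-disk-by-label-parity1/snapraid.parity
        2-parity /srv/dev-disk-by-label-parity2/snapraid.parity

        # Content files (array metadata), kept on more than one disk for safety.
        content /srv/dev-disk-by-label-data1/snapraid.content
        content /srv/dev-disk-by-label-data2/snapraid.content

        # Data disks, one entry per mount.
        data d1 /srv/dev-disk-by-label-data1
        data d2 /srv/dev-disk-by-label-data2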


    I have just bought 2 new 8TB drives and intend to replace the 2x 2TB drives with them.
    I believe a straight swap of these two disks is best, as I believe a parity disk should always be the same size as or larger than the largest data disk, right?


    If these suppositions are correct, my question is this: how best to replace these disks?


    1. a. Shut down, replace 1 of the 2TB data disks, boot, remove the "broken" disk from the SnapRAID config, wipe, format and mount the new 8TB disk, add the new disk to SnapRAID as data/content, run fix (see the sketch after this list).

    1. b. Repeat the above with the 2TB parity disk.


    2. As 1, but in reverse order: parity first.


    3. a. Nuke the whole SnapRAID config, same with UnionFS, replace the 2x 2TB with the 2x 8TB, wipe and initialise all drives, re-create SnapRAID and UnionFS.
    3. b. Recreate the shares, and then restore all files to the new UnionFS datastore from my 2x 12TB USB 3.0 external drives.
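
    For option 1a, if I'm reading the manual's recovery section right, "run fix" would boil down to something like this at the CLI (the disk name d6 and the log path are placeholders, not entries from my actual config):

        # Rebuild the new, empty data disk from parity plus the other data disks.
        snapraid -d d6 -l /root/fix-d6.log fix

        # Audit the rebuilt files (checks file data only, ignoring parity).
        snapraid -d d6 -a check

        # Bring parity back in line with the array.
        snapraid sync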


    I am uncertain which of the options above is best in terms of stress on the hardware, time taken to restore, time spent doing config, etc.


    What does anyone recommend? Is there a better way? If not, which option is best in your experience?


    Thanks in advance for any help.

    So I guess my pointing out in my original post that the manual didn't make sense to me, as it covers SnapRAID with no mention of how this is handled in OMV, didn't get noticed.


    I have read that manual and that part, but sadly I have little experience managing things that way.


    So far I have done the following, just as it makes sense and seems logical:


    - swapped the dead disk with a temp replacement.
    - booted back into OMV and ran Wipe from the Disks tab.
    - ran Create from the Filesystems tab, then mounted the new disk.
    - removed Parity_2 from the SnapRAID tab, as this is the entry that pointed to the dead drive.
    - added the replacement disk/filesystem as a new Parity_2 entry in SnapRAID.


    What I am uncertain of is whether my next step should be sync or fix.
    Based on the manual I expect running a fix is the next step, as that's what's next in the manual, but as the OMV GUI has the option to add logging to the Fix command etc., I wanted to see if anyone has done it like that before?
    I also wanted to leave a more detailed log of what has been done to (hopefully) successfully get through this, so that the next newbie who searches for a fix finds some instructions without having to ask.
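
    If it helps the next person: my reading of the manual is that the fix for a replaced second parity disk looks like this from a shell (the name 2-parity is an assumption; the OMV plugin may name the entry differently in /etc/snapraid.conf):

        # Reconstruct the replaced parity disk's contents from the data disks,
        # writing a log so there is a record of what was repaired.
        snapraid -d 2-parity -l /root/fix-parity.log fix

        # Afterwards, a sync accounts for anything added since the last sync.
        snapraid sync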


    Thanks


    Edit; ooof, if this works it's going to take 23 hours to complete... I expect that may be right; it's like doing an initial sync on a 15TB datastore, I guess....
    My worry now is this... as I hadn't synced in over a week and added a load of files during that time, if a content drive had failed I would have lost data for sure. BUT in this case, with the failing drive being a parity drive, surely it's just reconstructing the parity data for the dead drive from the 6 data drives, right? Like a fresh sync? Not rolling back all 6 data drives to the state recorded in the 1 remaining parity drive? Because if it's doing that, I may as well wipe the array, start from scratch, and copy from the USB backup. Anyone know?

    I have a spare 4TB drive and a new one on the way, and want to first replace the dead drive in my OMV server.
    But as the SnapRAID docs do not mention OMV, and I only know how I initially set things up there, I am unaware of the correct way to proceed.
    As the drive is parity, if it has no content file (I'll check when I get home), can I just swap out the drive?
    I expect not; I expect that I need to tell SnapRAID that the drive itself is gone before adding a new one and rebuilding, but am unsure.
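
    In the meantime, I assume I can at least confirm which entry points at the dead drive by looking at the config the plugin generates (assuming it lives at /etc/snapraid.conf, as it seems to by default):

        # List the parity, content and data entries the plugin has written.
        grep -E '^(parity|[0-9]+-parity|content|data)' /etc/snapraid.conf

        # Cross-reference those mount paths against the actual devices.
        lsblk -o NAME,SIZE,LABEL,MOUNTPOINT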


    Surely someone has swapped a drive out while using SnapRAID in OMV?

    Hi,


    I have shut the system down and brought it back online; the drive remains unresponsive. I haven't really edited any scripts when setting up this OMV server, only used the GUI and a few plugins.
    This is my first foray into this type of system, and I have no real Linux experience other than Mint and Ubuntu, so I haven't had to do much of anything I couldn't copy and paste from a forum.


    My SnapRAID setup was done entirely in the GUI, and I've simply run a sync every week or so; I found conflicting info on the best schedule, so I left it manual until I started to understand more.
    Sadly this drive has failed very quickly, before I was prepared for it; thankfully it seems to be a parity disk. And my USB backup ran when I rebooted the server, so I should have a clean backup of the shares as well.


    OMV was fine for an amateur like me to set up from a few forums and YouTube videos, but I'm a bit anxious about pulling the drive and mangling the whole datastore with shell commands I don't fully understand.
    I'd also like to make adjustments to the config so I have warning next time, and a better SnapRAID and USB backup schedule, etc.


    Hoping someone doesn't mind helping a first-timer out :) I'm not sure how much of the SnapRAID guide relates to how I would actually perform the tasks or set things up in OMV itself??

    Hi All,
    My current setup is an R510 with 6x 4TB SAS/SATA HDDs and 2x 2TB SAS/SATA HDDs. OMV 4 is running from a 128GB SSD.
    2x 4TB HDDs are devoted to SnapRAID parity, with 2x 2TB and 4x 4TB as data drives. The total available space in the SnapRAID datastore is combined with UnionFS into a single datastore. 4x SMB shares live on the datastore.
    Tonight I noticed an odd sound, visually checked the server, and saw that a single HDD light was flashing repeatedly when there should have been little activity. I went to the GUI and it would not enter the Disks, SMART, or Filesystems tabs; attempting this caused a communication failure.


    Syslog shows this give or take the first line, repeated over and over:



    As the system was beginning to get unresponsive, I opted to try to free the GUI from what I read online may be a communication failure (a command had been sent prior to the drive possibly failing, and OMV was timing out waiting for the response?)... I rebooted from the GUI.
    After the restart (and a minor coronary!) the GUI was working again, but I'm still unable to load the SMART or Disks tabs. I was able to load the SnapRAID tab enough to check/edit the names of the disks in each of the data and parity sections, and it appears that it's one of the two 4TB drives used for parity that's died/dying.
    As such, the USB backup that runs each time the disk is connected (so it started on reboot) doesn't worry me as much as I initially thought (I was super worried it would sync lots of corrupted changes).
    I've been stupid: I've added files over the last week and not synced in 8 days (that I know of; I don't know how to check if something is scheduled, as things seem to happen on the server that I can't find GUI schedules for!!)
    My questions are: how do I confirm what's wrong with the unresponsive disk (it's a Dell Constellation 4TB SAS), given that I can't run a SMART check? I assume it's completely toast?
    If the above is correct, should I shut down ASAP and remove the drive so the system doesn't thrash?
    My tiny brain is looking at http://www.snapraid.it/manual and isn't sure how to proceed with replacing the drive from the OMV GUI; is that possible?
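
    Is it something like the following from a shell? (The device name is a placeholder; I'd identify the suspect drive via lsblk or the serial on its label first.)

        # Query the drive's health directly; SAS drives may need -d scsi.
        smartctl -a /dev/sdX
        smartctl -d scsi -a /dev/sdX   # if the first form can't identify the device

        # A drive that has dropped off the bus may not even appear here.
        lsblk -o NAME,SIZE,MODEL,SERIAL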


    Thank you for any assistance in advance, sorry for the long post.


    ** edit - can't access Disk or SMART tabs - Error: