Posts by cghill

    Thank you again @geaves, that was another great thread. After reading it I'm leaning a bit toward ditching RAID1 and instead having 2 mounted drives with an rsync running regularly to keep the redundant drive synced to the primary. But I still have a couple questions I was hoping you or @flmaxey (or anybody else) could answer.


    So it seems the general process for setting up OMV is: Disks -> File Systems -> Shared Folder -> SMB Share. So, let's anticipate a failure and recovery process on both drives.


    Redundant drive: A failure on the redundant drive is simple. You replace the drive, reformat the disk, mount the new filesystem and let rsync do its thing.
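
    For reference, the sync I have in mind is just a plain rsync from the primary mount to the redundant one. A minimal sketch (the mount point names are hypothetical, and temp directories stand in for the real drives so it can be run anywhere):

```shell
#!/bin/sh
# Sketch of the hourly sync job. PRIMARY/REDUNDANT would normally be the
# two OMV mount points (e.g. /srv/dev-disk-by-label-primary); mktemp
# stand-ins are used here so the sketch runs without real drives.
PRIMARY="${PRIMARY:-$(mktemp -d)}"
REDUNDANT="${REDUNDANT:-$(mktemp -d)}"

echo "hello" > "$PRIMARY/file.txt"

# --archive preserves permissions/ownership/timestamps; --delete makes
# the redundant drive an exact mirror (files removed from the primary
# are removed from the redundant copy too).
rsync --archive --delete "$PRIMARY/" "$REDUNDANT/"

cat "$REDUNDANT/file.txt"   # prints "hello"
```

    Dropped into a cron entry pointed at the real mounts, that one rsync line is the whole "mirror" in this approach.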


    Primary: This is more complex since there is already existing infrastructure with dependencies on the failed drive. Obviously it's the same process to replace the drive and reformat, but what do we do about the Shared Folder and SMB shares? Do I have to recreate everything, or is it possible to tell OMV that the existing Shared Folders are now in a different location (different drive, same data)?


    I guess the root of my question is this: If I'm going down the 2 volume w/ rsync route, at what point does OMV get in the way a bit? If I were doing this in base Linux I would just create symbolic links for both drives (named primary and redundant or something). Everything (including the SMB shares) would use those symbolic links. Then if the primary drive failed, I would just update the "primary" symbolic link to point to the redundant drive, and point the redundant symbolic link at a newly formatted drive as soon as it's ready. Done. No reconfiguring things in a web console.
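
    To make the link-flipping concrete, here's roughly what I mean (all paths are made up, and temp directories stand in for the real mounts so the sketch runs anywhere):

```shell
#!/bin/sh
# Stand-ins for the two data drives; the real mounts would be something
# like /srv/dev-disk-by-label-disk1 and -disk2 (hypothetical names).
DISK1=$(mktemp -d)
DISK2=$(mktemp -d)
LINKDIR=$(mktemp -d)

# Everything (shares, apps) points at these links, never at the drives.
ln -s "$DISK1" "$LINKDIR/primary"
ln -s "$DISK2" "$LINKDIR/redundant"

echo "data" > "$LINKDIR/primary/file.txt"

# Disk 1 "fails": promote disk 2 by repointing the primary link.
# -f replaces the existing link; -n stops ln from dereferencing the old
# link and creating the new one *inside* the directory it points to.
ln -sfn "$DISK2" "$LINKDIR/primary"

readlink "$LINKDIR/primary"   # now resolves to disk 2's path
```

    The redundant link would then get pointed at the freshly formatted replacement drive once it's mounted, and rsync carries on as before.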


    It seems OMV is sitting somewhere in the middle between a base Linux server and a Synology NAS. It's much more customizable than Synology (and not tied to their proprietary hardware) but doesn't have the same ease of use (plug-and-play RAID1). Now that I'm moving more towards the "separate drives mirrored w/ rsync" approach, I'm wondering what benefits OMV offers over a normal Linux server. From my example above with symbolic links, it seems OMV might actually get in the way a bit by requiring you to do things the "OMV way". I know OMV is growing in popularity and usage, so I'm hoping some more experienced/advanced OMV users can chime in on things that OMV offers that I might be overlooking.


    Thanks again for any and all feedback.

    @geaves Thank you for your reply. I read the forum post you linked and it seems that's the same issue I'm having.


    Perhaps I should come at this from a different angle. As I mentioned, I'm doing all this to replace an old Synology DS211j. I actually recently purchased a DS218+ and have set up a dockerized Plex Media Server with Sonarr/Radarr/Deluge etc, but the DS218+ struggles a bit with CPU at times (understandably). I'm a software engineer and comfortable w/ the Linux command line, so I thought I'd repurpose an old PC as a NAS instead of using an underpowered Synology device.


    I'm using RAID1 for redundancy (I'm scared to use the words "backup" and "RAID" in the same sentence on this forum). However, I can see that RAID doesn't get a lot of love here. I guess what I'm really looking for is whatever solution is the fastest, most reliable, and easiest way to recover my data in the event of a catastrophic hardware failure. I'm mostly concerned about HDD failures since they're the most common, but since I'm running on older hardware it would be great to also have a solution that allows recovering the RAID in another system entirely (in the case of a motherboard failure). I've had HDDs fail in my Synology, and it was as simple as swapping out drives and clicking 2 buttons to recover (and resize) the RAID.


    So, a question to the experts of this forum: Am I barking up the wrong tree with RAID1? I've thought about ditching RAID1 and just having 2 mounted drives and doing an hourly (or daily) rsync between them. I could even run my applications from a symbolic link to the "primary" mounted drive and have the rsync also use the symbolic links, so in the case of a failure it would be as simple as formatting and mounting a new drive and updating the symbolic links. Or, should I just learn the handful of mdadm commands that are required to administrate a RAID in OMV?


    I'm open to any and all suggestions!

    After scouring the forums, it looks like this could be the same issue as this one: "Missing" RAID filesystem


    Some more relevant details: I plugged the 1 TB drive back in, and the RAID showed up again just fine. Then I tried unplugging the 2 TB drive instead, and when OMV booted up the RAID was missing once again. So it seems this happens regardless of which drive is removed. If it's relevant, I've included HDD device data of both drives below:


    1 TB drive:


    2 TB drive:

    Hey guys, I'm considering moving to OMV as my DS211j is getting a bit long in the tooth. I'm trying to test out RAID management in OMV, as it was super simple on my Synology, and right out of the gate things are a bit concerning:


    I built a RAID 1 mirror across 2 disks (a 2 TB and a 1 TB). This went fine. Then I wanted to try "simulating" an HDD failure by removing the 1 TB drive. Ultimately I wanted to test recovering the RAID with another 2 TB drive and see if the RAID automatically grew from 1 TB to 2 TB. Right away things are a bit troublesome, because as soon as I removed the 1 TB drive, the whole RAID went missing (even though the 2 TB drive is still there). How can I recover a RAID if OMV can't find the RAID? I'm a bit disappointed that HDD failures appear to be this troublesome, but perhaps I'm missing something.


    Code
    root@overlord:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdb[0](S)
          1953383512 blocks super 1.2

    unused devices: <none>
    Code
    root@overlord:~# blkid                                                                                                                                                                        
    /dev/sdb: UUID="815bd3c7-4d40-f329-de69-3fa5ba7da76e" UUID_SUB="0693a8f6-abcc-3281-9f40-901a87cf9d17" LABEL="overlord:0" TYPE="linux_raid_member"                                             
    /dev/sda1: UUID="bd7d8f6a-a330-4b98-8041-9bbf1d361544" TYPE="ext4" PARTUUID="0481b79f-01"                                                                                                     
    /dev/sda5: UUID="7e27c80f-e717-4816-81ee-e0151793574e" TYPE="swap" PARTUUID="0481b79f-05"
    Code
    root@overlord:~# fdisk -l | grep "Disk "                                                                                                                                                      
    Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors                                                                                                                               
    Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors                                                                                                                               
    Disk identifier: 0x0481b79f
    Code
    root@overlord:~# mdadm --detail --scan --verbose                                                                                                                                              
    INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=overlord:0 UUID=815bd3c7:4d40f329:de693fa5:ba7da76e                                                                                   
       devices=/dev/sdb
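
    From reading the mdadm man page, my guess is that recovery would look something like the following, but I haven't actually tried it and would appreciate confirmation before running anything (these commands are untested, need root, and will modify the array; /dev/sdc is a made-up name for a replacement disk):

```shell
# Untested guess at bringing the degraded mirror back up.
# md0 is inactive with /dev/sdb marked as a spare (S), so stop it first:
mdadm --stop /dev/md0

# ...then force-assemble the array from the one remaining member:
mdadm --assemble --force /dev/md0 /dev/sdb

# Once a replacement disk (say /dev/sdc) is installed, add it to the
# mirror and let it resync:
# mdadm --manage /dev/md0 --add /dev/sdc
```

    Is that roughly the right idea, or is there an OMV-specific way to do this from the web UI?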


    Any help is appreciated!