Wow, thank you @flmaxey for the very informative response as well as all the resources you linked. You make a very strong case for OMV and I think I will give it a try so I can sunset my DS211j. Thank you as well @geaves for helping me sort through some of my thoughts and questions. The forum support here is fantastic!
Posts by cghill
-
-
Thank you again @geaves, that was another great thread. After reading it I'm leaning a bit toward ditching RAID1 and instead having 2 mounted drives with an rsync running regularly to keep the redundant drive synced to the primary. But I still have a couple questions I was hoping you or @flmaxey (or anybody else) could answer.
So it seems the general process for setting up OMV is: Disks -> File Systems -> Shared Folder -> SMB Share. So, let's anticipate a failure and recovery process on both drives.
Redundant drive: A failure on the redundant drive is simple. You replace the drive, reformat the disk, mount the new filesystem and let rsync do its thing.
Primary: This is more complex since there is already existing infrastructure with dependencies on the failed drive. Obviously it's the same process to replace the drive and reformat, but what do we do about the Shared Folders and SMB shares? Do I have to recreate everything, or is it possible to tell OMV that the existing Shared Folders are now in a different location (different drive, same data)?
I guess the root of my question is this: If I'm going down the 2 volume w/ rsync route, at what point does OMV get in the way a bit? If I were doing this in base Linux I would just create symbolic links for both drives (named primary and redundant or something). Everything (including the SMB shares) would use those symbolic links. Then if the primary drive failed, I would just update the "primary" symbolic link to point to the redundant drive, and point the redundant symbolic link at a newly formatted drive as soon as it's ready. Done. No reconfiguring things in a web console.
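To make the symlink idea concrete, here's a rough sketch. The directory names are made up, and temp dirs stand in for the real mount points so it can be tried anywhere:

```shell
# Stand-ins for the two drives' mount points (hypothetical names; on a
# real box these might be /srv/dev-disk-a and /srv/dev-disk-b).
DISK_A=$(mktemp -d)
DISK_B=$(mktemp -d)
LINKDIR=$(mktemp -d)

# Everything (SMB shares, apps, rsync jobs) points at these links,
# never directly at the mount points.
ln -s "$DISK_A" "$LINKDIR/primary"
ln -s "$DISK_B" "$LINKDIR/redundant"

# Simulate the primary drive dying: promote the redundant drive by
# repointing the link. -f replaces the existing link; -n treats an
# existing symlink-to-a-directory as the link itself rather than
# following it into the directory.
ln -sfn "$DISK_B" "$LINKDIR/primary"

readlink "$LINKDIR/primary"
```

Once a replacement drive is formatted and mounted, the "redundant" link would get repointed at it the same way, and the rsync job carries on untouched.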
It seems OMV is sitting somewhere in the middle between a base Linux server and a Synology NAS. It's much more customizable than Synology (and not tied to their proprietary hardware) but doesn't have the same ease of use (plug-and-play RAID1). Now that I'm moving more towards the "separate drives mirrored w/ rsync" approach I'm wondering what benefits OMV offers over a normal Linux server. From my example above with symbolic links, it seems OMV might actually get in the way a bit by requiring you to do things the "OMV way". I know OMV is growing in popularity and usage, so I'm hoping some more experienced/advanced OMV users can chime in on things that OMV offers that I might be overlooking.
Thanks again for any and all feedback.
-
@geaves Thank you for your reply. I read the forum post you linked and it seems that's the same issue I'm having.
Perhaps I should come at this from a different angle. As I mentioned, I'm doing all this to replace an old Synology DS211j. I actually recently purchased a DS218+ and have set up a dockerized Plex Media Server with Sonarr/Radarr/Deluge etc., but the DS218+ struggles a bit with CPU at times (understandably). I'm a software engineer and comfortable w/ the Linux command line, so I thought I'd repurpose an old PC as a NAS instead of using an underpowered Synology device.
I'm using RAID1 for redundancy (I'm scared to use the words "backup" and "RAID" in the same sentence on this forum). However, I can see that RAID doesn't get a lot of love here. I guess what I'm really looking for is whatever solution is the fastest, most reliable, and easiest way to recover my data in the event of a catastrophic hardware failure. I'm mostly concerned about HDD failures since they're the most common, but since I'm running on older hardware it would be great to also have a solution that can recover the RAID in another system entirely (in the case of a motherboard failure). I've had HDDs fail in my Synology, and it was as simple as swapping out drives and clicking 2 buttons to recover (and resize) the RAID.
So, a question to the experts of this forum: Am I barking up the wrong tree with RAID1? I've thought about ditching RAID1 and just having 2 mounted drives and doing an hourly (or daily) rsync between them. I could even run my applications from a symbolic link to the "primary" mounted drive and have the rsync also use the symbolic links, so in the case of a failure it would be as simple as formatting and mounting a new drive and updating the symbolic links. Or, should I just learn the handful of mdadm commands that are required to administer a RAID in OMV?
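For the hourly sync I'd imagine just dropping a line into cron, something like this (paths are hypothetical, and going through the symlinks means the job survives a drive swap without edits):

```
# /etc/cron.d/mirror-sync (hypothetical file; runs at minute 0 of every hour)
0 * * * * root rsync -a --delete /srv/links/primary/ /srv/links/redundant/
```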
I'm open to any and all suggestions!
-
After scouring the forums, it looks like this could be the same issue as this one: "Missing" RAID filesystem
Some more relevant details: I plugged the 1 TB drive back in, and the RAID showed up again just fine. Then I tried unplugging the 2 TB drive this time, and when OMV booted up the RAID was missing once again. So it seems it happens regardless of the hard drive. If it's relevant, I've included HDD device data of both drives below:
1 TB drive:
Code
root@overlord:~# smartctl -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.18.0-0.bpo.1-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Green
Device Model:     WDC WD10EZRX-00L4HB0
Serial Number:    WD-WCC4J3287648
LU WWN Device Id: 5 0014ee 2b4a0a929
Firmware Version: 01.01A01
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Dec 8 22:25:31 2018 MST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
2 TB drive:
Code
root@overlord:~# smartctl -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.18.0-0.bpo.1-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi Ultrastar A7K2000
Device Model:     Hitachi HUA722020ALA331
Serial Number:    B9G45J3F
LU WWN Device Id: 5 000cca 222c1e634
Firmware Version: JKAOA3NH
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Sat Dec 8 22:29:24 2018 MST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
-
Hey guys, I'm considering moving to OMV as my DS211j is getting a bit long in the tooth. I'm trying to test out RAID management in OMV, as it was super simple on my Synology, and right out of the gate things are a bit concerning:
I built a RAID 1 mirror across 2 disks (a 2 TB and a 1 TB). This went fine. Then I wanted to try "simulating" an HDD failure by removing the 1 TB drive. Ultimately I wanted to test recovering the RAID with another 2 TB drive and see if the RAID automatically grew from 1 TB to 2 TB. Right away things are a bit troublesome, because as soon as I removed the 1 TB drive, the whole RAID went missing (even though the 2 TB drive is still there). How can I recover a RAID if OMV can't find the RAID? I'm a bit disappointed that HDD failures appear to be this troublesome, but perhaps I'm missing something.
Code
root@overlord:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[0](S)
      1953383512 blocks super 1.2

unused devices: <none>
Code
root@overlord:~# blkid
/dev/sdb: UUID="815bd3c7-4d40-f329-de69-3fa5ba7da76e" UUID_SUB="0693a8f6-abcc-3281-9f40-901a87cf9d17" LABEL="overlord:0" TYPE="linux_raid_member"
/dev/sda1: UUID="bd7d8f6a-a330-4b98-8041-9bbf1d361544" TYPE="ext4" PARTUUID="0481b79f-01"
/dev/sda5: UUID="7e27c80f-e717-4816-81ee-e0151793574e" TYPE="swap" PARTUUID="0481b79f-05"
Code
root@overlord:~# fdisk -l | grep "Disk "
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk identifier: 0x0481b79f
Code
root@overlord:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=overlord:0 UUID=815bd3c7:4d40f329:de693fa5:ba7da76e
Code
root@overlord:~# mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=overlord:0 UUID=815bd3c7:4d40f329:de693fa5:ba7da76e
   devices=/dev/sdb
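For what it's worth, from reading the mdadm man page I suspect the array can be forced back together from the one remaining disk. I haven't actually tried this yet, so please treat it as a guess rather than a known fix:

```
# Untested guess based on mdadm(8): stop the inactive array, then
# force-assemble it from the surviving member and run it degraded.
mdadm --stop /dev/md0
mdadm --assemble --force --run /dev/md0 /dev/sdb

# Later, once a replacement disk is installed (device name here is
# just an example), add it back and let the mirror resync:
# mdadm /dev/md0 --add /dev/sdc
```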
Any help is appreciated!