Cannot create RAID, new install

    • OMV 4.x
    • Resolved

      I freshly installed 4.1.14. I have OMV installed on a 2TB drive and there are two other 10TB drives I can see under Disks. I click on RAID Management, click Create, and none of my disks show up... Fairly new to this, so not sure what next steps I should take, and what info you generally would want to know when posting a thread like this.


    • Go to Storage, Disks, select a drive and Wipe each drive. After both disks are wiped, then you'll go into RAID management, Create, etc.

      After your array is created, then you'll need to format it, or Storage, File Systems, Create.
      _________________________________________________

      Are you sure you want to do RAID?

      (And don't forget backup - RAID is NOT backup.)

      Video Guides | New User Guide | Docker Guides | Pi-hole in Docker
      Good backup takes the "drama" out of computing.
      ____________________________________
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup: OMV 4.1.13, Intel Server SC5650HCBRP, 6GB ECC, 16GB USB boot, UnionFS+SNAPRAID
      2nd Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
    • flmaxey wrote:

      Go to Storage, Disks, select a drive and Wipe each drive. After both disks are wiped, then you'll go into RAID management, Create, etc.

      After your array is created, then you'll need to format it, or Storage, File Systems, Create.
      _________________________________________________

      Are you sure you want to do RAID?

      (And don't forget backup - RAID is NOT backup.)
      I had already done a quick wipe and the drives still did not show up when I went to create a RAID, so now I am currently awaiting the long process of a secure wipe to see if that will do the trick.
    • Have you tried skipping the RAID array, and just creating a file system on one or both of the disks?

      If they had a file system on them before, what was it?
      Were the two disks in a RAID array before?

    • flmaxey wrote:

      Have you tried skipping the RAID array, and just creating a file system on one or both of the disks?

      If they had a file system on them before, what was it?
      Were the two disks in a RAID array before?
      If I go straight to File System, it does appear to allow me to create a new FS on both drives. I had previously installed xigmanas and I do believe I had started the process of putting them into a ZFS pool, but xigmanas was not very intuitive, so I wiped the main drive and installed OMV. Secure wipe is almost done with /dev/sdb; will start the secure wipe on /dev/sdc once that is done...

      Can I have two wipe actions going on at once via webgui?
    • First, I'd try to format them to EXT4. Then quick wipe them and try it again.

      telijah wrote:

      Can I have two wipe actions going on at once via webgui?
      I think you can secure wipe two drives at once. You'd start one operation, then open another separate web page into the server (or reload the first). But note that ZFS, LVM and mdadm (software) RAID can set persistent flags on drives.

      A secure wipe should do it.
      If they continue to be stubborn, give the free version of DBAN a try. DBAN wipes almost everything and it starts with the boot sector.
      ________________________________________________________________________

      Others on this forum prefer using dd on the command line. Since you're setting up a new server, other than watching that you don't format your boot disk, there's no risk in trying it.

      First do:
      fdisk -l
      This will give you a list of installed drives.

      The command to erase all locations on the drive with 0's is
      dd if=/dev/zero of=/dev/sd?
      (Where the "?" is the letter of the drive you're wiping.)
      _______________________________________________________


      Another command that is supposed to wipe RAID signatures is
      wipefs --all --force /dev/sd?
      (Where the "?" is the letter of the drive you're wiping.)


      Hopefully something works for you.
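      Putting those commands together, here's a minimal dry-run sketch of the whole wipe sequence. The device name /dev/sdX and the RUN guard are illustrative (not OMV specifics), and the bs/status flags are added for speed and visibility:

```shell
# Dry-run sketch of the wipe sequence above. /dev/sdX is a placeholder:
# set DEV to the disk you intend to wipe -- NEVER the boot disk.
# Confirm the target first with: fdisk -l
DEV=/dev/sdX
RUN=echo   # set RUN= (empty) to actually execute the commands

$RUN wipefs --all --force "$DEV"                      # clear RAID/ZFS/filesystem signatures
$RUN dd if=/dev/zero of="$DEV" bs=1M status=progress  # zero the whole drive (slow on 10TB)
```

      With RUN=echo the script only prints the command lines, which is a safe way to double-check them before committing.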




    • geaves wrote:

      wipefs -n /dev/sd? (? = letter of drive) will show signatures on the drive; booting with the rescue CD (in omv-extras) will allow you to remove a ZFS signature whilst leaving the ext4 in place. Much quicker than a secure wipe on a large drive.
      I think this is what I am looking for... the secure wipe took over 24 hours on the first drive, and if I close the webgui window, I have no way (that I know of) to view the status of the wipe... They're 10TB drives I shucked from a WD easystore I just picked up... I believe the second wipe is still running, because now I can see one of the drives in the Create RAID window (the one where the wipe finished) but do not see the second.
    • geaves wrote:

      wipefs -n /dev/sd? (? = letter of drive) will show signatures on the drive; booting with the rescue CD (in omv-extras) will allow you to remove a ZFS signature whilst leaving the ext4 in place. Much quicker than a secure wipe on a large drive.
      When I do it on sdb, I cannot tell if anything was done; I am just returned to my terminal prompt. When I do it on sdc, I get:


      offset               type
      ----------------------------------------------------------------
      0x9187ffe6000        zfs_member   [filesystem]
                           LABEL: PlexMedia01
                           UUID:  ###############
      After that, Create RAID still doesn't show sdc as an available drive.
    • Since I have you two here... how long does this resyncing usually take? I finally got to create a RAID and chose the two drives, but it has been resyncing for a few hours now and is only at 30%. Is this normal?

      And if you guys could check my logic, I think all that was left was to put a file system onto them once the raid is completed, and then create a share for them so I can start transferring my files from my Synology to it...
    • I'm using ZFS. As I remember it, ZFS was fast to sync a 4TB mirror. I've only set up mdadm RAID in VMs with 5GB (really small) drives. Obviously, that won't compare to 10TB. 10TB is a huge mirror.
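      For watching mdadm sync progress outside the web UI, the kernel exposes it in /proc/mdstat; on a live box you'd run cat /proc/mdstat (or watch cat /proc/mdstat). A sketch parsing a sample status line -- the mdstat text below is made up for illustration, not taken from your system:

```shell
# Hypothetical /proc/mdstat excerpt; on a real system: cat /proc/mdstat
mdstat='md0 : active raid1 sdc[1] sdb[0]
      9766436864 blocks super 1.2 [2/2] [UU]
      [======>..............]  resync = 30.0% (2929931059/9766436864) finish=612.4min speed=189876K/sec'

# Pull out just the progress figure
echo "$mdstat" | grep -o 'resync = [0-9.]*%'   # prints: resync = 30.0%
```

      The finish= field gives mdadm's own estimate of remaining minutes, which answers "is this normal?" better than the GUI percentage alone.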

      After array sync, the process would be:

      Create a file system on the array. (Given the 10TB size, this may take some time as well.)
      Create a Shared Folder
      Use SMB/CIFS to put your new shared folder on the network.
      _______________________________________________

      You could use the Remote Mount Plugin to set up an Rsync job.
      (Remote Mount adds a remote share, to OMV, as if it's a local drive.)

      That process would be:
      Add a new Remote Mount
      - Name it something that shows it's remote, like SYN-Music
      - Server (Synology IP address)
      - Share (Share name)
      You'd need to supply a user name and password with at least read access to the Synology share.

      After it's mounted, you'll see the remote mounted filesystem, in Storage, File systems.

      Then you would create a shared folder of the remote mounted file system:
      Name the shared folder something that indicates it's on a remote server, again, like SYN-music.
      (The path for a shared folder of a remote file system is a single /. When created, you'll have to change the default.)

      Then set up a LOCAL rsync job with the remote Synology share as the source and the OMV share as the destination.
      (Do not try to configure a remote rsync job, in this case. It won't work.)
      ________________________________________________

      The Rsync job will go faster over a 1Gbit link.

      Cheers

    • telijah wrote:

      After that, Create RAID still doesn't show sdc as an available drive.
      The wipefs -n will do nothing more than display the information you provided.

      telijah wrote:

      flmaxey wrote:

      Another command that is supposed to wipe RAID signatures is
      wipefs --all --force /dev/sd?
      This worked, thanks!
      That is interesting, as some have found that the current wipefs doesn't always remove the ZFS signature, hence the use of the SystemRescueCd; possibly the use of --all and --force removed 'all' signatures on the drive.

      A better option would have been to use a single 10TB drive for data and the second 10TB drive as a local rsync backup.
      Raid is not a backup! Would you go skydiving without a parachute?
    • geaves wrote:

      telijah wrote:

      After that, Create RAID still doesn't show sdc as an available drive.
      The wipefs -n will do nothing more than display the information you provided.

      telijah wrote:

      flmaxey wrote:

      Another command that is supposed to wipe RAID signatures is
      wipefs --all --force /dev/sd?
      This worked, thanks!
      That is interesting, as some have found that the current wipefs doesn't always remove the ZFS signature, hence the use of the SystemRescueCd; possibly the use of --all and --force removed 'all' signatures on the drive.
      A better option would have been to use a single 10TB drive for data and the second 10TB drive as a local rsync backup.
      I have a 200GB drive for "data", if you mean the drive that runs OMV... My plan was to use the two 10TB drives in a RAID as the storage for my Plex media. Once the RAID and file system were ready, I had planned to figure out the remote mounts so I could transfer the data from my old Synology box to my new 10TB RAID.

      Then, once that was done, I planned to wipe the two 6TB drives in the Synology, add them to my new NAS box running OMV where the other two 10TB drives are, also put them into a mirror, and then, if I understand this right, planned to put both RAIDs into an LVM(?) so that I can have one 16TB storage location... I am still reading up on all of this as I go along here.
    • You would still have 16TB of storage leaving them as single drives, using a 10TB and a 6TB as rsync backups. If you go the way you have suggested, do you intend on backing this up? Granted, one drive failure in a mirror is fine, but having experienced a drive failure in a mirrored RAID, only for the second drive to go down whilst waiting for a replacement, is something I would not want to happen again.

      A second option might be to use MergerFS (UnionFS) and SnapRAID; @flmaxey might have a view on this.
      Raid is not a backup! Would you go skydiving without a parachute?
    • geaves wrote:

      You would still have 16TB of storage leaving them as single drives, using a 10TB and a 6TB as rsync backups. If you go the way you have suggested, do you intend on backing this up? Granted, one drive failure in a mirror is fine, but having experienced a drive failure in a mirrored RAID, only for the second drive to go down whilst waiting for a replacement, is something I would not want to happen again.

      A second option might be to use MergerFS (UnionFS) and SnapRAID; @flmaxey might have a view on this.
      I don't plan on any comprehensive backup plan, as all four drives will simply store media content, which is not of huge concern to lose in an event such as the one you explained. Once all four drives are in the OMV box, I planned to stick a few smaller drives back in the Synology for personal-cloud type use for important stuff, which will have a better backup setup.

      But I am interested in learning more about my options for combining the two pairs so they appear as a single folder for storage, so I will also wait for @flmaxey's input as well. Thanks again for all your help, you two, @geaves
    • telijah wrote:

      I planned to wipe the two 6TB drives in the Synology, add them to my new NAS box running OMV where the other two 10TB drives are, also put them into a mirror, and then, if I understand this right, planned to put both RAIDs into an LVM(?) so that I can have one 16TB storage location... I am still reading up on all of this as I go along here.
      It's your call, but you may be flirting with disaster. First, drives have a working life of about 4 to 5 years (note there's a LOT of give and take in that figure). The drives in the Synology are not new, so their age alone is something to think about. Mixing old and new drives, well, failure will come from the weakest link. If they're all pooled together, the time to a critical failure may be much shorter than 4 to 5 years and, depending on how you implement your storage, all or a great deal of data may be lost.
      ((BTW: I don't think you're going to be able to "LVM" a RAID1 array (10TB) together with another RAID1 array (6TB). LVM is usually implemented before RAID or a file format is applied. On the other hand, there's a way to merge disks together, but hear me out.))

      I'm not doing a ZFS mirror for "redundancy". I see it as a single drive because, functionally, that's what it is. I'm doing the zmirror (the ZFS version of RAID1) for "bitrot" protection only. In any case, with mdadm RAID1 (this is what you have), if anything goes wrong on the first drive, it's instantly replicated to the second drive. The scenarios of drive failure where RAID1 might work are few. The scenarios where there is substantial loss of data or outright failure (both drives) are many. Frankly, and this is my opinion, there really is no good reason to use mdadm RAID1 at home.

      So, as @geaves has already mentioned, why not get real backup with rsync? You can have a second full copy of all data, with the drives you already have. Also, abandoning the Synology may not be the best idea either. Two full copies of data, on two different platforms, is ideal for backup. Remote mount can help you achieve that.

      If you want more info on backing up using Rsync, you'll find drive-to-drive copying, using Rsync, in this User Guide. An Rsync copy is better than RAID1, and restoration is easy. This is explained in the guide.
      ______________________________________________

      Otherwise, you can merge drives together using the UnionFS plugin. UnionFS will merge two or more drives into a common mount point, and you can even add a drive anytime you like. If this is the route you want to go, a bit of safety for the collection of drives can be achieved using SNAPRAID (which is also a plugin).
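      For reference, under the hood the UnionFS plugin drives mergerfs, and a pool boils down to a single fstab line along these lines (the disk labels and mount point here are hypothetical; the plugin writes the real entry for you):

```
# /etc/fstab sketch: pool two data disks into one mount point via mergerfs.
# Branch paths and labels are examples only; adjust to your own disks.
/srv/dev-disk-by-label-data1:/srv/dev-disk-by-label-data2  /srv/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0 0
```

      The category.create=mfs policy places new files on the branch with the most free space, which is a common choice for media pools.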
      _______________________________________________

      Since you have 4 drives, there's lots of ways to protect your data. Give it some thought, but remember RAID is NOT backup. There are plenty of horror stories, in this forum and elsewhere, about data lost in RAID arrays.

      Again, your call.



    • OK, well let me explain this and see what your suggestion is...

      As mentioned, this OMV box will only be used to house the drives which will be holding nothing but media for my Plex server. As mentioned, I am not too terribly concerned about data loss in the event of a drive failure, but was looking for the smallest bit of redundancy I suppose. That was why I was simply going to have the two 10TB drives mirror and two 6TB drives mirror, and the only redundancy I have is if one drive goes down, I swap in a new one and let it mirror again (assuming the second drive doesn't go down in the event @geaves mentioned).

      That being said, which way do you suggest then? I would still like to put all four drives into the OMV box. If I put all four drives into a configuration to where it seems I have 32TB total space, what happens in the event I lose one drive? Do I lose the data across all four? If your answer is to first read the guide you posted, I will go do that first because honestly, I have not done a whole lot of research and I definitely need to "RTFM" :)