Raid 1 missing

    • Raid 1 missing

      After removing one of the disks (sdc) and mounting it on a different computer, my RAID 1 array no longer appears in the web GUI under RAID Management now that the disk is back in my server.


      cat /proc/mdstat

      Source Code

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : inactive sdb[0](S)
            2930135512 blocks super 1.2
      unused devices: <none>

      blkid

      Source Code

      /dev/sda1: UUID="b40a2842-935f-424f-be3c-8eff091e7a4b" TYPE="ext4" PARTUUID="326a2e34-01"
      /dev/sda5: UUID="31ed5f71-3b23-4db7-8d75-a7aefa980882" TYPE="swap" PARTUUID="326a2e34-05"
      /dev/sdb: UUID="9481f885-48b8-5b55-b930-b86b3e152e6c" UUID_SUB="1350e551-f6bc-41e3-f2c4-39f54de07cf5" LABEL="openmediavault:Raid1" TYPE="linux_raid_member"
      /dev/sdd1: LABEL="1TBDrive" UUID="0163461d-12a9-47eb-91ea-f7e3f245aaeb" TYPE="ext4" PARTUUID="8e15da09-0f0e-4dda-8fe9-323cb7a9272a"

      fdisk -l | grep "Disk "

      Source Code

      Disk /dev/sda: 29.5 GiB, 31675383808 bytes, 61865984 sectors
      Disk identifier: 0x326a2e34
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk identifier: A2B116B8-5357-45A5-9483-9C88D1D51CFD
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors

      cat /etc/mdadm/mdadm.conf

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=openmediavault:Raid1 UUID=9481f885:48b85b55:b930b86b:3e152e6c

      mdadm --detail --scan --verbose

      Source Code

      INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=openmediavault:Raid1 UUID=9481f885:48b85b55:b930b86b:3e152e6c
         devices=/dev/sdb

      As far as I can tell, sdb is still listed as a RAID member. However, trying to rebuild the array (by adding sdc) gives the following error:

      Source Code

      root@openmediavault:~# mdadm --assemble /dev/md0 /dev/sdb /dev/sdc/
      mdadm: /dev/sdb is busy - skipping
      mdadm: cannot open device /dev/sdc/: Not a directory
      mdadm: /dev/sdc/ has no superblock - assembly aborted

      Keep in mind that I'm new to OMV and mdadm, in case the above command is not how you are supposed to do it :P . I tried following tips found here, but to no avail. Any help is appreciated!
    • You need to stop md0 before assembling it. I would use the force and verbose flags when assembling.
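
      That is, something along these lines (device names taken from your output above; adjust if needed):

      Source Code

      # stop the inactive array first, then reassemble it from both members
      mdadm --stop /dev/md0
      mdadm --assemble --force --verbose /dev/md0 /dev/sdb /dev/sdc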
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Cool! Didn't know that. When I try that I get:

      Source Code

      root@openmediavault:~# mdadm --stop /dev/md0
      mdadm: stopped /dev/md0
      root@openmediavault:~# mdadm --assemble --force --verbose /dev/md0 /dev/sdb /dev/sdc/
      mdadm: looking for devices for /dev/md0
      mdadm: cannot open device /dev/sdc/: Not a directory
      mdadm: /dev/sdc/ has no superblock - assembly aborted

      sdc - "not a directory". Do i need to create a filesystem first on sdc or something? Also it seems the superblock is missing. I know that there is a --zero-superblock flag, however no entirely sure how to use it. Thanks for the help so far!
    • Ah, rookie mistake :/ (the trailing slash on /dev/sdc).

      However, now I'm getting this:

      Source Code

      root@openmediavault:~# mdadm --assemble --force --verbose /dev/md0 /dev/sdb /dev/sdc
      mdadm: looking for devices for /dev/md0
      mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
      mdadm: no RAID superblock on /dev/sdc
      mdadm: /dev/sdc has no superblock - assembly aborted
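
      Edit: I take it whatever RAID metadata was on sdc got overwritten while it was in the other computer? Examining the disk directly should confirm that, I guess:

      Source Code

      # presumably reports that no md superblock is detected on the disk
      mdadm --examine /dev/sdc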
    • In that case, I usually wipe the drive and re-add it; the array will have to resync. I would also think about dropping the RAID idea and using rsync or something else to keep the two drives in sync.
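
      Something like this should work (a rough sketch, assuming sdb is the good disk and the array is /dev/md0):

      Source Code

      mdadm --zero-superblock /dev/sdc           # clear any stale metadata on sdc
      mdadm --stop /dev/md0
      mdadm --assemble --run /dev/md0 /dev/sdb   # start the array degraded
      mdadm /dev/md0 --add /dev/sdc              # re-add sdc; a full resync follows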
    • So wipe sdc or sdb? In other words, there is no way to sync those two drives back to how they were? I'm not sure I understand the point of RAID 1 then ?( . If sdb is dependent on something on sdc, wouldn't that defeat the purpose of RAID, in that I cannot restore it after a potential drive failure? Oh well, the data wasn't all that important anyway; I guess I'll wipe and start fresh. Thanks for the help!
    • valentin wrote:

      So wipe sdc or sdb?
      Wipe sdc since it doesn't know that it is a member of the array anymore.

      valentin wrote:

      there is no way to sync those two drives back to how they were?
      That is what assembling does.

      valentin wrote:

      I'm not sure I understand the point of RAID 1 then
      It is about availability and redundancy, but if something is wrong with a drive, it needs to be fixed or replaced, and then a resync has to happen. If you are trying to use RAID as a backup in case a drive fails, that is the wrong use.

      valentin wrote:

      If sdb is dependent on something on sdc, wouldn't that defeat the purpose of RAID, in that I cannot restore it after a potential drive failure?
      sdb isn't dependent on sdc, but the array is warning you that there is a problem. You can start a degraded array with one drive, but it can be dangerous. In your case, I would start the array in degraded mode with just sdb and wipe/format the other drive with a plain filesystem. Then rsync the data to it. Then stop the array and format the first drive the same way. Finally, set up an rsync job to keep the drives in sync.
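
      Roughly like this (paths and labels are only examples; adapt them to your setup):

      Source Code

      # start the array degraded with the good disk
      mdadm --stop /dev/md0
      mdadm --assemble --run /dev/md0 /dev/sdb

      # wipe the other disk and give it a plain filesystem
      mdadm --zero-superblock /dev/sdc
      mkfs.ext4 -L DataCopy /dev/sdc
      mkdir -p /mnt/datacopy
      mount /dev/sdc /mnt/datacopy

      # copy everything off the array (source mount point is an example)
      rsync -aH /srv/raid/ /mnt/datacopy/

      Then stop md0, format sdb the same way, rsync the data back, and schedule a recurring rsync job (OMV can do this from the web GUI) to keep the two drives in sync.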