RAID1 disappeared


    • RAID1 disappeared

      Hi everyone
      I'm recreating the topic because the original one was in the wrong section and also hard to read.

      I bought 2 Seagate Ironwolf 6TB and I created a RAID1.
      After that I immediately shared it and copied 1TB of files on it.
      After that I had to reboot my NAS, and I found that the RAID1 was gone!
I tried to see with gparted what was happening, and I saw that both 6TB hard drives didn't even have a partition; it was as if I had never created an ext4 RAID1.

I checked, as tkaiser suggested, whether my USB key had problems, using f3, and the result was fine.
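(For anyone wanting to reproduce that check: f3 verifies flash media by filling the mounted filesystem with test files and reading them back. A minimal run, assuming the key is mounted at /media/usb — the mount point here is just an example:)

Source Code

f3write /media/usb    # fills free space with test files
f3read /media/usb     # reads them back and reports corrupted/overwritten sectors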

I'll post everything here. The RAID1 that disappeared was made with 2 Ironwolf 6TB drives.
• cat /proc/mdstat

      Source Code

root@Delibird:~# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb[0] sda[1]
      3906887360 blocks super 1.2 [2/2] [UU]
unused devices: <none>
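(For completeness, the running array can also be inspected directly; mdadm --detail prints the state, member devices and event count. A sketch:)

Source Code

mdadm --detail /dev/md127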

      • blkid

      Source Code

root@Delibird:~# blkid
/dev/sda: UUID="ed696fd2-96fe-ba4f-ab44-fb72b800fb01" UUID_SUB="05959c09-ecb2-6cf8-facc-6603333b02f6" LABEL="NAS:Data" TYPE="linux_raid_member"
/dev/sdb: UUID="ed696fd2-96fe-ba4f-ab44-fb72b800fb01" UUID_SUB="47bf0e53-a2c5-2b44-1db5-c0e2eadf7300" LABEL="NAS:Data" TYPE="linux_raid_member"
/dev/sde1: LABEL="Test1" UUID="e2e97456-a32f-4c7b-82f2-8ba5d8320dc1" TYPE="ext4" PARTLABEL="HDDTest" PARTUUID="c3e5ef33-dd1a-46d5-84bc-d66470473ed3"
/dev/sde2: LABEL="Test2" UUID="b34f8189-6a4e-4a02-bcf0-1fc73641d055" TYPE="ext4" PARTLABEL="HDDTest" PARTUUID="dfdbab85-4777-416e-8f09-51e55840336b"
/dev/md127: LABEL="Dati" UUID="8d1d82dc-45af-438d-9c7c-271640aed5b2" TYPE="ext4"
/dev/sdf1: UUID="3ea78407-b370-43c7-ae25-290c365a4927" TYPE="ext4" PARTUUID="a94754ac-01"
/dev/sdf5: UUID="462536b4-33a7-439b-8ed3-13e27998acdb" TYPE="swap" PARTUUID="a94754ac-05"
/dev/sdc: PTUUID="8defa52c-34a0-4c6e-8508-3c922ba3807d" PTTYPE="gpt"
/dev/sdd: PTUUID="36911ee8-885f-4a0e-8662-a913cd447094" PTTYPE="gpt"


      • fdisk -l | grep "Disk "

      Source Code

root@Delibird:~# fdisk -l | grep "Disk "
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk identifier: 8DEFA52C-34A0-4C6E-8508-3C922BA3807D
Disk /dev/sdd: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk identifier: 36911EE8-885F-4A0E-8662-A913CD447094
Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk identifier: 62DA6565-E8B7-4E50-AF25-0DA69B3B5CB4
Disk /dev/md127: 3.7 TiB, 4000652656640 bytes, 7813774720 sectors
Disk /dev/sdf: 14.3 GiB, 15376000000 bytes, 30031250 sectors
Disk identifier: 0xa94754ac
      • cat /etc/mdadm/mdadm.conf

      Source Code

root@Delibird:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md127 metadata=1.2 name=NAS:Data UUID=ed696fd2:96feba4f:ab44fb72:b800fb01
# instruct the monitoring daemon where to send mail alerts
MAILADDR pestotosto@outlook.com
MAILFROM root
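(For reference, the ARRAY line in mdadm.conf is typically regenerated from the live array and then baked into the initramfs; a common sequence on Debian, sketched here under the assumption that the array is currently assembled:)

Source Code

mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append the current ARRAY definition
update-initramfs -u                              # rebuild initramfs so it is used at boot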
      • mdadm --detail --scan --verbose

      Source Code

root@Delibird:~# mdadm --detail --scan --verbose
ARRAY /dev/md127 level=raid1 num-devices=2 metadata=1.2 name=NAS:Data UUID=ed696fd2:96feba4f:ab44fb72:b800fb01
   devices=/dev/sda,/dev/sdb
      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
Just to save others some time: /dev/md127 is made up of sda/sdb, and the 'problem' mdraid that's missing is sdc/sdd. The old thread is here: forum.openmediavault.org/index.php/Thread/20206
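(If anyone wants to check what's left on the 6TB drives, mdadm can read any surviving superblocks non-destructively; a sketch — only attempt assembly if --examine shows intact metadata, and never run --create over drives that held data:)

Source Code

mdadm --examine /dev/sdc /dev/sdd    # look for surviving md superblocks
mdadm --assemble --scan --verbose    # try to assemble whatever was found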

I'm out of this mdraid madness. OMV lacks a sane way to collect support data, as well as warnings in big red letters in the user interface when people are about to start sending their data to /dev/null without knowing what they're doing...
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • Hi
I have the same problem. I have an Odroid XU4 with 2x USB-to-SATA bridge boards hardkernel.com/main/products/p….php?g_code=G145197048960 and 2x 2TB WD Red HDDs. I created a RAID, and after one day of working, after moving some files onto it and rebooting, the whole RAID disappeared. I tried to follow some guides, but it looks like it lost its config: in blkid I see the disks that are members of the array, but there was nothing in mdadm.conf, and /etc/openmediavault/config.xml had also lost the RAID definition. I tried some steps without success. I reinstalled the whole OMV and made the disks separate; it looks like software RAID1 has some real issues.
The only other problem I see is that in the web interface the HDDs are identified by the serial number read from the SATA-to-USB bridge and not by the serial of the HDD itself (for example, if I turn on SMART monitoring on sda, it automatically turns on for sdb as well; maybe this is another issue to post in a separate thread).
Thank you for any advice.
Images
• hdd.png (586×276)
      Raspberry pi 2/3, Odroid XU4 those are my hw to build on.
    • betupet wrote:

I have an Odroid XU4 with 2x USB-to-SATA bridge boards hardkernel.com/main/products/p….php?g_code=G145197048960
You do NOT have these devices, since those use a GL controller and you have JMicron. Besides, playing RAID over USB is dangerous, RAID-1 is absolutely useless here, and you should try to get a clue why configuration changes aren't being written to your installation (rootfs on SD card, true?)
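(A quick way to check whether the rootfs is actually writable and whether changes reach the medium — a sketch; the test file path is arbitrary:)

Source Code

findmnt -no OPTIONS /            # look for rw vs ro in the mount options
touch /root/write-test && sync   # create a file and force writeback
# reboot, then check whether /root/write-test survived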
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
Hi, thank you for the quick response. I use an eMMC card to store the system on; I only moved the MySQL database out to the HDD, and I installed the Flashmemory plugin.
So which hardware would you suggest for putting 2 HDDs inside and having RAID1? I only have USB ports on the Odroid XU4 (no SATA). When I built the RAID, the config was written to the files, but afterwards it disappeared (I think it happened while the system was up; after a restart it did not come up at all, and when I checked the configs the RAID was gone). So I've decided to use only one HDD for now, and the second HDD will be synced with RSnapshot to get some redundancy (backup) of the data.
The only thing that freaks me out is that I see the same serials on both HDDs.
      Raspberry pi 2/3, Odroid XU4 those are my hw to build on.
    • betupet wrote:

So which hardware would you suggest for putting 2 HDDs inside and having RAID1?
None, of course. RAID-1 is IMO a pretty stupid waste of disks, so why should I use it or recommend anything? Use Rsnapshot configured correctly instead; it's way better than primitive RAID-1.
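(For anyone following this advice, a minimal rsnapshot sketch; the labels and paths are examples only, and note that in the real rsnapshot.conf the fields must be separated by tabs, not spaces:)

Source Code

# /etc/rsnapshot.conf (excerpt; fields TAB-separated)
snapshot_root   /srv/dev-disk-by-label-Backup/snapshots/
retain  daily   7
retain  weekly  4
backup  /srv/dev-disk-by-label-Data/    localhost/

# then run from cron, e.g. once per day:
rsnapshot daily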

      betupet wrote:

When I built the RAID, the config was written to the files, but afterwards it disappeared (I think it happened while the system was up; after a restart it did not come up at all, and when I checked the configs the RAID was gone).
That's filesystem vs. physical storage. When stuff appears to be there (filesystem buffers) but disappears after a reboot, then it was never written to disk/eMMC. Though I have no idea why, or how to diagnose it (other than issuing a sync command prior to reboot, which has to happen anyway). You might want to provide the output of 'armbianmonitor -u', then I can have a look.
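(A minimal pre-reboot sequence along those lines — a sketch; the meminfo check just shows whether dirty pages are still waiting to be written back:)

Source Code

sync                                     # flush filesystem buffers to the medium
grep -E 'Dirty|Writeback' /proc/meminfo  # should drop to (near) zero afterwards
reboot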

Same serials with the same USB-to-SATA bridges --> a firmware flaw in the chip. I would assume you use the newer Hardkernel boards that use the JMS578, and this chip urgently needs another firmware update: forum.odroid.com/viewtopic.php?f=97&t=28535#p205745
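(Whether the bridge passes through the drive's real identity can be checked with SAT pass-through; this only works if the bridge supports it. A sketch:)

Source Code

lsusb                          # shows the bridge's USB vendor/product ID
smartctl -i -d sat /dev/sda    # the serial here should be the disk's own, not the bridge's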
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • Hi,
here is the output: sprunge.us/aGJY — but this is the config on the same hardware without RAID enabled; one disk is not initialised yet. Thank you for your help. I read the link about the JMicron firmware, but I don't think I have the spindown issue, and I don't know whether the JMicron firmware can be updated. Sorry, I am only a user with limited skill and knowledge, but I'm still trying to build something. Thank you.
      Raspberry pi 2/3, Odroid XU4 those are my hw to build on.
    • Hi,

      macom wrote:

      Did you have a look at this one?
      Yeah ... after the vacation ... "of course" :D

      Blabla wrote:

I tried to see with gparted what was happening, and I saw that both 6TB hard drives didn't even have a partition; it was as if I had never created an ext4 RAID1.
That's normal, because OMV uses the whole-blockdevice option of md ...
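(In other words, the array is created on the whole block devices, so the members carry no partition table at all. Roughly what that looks like — an illustration only, never run this against disks that still hold data:)

Source Code

# RAID1 across the raw devices, no partitions involved:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mkfs.ext4 -L Data /dev/md0   # the filesystem lives on the md device, not the disks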

But what happened to your RAID1 is unclear:
- blkid shows your (correct) members
- "cat /proc/mdstat" shows the RAID array running
- "mdadm --detail --scan --verbose" also shows correct info
- "cat /etc/mdadm/mdadm.conf" also looks normal

Questions:
- current status?
- which OMV version do you use?

Analysis (so far):
It seems that your RAID1 got struck by two well-known bugs:
- the Debian-related bug of md0 changing to md127 (caused by naming the array)
(this should be the minor error, since autodetect should do the trick ... automatically)
- and (possibly) the bootsector bug of the hard drives, which made the "superblock" disappear after a reboot
(unconfirmed)
- also, the fdisk readings are weird; try sfdisk instead (see the sketch below) ... the drives should show up as 5.5 TiB drives, not 3.7 TiB!
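(A quick cross-check of the reported sizes, as suggested above — a sketch:)

Source Code

sfdisk -l /dev/sdc               # print size and partition table of one drive
lsblk -b -o NAME,SIZE,MODEL      # sizes in bytes for all block devices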

      Hints:
      (especially for @betupet)
- RAID1 is one of the worst RAID levels (right after 0 :P) - here I agree with @tkaiser (use a one-drive config with rsync to the second drive instead, or ZFS)
- never use RAID setups with a PicoPSU and/or USB/SATA adapters; that also rules out every RAID setup on the RPi and other single-board computers

      Sc0rp
    • Thank you Sc0rp and tkaiser,
I redid the whole OMV install from scratch. I am not using RAID now; I will try to set up an rsync job instead. I changed the USB-SATA bridge to another chip brand, which works fine. Now everything looks OK except the SMART values, but that is a problem of the WD Red disks; I receive some counter-increment mails every day, but that doesn't belong in this thread.
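(A plain rsync one-liner in that spirit; the mount points are examples only, and the trailing slashes matter — they sync the contents of the source directory rather than the directory itself:)

Source Code

rsync -aH --delete /srv/dev-disk-by-label-Data/ /srv/dev-disk-by-label-Backup/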
      thx all
      Raspberry pi 2/3, Odroid XU4 those are my hw to build on.