Can't bind/mount RAID5 after reinstall to OMV3

  • Hi guys,


    after reinstalling my NAS with OMV3, I can't bind/mount my old RAID5.


    What I have done so far: after the reinstall my physical disks were shown, but not in Raid
    Management. So I started with the commands from this post to find the issue. The result was that the superblock on the devices had an issue.
    I solved this with the commands below for my devices sda, sdb & sdd.



    Code
    mdadm --zero-superblock /dev/sd[abd]
    and
    dd if=/dev/zero of=/dev/sd[abd] bs=1K count=1024
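
    As a side note, a minimal read-only check like the following (assuming the same device names sda, sdb and sdd) would show what mdadm sees in the superblocks before anything is zeroed:

    Code
    # inspect the md superblock on each member disk (read-only, changes nothing)
    mdadm --examine /dev/sda /dev/sdb /dev/sdd
    # list the arrays mdadm can detect from those superblocks
    mdadm --examine --scan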

    Now I see my RAID5 in Raid Management, but I can't mount it in the file system. The mount button is greyed out.


    So how should I proceed?



    Below is the current status.



    cat /proc/mdstat output


    Code
    root@DoggiNas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdd[0] sda[2] sdb[1]
    5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/22 pages [0KB], 65536KB chunk
    unused devices: <none>


    blkid output


    Code
    root@DoggiNas:~# blkid
    /dev/sdb: UUID="de78e9c6-c5a2-647e-c01c-f1092ad82856" UUID_SUB="b159821d-29ff-751b-d736-63e0bb49cf05" LABEL="DoggiNas:doggi" TYPE="linux_raid_member"
    /dev/sda: UUID="de78e9c6-c5a2-647e-c01c-f1092ad82856" UUID_SUB="7f0aba23-2979-7069-f5ac-a365ec4c5f55" LABEL="DoggiNas:doggi" TYPE="linux_raid_member"
    /dev/sdd: UUID="de78e9c6-c5a2-647e-c01c-f1092ad82856" UUID_SUB="89f2e673-dc6d-e87e-d4f3-13de30e2973d" LABEL="DoggiNas:doggi" TYPE="linux_raid_member"
    /dev/sdc1: UUID="6a490532-3b24-46aa-a584-ae49e6a07abe" TYPE="ext4" PARTUUID="a21f89a7-01"
    /dev/sdc5: UUID="90d69f5d-117d-47ca-81a5-436ef36083e2" TYPE="swap" PARTUUID="a21f89a7-05"


    fdisk -l output


    cat /etc/mdadm/mdadm.conf output


    mdadm --detail --scan --verbose output


    Code
    root@DoggiNas:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/doggi level=raid5 num-devices=3 metadata=1.2 name=DoggiNas:doggi UUID=de78e9c6:c5a2647e:c01cf109:2ad82856
    devices=/dev/sda,/dev/sdb,/dev/sdd
  • OK, thanks.
    I have done this, but the RAID5 is still not available in the filesystem,
    and my device sdc is shown but I can do nothing with it.


    Where am I making a mistake in my thinking?


    By the way, when I restart the NAS, md127 is again shown as active (auto-read-only) :(
    so I repeat mdadm --readwrite /dev/md127.
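
    A minimal way to confirm the state after that command (assuming the md127 device name from above):

    Code
    # switch the array out of auto-read-only mode
    mdadm --readwrite /dev/md127
    # confirm the array state afterwards
    mdadm --detail /dev/md127 | grep -i state
    cat /proc/mdstat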




    mdadm --assemble --scan output

    Code
    root@DoggiNas:~# mdadm --assemble --scan
    mdadm: No arrays found in config file or automatically
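
    That message means /etc/mdadm/mdadm.conf contains no ARRAY definition, so nothing can be assembled from the config file. A minimal sketch of the usual fix, essentially what is done later in this thread:

    Code
    # record the detected array in mdadm.conf, then rebuild the initramfs
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u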
  • Did you look at the output of cat /proc/mdstat to see if it came out of auto-read-only mode? Look at dmesg to check for any errors.

    omv 5.5.6 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.3.5
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • Yes, it came out of auto-read-only mode. Only when I restart the NAS does the RAID go back into auto-read-only mode.


    "dmesg" will I check today in the evening, when I'm back at home.

  • Now I'm back.
    You see, after starting the NAS it is in auto-read-only mode;
    then I type the command "...--readwrite...." and the output shows the RAID as active.



    And here is the output from dmesg
    (range from usb to systemd); the complete log is in the attachment.

    Hm, I don't see any errors, but I don't really know what I should be looking for (:

  • I'm guessing you shut the server down all the time? Do you let it finish syncing once it is in readwrite mode? Not sure why it keeps going back into auto-read-only mode. I would try update-initramfs -u before rebooting too.
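
    If a resync were still running, a rough way to watch it finish (assuming the md127 name used above) would be:

    Code
    # watch resync/recovery progress; it is done when mdstat shows no resync line
    watch -n 5 cat /proc/mdstat
    mdadm --detail /dev/md127 | grep -iE 'state|rebuild'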


  • Normally I don't shut it down; when it is running I use standby mode.


    This is the output from update-initramfs -u:

    Code
    root@DoggiNas:~# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.7.0-0.bpo.1-amd64
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
  • OK, done:

    Code
    root@DoggiNas:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    root@DoggiNas:~# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.7.0-0.bpo.1-amd64
    root@DoggiNas:~#
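
    A quick check that the array definition really landed in the config (a minimal sketch, not part of the original output):

    Code
    grep '^ARRAY' /etc/mdadm/mdadm.conf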
  • It now shows:

    Code
    root@DoggiNas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdd[0] sda[2] sdb[1]
    5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/22 pages [0KB], 65536KB chunk
    unused devices: <none>
  • OMG, I think I have made a big thinking error. Correct me, ryecoaaron, if I'm not right.


    I have set up a new OMV3 system and I want to use my old RAID5 in this new system.
    The new system has no array.
    If I choose my old RAID5 and create a new file system on it, all my data will be erased, correct?




    I thought I could set up a new system and then mount my old RAID5 with all of my data.


    This thread LINK opened my eyes about my issue.



    Is there a way to create a new array with the old RAID5 without losing the data?
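
    A hedged sketch of how the presence of the old filesystem could be checked without touching the data (assuming /dev/md127 and a throwaway mount point /mnt/oldraid):

    Code
    # read-only checks for an existing filesystem signature on the array
    blkid /dev/md127
    wipefs -n /dev/md127
    # if a filesystem is reported, mount it read-only to look at the data
    mkdir -p /mnt/oldraid
    mount -o ro /dev/md127 /mnt/oldraid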

  • Your array should still have the old filesystem on it. If you create a new filesystem, it erases everything. I would try rebooting to see if the array comes back up correctly first. If it does and you haven't created a new filesystem already, post the output of blkid.


  • OK, I have rebooted,
    and this is the output from blkid:

  • Hmm, it is again in auto-read-only mode.


    Code
    root@DoggiNas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdd[0] sda[2] sdb[1]
    5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/22 pages [0KB], 65536KB chunk
    unused devices: <none>
  • It should still show a filesystem. My guess is the filesystem is gone. I would wipe the drives and start over.
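
    If the filesystem really is gone and the decision is to start over, a hedged sketch of the wipe (destructive, device names assumed from this thread):

    Code
    # double-check first that no filesystem signature is left
    blkid /dev/md127
    # stop the array and clear the member disks
    mdadm --stop /dev/md127
    mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdd
    wipefs -a /dev/sda /dev/sdb /dev/sdd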

