Can't mount RAID5 after reinstall to OMV3

  • Hi guys,


    after reinstalling my NAS to OMV3, I can't mount my old RAID5.


    Here is what I did: after the reinstall, my physical disks were shown, but nothing appeared in Raid
    Management. So I started with the commands from this post to find the issue. The result was that the superblock on the devices had a problem.
    I solved this with the commands below for my devices sda, sdb & sdd:



    Code
    mdadm --zero-superblock /dev/sd[abd]
    and, once per device,
    dd if=/dev/zero of=/dev/sdX bs=1K count=1024
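
    For reference, whether an md superblock is still present on a member disk can be checked non-destructively before zeroing anything; a minimal read-only check with the same device names:

    Code
    # print the md superblock (if any) found on each member disk; read-only
    mdadm --examine /dev/sd[abd]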

    Now I see my RAID5 in Raid Management, but I can't mount it under File Systems; the mount button is greyed out.


    So how should I proceed?



    Below is the current status.



    cat /proc/mdstat output


    Code
    root@DoggiNas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdd[0] sda[2] sdb[1]
          5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    unused devices: <none>


    blkid output


    Code
    root@DoggiNas:~# blkid
    /dev/sdb: UUID="de78e9c6-c5a2-647e-c01c-f1092ad82856" UUID_SUB="b159821d-29ff-751b-d736-63e0bb49cf05" LABEL="DoggiNas:doggi" TYPE="linux_raid_member"
    /dev/sda: UUID="de78e9c6-c5a2-647e-c01c-f1092ad82856" UUID_SUB="7f0aba23-2979-7069-f5ac-a365ec4c5f55" LABEL="DoggiNas:doggi" TYPE="linux_raid_member"
    /dev/sdd: UUID="de78e9c6-c5a2-647e-c01c-f1092ad82856" UUID_SUB="89f2e673-dc6d-e87e-d4f3-13de30e2973d" LABEL="DoggiNas:doggi" TYPE="linux_raid_member"
    /dev/sdc1: UUID="6a490532-3b24-46aa-a584-ae49e6a07abe" TYPE="ext4" PARTUUID="a21f89a7-01"
    /dev/sdc5: UUID="90d69f5d-117d-47ca-81a5-436ef36083e2" TYPE="swap" PARTUUID="a21f89a7-05"


    fdisk -l output


    cat /etc/mdadm/mdadm.conf output


    mdadm --detail --scan --verbose output


    Code
    root@DoggiNas:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/doggi level=raid5 num-devices=3 metadata=1.2 name=DoggiNas:doggi UUID=de78e9c6:c5a2647e:c01cf109:2ad82856   
    devices=/dev/sda,/dev/sdb,/dev/sdd
    • Official post

    md127 : active (auto-read-only)

    Here is your problem. mdadm --readwrite /dev/md127
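
    That should flip it out of auto-read-only. To apply and verify, assuming the array really is md127:

    Code
    # switch the array to read-write
    mdadm --readwrite /dev/md127
    # confirm it no longer shows (auto-read-only)
    cat /proc/mdstat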

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • OK, thanks. I have done this, but the RAID5 is still not available under File Systems,
    and my device sdc is shown but I can't do anything with it.


    Where is my thinking going wrong?


    By the way, when I restart the NAS, md127 is again active (auto-read-only) :(
    so I repeat mdadm --readwrite /dev/md127.




    mdadm --assemble --scan output

    Code
    root@DoggiNas:~# mdadm --assemble --scan
    mdadm: No arrays found in config file or automatically
    • Official post

    Something is wrong with the array. I will have to ask more questions in a bit...


    • Official post

    Did you look at the output of cat /proc/mdstat to see if it came out of auto-read-only mode? Look at dmesg to check for any errors.
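
    To narrow dmesg down to the relevant messages, something like this should do (md127 taken from your mdstat output above):

    Code
    # show only kernel messages mentioning the array or raid
    dmesg | grep -iE 'md127|raid'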


  • Yes, it came out of auto-read-only mode. Only when I restart the NAS does the RAID go back into read-only mode.


    "dmesg" will I check today in the evening, when I'm back at home.

  • Now I'm back.
    As you can see, after starting the NAS it is in auto-read-only mode;
    then I run mdadm --readwrite /dev/md127, and it shows as an active RAID.



    And here is the output from dmesg,
    the range from usb to systemd;
    the complete log is in the attachment.

    Hm, I don't see any errors, but I don't really know what I should be looking for :)

    • Official post

    I'm guessing you shut down the server all the time? Do you let it finish syncing once it is in readwrite mode? I'm not sure why it keeps falling back into auto-read-only mode. I would try update-initramfs -u before rebooting too.
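
    To check whether a resync is still running before you shut down or suspend, something like:

    Code
    # refresh the array status every two seconds
    watch cat /proc/mdstat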


  • Normally I don't shut down; when it's running I use standby mode.


    This is the output from update-initramfs -u:

    Code
    root@DoggiNas:~# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.7.0-0.bpo.1-amd64
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    • Official post

    Then do:

    Code
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
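
    The appended entry should match the scan output from earlier in the thread, so /etc/mdadm/mdadm.conf should end up with a line like this (the exact fields vary with the mdadm version):

    Code
    ARRAY /dev/md/doggi level=raid5 num-devices=3 metadata=1.2 name=DoggiNas:doggi UUID=de78e9c6:c5a2647e:c01cf109:2ad82856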


  • OK, done:

    Code
    root@DoggiNas:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    root@DoggiNas:~# update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.7.0-0.bpo.1-amd64
    root@DoggiNas:~#
    • Official post

    Good. What is the output of: cat /proc/mdstat


  • It says:

    Code
    root@DoggiNas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdd[0] sda[2] sdb[1]
          5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>
  • OMG, I think I have made a big mistake in my thinking. Correct me, ryecoaaron, if I'm not right.


    I set up a new OMV3 system and wanted to bring my old RAID5 into this new system.
    The new system has no array.
    If I choose my old RAID5 and create a new file system on it, all my data will be erased, correct?




    I thought I could set up a new system and then mount my old RAID5 with all of my data.


    This thread LINK opened my eyes about my issue.



    Is there a way to create a new array from the old RAID5 without losing the data?

    • Official post

    Your array should still have the old filesystem on it. If you create a new filesystem, it erases everything. I would try rebooting to see if the array comes back up correctly first. If it does and you haven't created a new filesystem already, post the output of blkid.
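
    You can also probe the array device directly; a read-only check along these lines, assuming the array is still md127:

    Code
    # look for a filesystem signature on the assembled array (read-only)
    blkid /dev/md127
    file -s /dev/md127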


  • OK, I have rebooted,
    and this is the output from blkid:

    • Official post

    I don't see any filesystem on the array. Post the output of cat /proc/mdstat


  • Hmm, it is again in auto-read-only mode.


    Code
    root@DoggiNas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdd[0] sda[2] sdb[1]
          5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
          bitmap: 0/22 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>
    • Official post

    It should still show a filesystem. My guess is the filesystem is gone. I would wipe the drives and start over.
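
    If you do start over, a rough sketch of clearing the old metadata before rebuilding; this is destructive, so double-check the device names first:

    Code
    # WARNING: destroys all data and RAID metadata on these disks
    mdadm --stop /dev/md127
    mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdd
    wipefs -a /dev/sda /dev/sdb /dev/sdd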

