Recreate RAID10 after OS disk failure

    • Official Post

    Your syntax looks wrong for the mirror array. What is the output of cat /proc/mdstat and blkid now?

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Quote

    cat /proc/mdstat


    Personalities : [raid10] [raid1] [raid0]
    unused devices: <none>


    and


    Quote

    blkid


    /dev/sda1: UUID="c17401d3-1d95-42d5-acc4-e0e64cdf0927" TYPE="ext4"
    /dev/sda5: UUID="0eb849ea-8a2b-4f6d-8618-a9a38cddcc9b" TYPE="swap"
    /dev/sdb1: UUID="0c0bc765-b7aa-4532-98eb-d4cb83d21b0e" TYPE="ext4"
    /dev/sdb5: UUID="f163d7c5-e228-4306-9f37-09bceb734ba1" TYPE="swap"


    but I suppose I have to assemble the stripes FIRST (at least one of them) and then the mirror..


    well, I would be happy just to get the first stripe array going, so I could mount the filesystem and back everything up..
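    For reference, assembling a single stripe read-only and mounting it for backup might look roughly like this (the device names, md number, and mount point are assumptions based on the attempts later in this thread; adjust to your actual layout, and note this needs root and the real member devices):

    ```shell
    # Assemble one stripe from its two members; --readonly avoids any writes
    # while you are still diagnosing (devices here are placeholders):
    mdadm --assemble --readonly /dev/md127 /dev/sdb /dev/sdc --verbose

    # If assembly succeeds, mount the filesystem read-only and copy your data off:
    mkdir -p /mnt/recovery
    mount -o ro /dev/md127 /mnt/recovery
    ```

    This is only a sketch of the sequence, not something to run as-is.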

    • Official Post

    I would try assembling both stripes.


  • First:


    Quote

    root@nasino:~# mdadm --assemble /dev/md127 /dev/sdb /dev/sdc --verbose --force
    mdadm: looking for devices for /dev/md127
    mdadm: Cannot assemble mbr metadata on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted


    Second:


    Quote

    root@nasino:~# mdadm --assemble /dev/md127 /dev/sdd /dev/sde --verbose --force
    mdadm: looking for devices for /dev/md127
    mdadm: no recogniseable superblock on /dev/sdd
    mdadm: /dev/sdd has no superblock - assembly aborted


    ehmm.. ?

    • Official Post

    I guess your array is linear. Run modprobe linear and try again.
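    That suggestion might look like this, with the other personalities loaded defensively as well (whether linear is actually the right one is only a guess here):

    ```shell
    # Load the RAID personalities the array could need before assembling:
    modprobe linear
    modprobe raid0
    modprobe raid1
    modprobe raid10

    # Then let mdadm scan all devices for superblocks and assemble what it finds:
    mdadm --assemble --scan --verbose
    ```

    Requires root; harmless if the modules are already loaded.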


    • Official Post

    Maybe try a different combination of drives? Not sure. It's not looking good when it says it isn't finding a superblock, though.


  • Hi!


    Are the additional disks directly attached to the system or are they external?


    After booting, if you type lsmod | grep raid, are the modules displayed? If not, you may need to make sure they are built into your ramdisk: run update-initramfs -u -k all, then reboot the machine.
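    A minimal sketch of that check-and-rebuild sequence (assuming a Debian-based system, where update-initramfs is available, and run as root):

    ```shell
    # See whether the md/raid modules are currently loaded:
    lsmod | grep raid

    # If nothing is listed, load them, rebuild the initramfs for all installed
    # kernels, and reboot so the modules are available early in boot:
    modprobe raid0
    modprobe raid1
    modprobe raid10
    update-initramfs -u -k all
    reboot
    ```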

  • I just performed a search for your hardware, so forget my question about drive locations.


    Can you get me the output of mdadm --examine /dev/sd[abcde]* >> mdadm_examine_ecastellani.txt and also mdadm --examine /dev/sd[abcde]* | egrep 'Event|/dev/sd' >> mdadm_examine_event_ecastellani.txt?

    after a reboot, lsmod | grep raid gives nothing


    so I run:

    Quote

    ~# update-initramfs -u -k all


    update-initramfs: Generating /boot/initrd.img-3.2.0-4-amd64
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
    W: mdadm: no arrays defined in configuration file.


    the examine:



    and


    I rebooted again, but there is no change in lsmod or in the examine output..

  • I forgot to mention that the modules should be loaded before you run the update-initramfs command.


    This is the result of my examine command:


    and for my events check:


    Also, as you may have noticed, your drives don't have any RAID metadata, so they will probably not be combined into an array, since there isn't any information on how to reassemble them. Had there been metadata, the event counters would have shown whether a difference in data writes between members had altered the integrity of the array.


    Correct me if I'm wrong, but is /dev/sde your new boot disk? If you didn't shuffle the physical order of the drives after the RAID failure, then you should power off or remove the members of the RAID array, make sure that your boot/OS disk is /dev/sda, and then make the drives available again. If the /dev/sda allocation doesn't persist, you should make it persistent via UDEV.
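    One caveat on the UDEV approach: udev cannot force the kernel to enumerate a disk as /dev/sda, but it can give the disk a stable symlink that survives reordering. A sketch, where the rule filename, the symlink name, and MY_DISK_SERIAL are placeholders you would substitute yourself:

    ```shell
    # Find a stable identifier for the boot disk (queried via its current name):
    udevadm info --query=property --name=/dev/sde | grep ID_SERIAL

    # Create a rule that always exposes that disk under /dev/bootdisk
    # ("MY_DISK_SERIAL" stands in for the value printed above):
    cat > /etc/udev/rules.d/60-bootdisk.rules <<'EOF'
    SUBSYSTEM=="block", ENV{ID_SERIAL}=="MY_DISK_SERIAL", SYMLINK+="bootdisk"
    EOF

    # Make udev pick up the new rule:
    udevadm control --reload-rules
    ```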


    I have to go out but will be back later in the day.

    well, I loaded the modules (modprobe raid0, modprobe raid1, modprobe linear), but the update-initramfs command didn't show any difference.


    and the examine output is exactly the same as before..


    the boot disk is sda, the 500 GB hard drive


    the HP ML150 has one CD-ROM/hard-drive bay, which I use for sda, and four bays for the RAID hard drives.


    The only change I made was to remove the USB drive, which (according to a friend) should have worked fine, because I don't use the NAS intensively (once or twice per week, always in stand-by mode, just to keep safe the photo archive created for Lightroom).. well, the USB pen is almost dead..
    So I added the new hard disk and simply reinstalled OMV..

    oops, I have to correct my previous statement: the boot disk SHOULD be (or USED to be) sda, but it now appears to be sde..


    Quote

    If the /dev/sda allocation doesn't persist then you should make it persistent via UDEV.


    well, I understand the meaning, but I'm not able to do it myself.. :D

  • First what I understand from all the things you wrote:


    You do not have a RAID 10.


    It looks like you have two RAID 1s (one for ext4 and one for swap). (Whatever the reason might be to put swap on the RAID ...)


    My best advice would be:

    • Power down your system
    • Unplug all RAID drives
    • Power on with only the boot device attached
    • Reinstall OMV
    • Retry to see your RAID.

    Everything is possible, sometimes it requires Google to find out how.
