Posts by ananas

    I can't see the exact model number of your disks (the column is too narrow).

    Is it

    "WDC WD20EFRX-xxx" --> CMR drives

    or is it

    "WDC WD20EFAX-xxx" --> SMR drivers

    SMR drives are known to be less suitable for RAID arrays.
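
    If the GUI cuts the column off, you can read the full model string on the command line (the device name below is just an example):

    Code
    lsblk -d -o NAME,MODEL,SIZE                  # model strings for all disks
    smartctl -i /dev/sda | grep -i model         # or query one drive directly (needs smartmontools)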

    Do you know the MAC address of your Pi, or can you see it in your router ?

    If so, run the following (assuming you're on Windows; run it as administrator):

    arp -s 192.168.1.123 B8:27:xx:xx:xx:xx <-- use the MAC address of your Pi

    After that, ssh to 192.168.1.123.
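
    You can verify the static ARP entry and reachability before trying ssh (the login user is just an example, use whatever account has SSH access on your box):

    Code
    arp -a 192.168.1.123        # should now show the entry you just added
    ping 192.168.1.123
    ssh root@192.168.1.123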

    The server's ethernet port isn't blinking green simply because there is no traffic; the question is whether it has a link at all (maybe check that on your router).

    If you change your laptop's IP to one from the old range, you can ssh into your server using its old static IP.

    Maybe your router doesn't work with IP addresses from the old range, but the built-in switch doesn't care about IP addresses.

    Just give it a try, it really can't do any harm.
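
    On a Linux laptop you can simply add a temporary second address from the old range; a minimal sketch (interface name, addresses and login user are just placeholders):

    Code
    sudo ip addr add 192.168.0.50/24 dev eth0     # temporary address from the old range
    ssh root@192.168.0.100                        # the server's old static IP
    sudo ip addr del 192.168.0.50/24 dev eth0     # remove it again when you're done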

    Why so complicated ?

    Just give your PC a static IP address from the "old" range (e.g. 192.168.0.123)

    ssh into your OMV box using its old static IP,

    use omv-firstaid to change the IP address to one from the new range.
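
    A rough sketch of the whole dance on Windows (the interface name "Ethernet" and all addresses are only examples, adjust to your setup):

    Code
    netsh interface ip set address name="Ethernet" static 192.168.0.123 255.255.255.0
    ssh root@192.168.0.100                                # the OMV box's old static IP
    omv-firstaid                                          # on the OMV box: interactive menu, pick the network/IP configuration entry
    netsh interface ip set address name="Ethernet" dhcp   # back on the PC: return to DHCP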


    Cheers,

    Thomas


    PS: Don't forget to change the IP address of your PC again.

    To be honest, I don't know what's going on.

    There seems to be no logic behind the different error messages.

    From the kernel log I would suspect /dev/sdc to be damaged or to have some other issue.

    In case you have a backup, I would shut down, check/replace the data cables,

    wipefs -a the drives and rebuild the array.
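
    A rough sketch of that wipe-and-rebuild route, assuming the array is /dev/md127 and the members are /dev/sda to /dev/sdf (placeholders, double-check the device names with blkid/lsblk first; this erases all RAID and filesystem signatures):

    Code
    mdadm --stop /dev/md127
    wipefs -a /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf    # destroys all signatures on these disks!

    After that the disks show up as unused and the array can be created from scratch, e.g. via the RAID management section of the OMV web interface.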


    In case you don't have a backup, I would try to (re)assemble the array,

    mount it and take a backup while that is still possible.
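
    Roughly like this (device names, mount point and backup target are only placeholders; mounting read-only makes sure nothing more gets written):

    Code
    mdadm --assemble --verbose /dev/md127 /dev/sd[abcdef]
    mount -o ro /dev/md127 /mnt                  # assumes the filesystem sits directly on the md device, as OMV usually creates it
    rsync -a /mnt/ /path/to/backup/target/       # copy everything somewhere safe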


    Good luck,

    Thomas

    Taken from: https://raid.wiki.kernel.org/index.php/RAID_Recovery


    If your array won't assemble automatically, the first thing to do is to check the reason for this

    (look into the logs using "dmesg" or check the log files).

    It's a frequent failure scenario that the event counts of the devices do not match, which means mdadm won't assemble the array automatically.

    The event count is increased when writes are done to an array, so if the event count differs by less than 50,

    then the information on the drive is probably still ok.


    The higher the difference, the more writes have been done to the filesystem and

    the greater the risk that the filesystem has changed a lot since the

    drive with the differing event count was last in the array, and the higher the risk that your data is in jeopardy.


    In your case:

    /dev/sdb: 15598

    /dev/sdd: 15598

    /dev/sde: 15598

    /dev/sda: 15568

    /dev/sdf: 15572

    /dev/sdc: 15569
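
    You can read those event counts yourself from the superblocks:

    Code
    mdadm --examine /dev/sd[abcdef] | grep -E '/dev/sd|Events'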


    So you might try to add the option "--force" to the mdadm command.


    mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdef]


    Good luck,

    Thomas

    You cannot (re)assemble the array as long as the drives are busy (still part of the array).

    You have to stop and (re)assemble the array.


    mdadm --stop /dev/md127

    mdadm --assemble --verbose /dev/md127 /dev/sd[abcdef]


    After that, please have a look at the output of "cat /proc/mdstat" and post it here.


    Cheers,

    Thomas

    As sdb, sdc and sdd are still part of the array, you have to stop the array first.


    mdadm --stop /dev/md127


    (Re)assemble the array with 3 of 4 drives:

    mdadm --assemble /dev/md127 /dev/sd[bcd]

    output should be something like

    ...

    mdadm: /dev/md127 has been started with 3 drives (out of 4).

    ...


    Now add the 4th drive to the array:

    mdadm --add /dev/md127 /dev/sda



    cat /proc/mdstat

    should now display something like

    ...

    [>....................] recovery = 0.0% ...

    ...
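
    If you want to keep an eye on the rebuild:

    Code
    watch cat /proc/mdstat
    mdadm --detail /dev/md127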

    You're trying to (re)assemble the RAID with the wrong members.

    mdadm --assemble /dev/md127 /dev/sd[abcdef]

    remove /dev/sde from the list (it is your boot device and not a raid member)

    remove /dev/sdf from the list (blkid doesn't list it, so it probably doesn't exist)

    so the correct command would be:

    mdadm --assemble /dev/md127 /dev/sd[abcd]
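
    If in doubt about which devices are actually RAID members, check the signatures first:

    Code
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT    # RAID members show up with FSTYPE "linux_raid_member"
    mdadm --examine /dev/sd[abcd]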

    You would have to:

    1. block all outgoing DNS traffic (UDP & TCP, Port 53) which is NOT going to your AG instance.

    OR

    2. rewrite DNS requests to go to your AG instance.

    I don't know if your router is capable of doing this; maybe have a look at https://openwrt.org.
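
    If the router runs OpenWrt, option 2 is just a NAT rule; a sketch, assuming your AG instance listens on 192.168.1.2 and the LAN interface is br-lan (both are examples, adjust to your setup):

    Code
    # rewrite every DNS query from the LAN to the AG instance,
    # except the queries coming from the AG host itself
    iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 ! -s 192.168.1.2 -j DNAT --to-destination 192.168.1.2:53
    iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 ! -s 192.168.1.2 -j DNAT --to-destination 192.168.1.2:53

    Newer OpenWrt releases use nftables/fw4 instead of iptables; the equivalent there is a dnat rule, but the idea is the same.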

    Take a look at the output you posted:

    Code
    1: Loopback interface info
    2: enp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
        link/ether SOME IPV6 LOOKING INFO
    3: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    "... state DOWN ..." -->


    You don't have a link on either of your NICs.

    Check the cable(s), the router port, etc.
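
    To check from the OMV side whether a link comes up at all:

    Code
    ip -br link show                             # shows UP/DOWN per interface
    ethtool eno1 | grep -i 'link detected'       # "Link detected: no" means no physical link (ethtool may need to be installed)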


    Cheers,

    Thomas