Posts by geaves

    I'm sorry, I'm lost. In #1 you asked if running mdadm --assemble would start the array; I said no, not unless you stop the array first, and that command is in #2


    Then in #3 there is a reference in a log file to a failed mount point


    The blkid output regarding /dev/sg1 and /dev/sg2 in relation to the array makes no sense, as the output clearly states TYPE="ntfs"


    So, post the output of:


    cat /proc/mdstat


    mdadm --detail /dev/md127

    Continuing to use this is simply a bad idea. Why?


    1) The two drives shown in Raid Management are displayed as /dev/sdb3 and /dev/sdc3; the 3 denotes a partition. OMV does not use partitions to create arrays, it uses the 'full block device' (the whole drive). Whilst an array can be created on the CLI using partitions, the problem comes when trying to replace a failed drive/partition within an array (see the sketch after this list)


    2) AFAIK DSM uses storage pools and then volumes before creating an array, which can be either hybrid or a standard array, RAID 0, 1, 5, 6, etc.


    3) Continuing to use this would initially be OK, until a drive fails and requires replacing; you're then into another can of worms trying to replace a drive within an array that was not created in OMV's GUI


    4) Put the drives back in the Synology, back up the data you cannot afford to lose, then add the drives back to OMV, secure wipe them and create an array within OMV's GUI
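    For comparison, creating an array on whole devices from the CLI looks roughly like this (a sketch only, assuming a two-drive mirror on /dev/sdb and /dev/sdc, both empty; OMV's GUI does the equivalent and is the supported route). Note the whole devices are used, not /dev/sdb3 and /dev/sdc3:


    wipefs -a /dev/sdb /dev/sdc


    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc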

    Using this, which another user did, could be a way of getting the array back. However, the --assume-clean switch is a last resort and can cause data loss, and the missing option is something I've not seen nor used before. So:


    mdadm --stop /dev/md0


    mdadm --create /dev/md0 --assume-clean --level=5 --verbose --raid-devices=4 /dev/sd[ab] missing /dev/sd[cd]


    The above is taken from the other user's thread; if you do this you do so at your own risk, because there is a risk of data loss. But currently the array is not going to assemble, as it can only find 2 of the 4 drives.


    As the other user did, check the file system and then mount the array
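    As a sketch of that step (assuming the array holds an ext4 filesystem; adjust to whatever blkid reports), a read-only check first, then the mount:


    fsck.ext4 -n /dev/md0


    mount /dev/md0 /mnt


    The -n keeps fsck from writing anything, so it is safe to run before deciding whether to go further.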


    If this works, perhaps you'll consider a backup

    Couldn't all four drives have dropped out of the array at the time?

    Don't think so; my best guess would be something related to the hardware RAID which has 'written' something to the software RAID, which would explain why the array would not assemble. The output of blkid 'might' throw up something.
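    If you want to dig further, these are read-only and won't change anything (assuming the four members are sda to sdd; substitute your actual devices):


    blkid


    mdadm --examine /dev/sd[abcd]


    --examine prints the md superblock (if any) on each member, which should show whether all four still carry array metadata.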

    I've allowed the array to finish syncing and the active (auto-read-only) property has disappeared.

    Well that's a first

    If anyone can chime in on the process of removing the "removed" drive and adding a new one

    You can't; all mdadm is telling you is that it has removed a drive from the array, the --detail switch is just giving you information


    mdadm: add new device failed for /dev/sdc as 4: Invalid argument

    :/ If a 25% secure wipe did not work, I would suggest running a secure wipe on the whole drive, then trying again
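    If you end up doing that on the CLI instead, a minimal sketch (assuming /dev/sdc is the replacement and the array is /dev/md0, use whatever cat /proc/mdstat shows; the dd zeroes the entire drive and destroys everything on it):


    dd if=/dev/zero of=/dev/sdc bs=1M status=progress


    mdadm --add /dev/md0 /dev/sdc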


    Adding a drive can be done from the GUI: Raid Management, select the array, click Recover on the menu and follow the on-screen instructions

    Post the output of cat /proc/mdstat in a code box please; this symbol </> on the forum bar makes it easier to read


    The output from #4 above shows the array as (auto-read-only). Also, to re-add /dev/sdc with the 'Possibly out of date' error the drive will have to be securely wiped; this can usually be run to 25% and then stopped, then try re-adding the drive to the array. Do not add the drive until the array has finished syncing and the (auto-read-only) is corrected.
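    On the (auto-read-only) point: it normally clears on its own once the array syncs or is written to, but it can also be cleared by hand (a sketch, assuming the array is /dev/md127 as in the earlier --detail output):


    mdadm --readwrite /dev/md127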

    I always run a test VM on my Windows box using VirtualBox; the install of OMV7 works flawlessly (wouldn't expect anything else :) ), but the installation itself 'appears' to be a lot slower, and at times (particularly during network setup) it appears to hang and all you get is a blue screen. Whilst the install continues, it gives you the impression you're watching paint dry :)


    The hanging occurs particularly when the install is interrogating the network for an IPv6 address, which I don't have on my network; previous VM installs have not had this trait

    Regarding RAID 1: The reason was redundancy

    RAID is about availability, not redundancy, so I can understand your use case from what you've outlined

    And: How to detect malfunctions on the drives? SMART isn't working

    This is the kicker: unless you can log in and 'view' the state of the array, OMV cannot notify you of any SMART errors on USB drives

    So my question remains the same: What do I do if a disk fails

    Whilst this is straightforward, the use of USB is the downside, because the array will go into a clean/degraded state without warning.


    mdadm (software RAID) will fail a drive and remove it from the array; shut down, remove the failed drive, insert the new drive, wipe it (to prepare it for OMV), then Raid Management -> Recover and select the new drive, and the array will rebuild. The problem here is that the 'good working' drive could fail during the rebuild, therefore one should have a backup of the data on the array
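    For reference, the CLI equivalent of that sequence is roughly this (a sketch only, assuming the failed drive is /dev/sdb, the replacement comes up with the same name and the array is /dev/md0; the GUI Recover route does the same job):


    mdadm --manage /dev/md0 --fail /dev/sdb --remove /dev/sdb


    mdadm --manage /dev/md0 --add /dev/sdb


    The second command is run after the shutdown, drive swap and wipe; cat /proc/mdstat then shows the rebuild progress.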

    but I will defer to geaves on this. He is the resident mdadm guru.

    TBH I read this twice when it was first posted and I simply cannot understand it, probably lost in translation, but as you've pointed out RAID 0 is a bad idea.


    Redeploying, as those that have done it know: disconnect the data drives before a clean install, update, then reconnect the data drives. Even mdadm RAIDs will be detected and should display in Raid Management; then just mount the filesystem (if necessary) in the Filesystems tab.
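    After reconnecting the data drives, this is what I'd check first (a sketch; the first two are read-only, and --assemble --scan only needs running if an array was not picked up automatically):


    cat /proc/mdstat


    blkid


    mdadm --assemble --scan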


    This -> 'But OMV can't find the old filesystem on it and so I can't get access to my data' makes no sense in conjunction with this -> 'OMV sees that my disks have data on them and that both were in RAID 0, OMV even automatically started the resync process. But I can't find a way to see the filesystem.'


    But, you will not be able to 'see'/mount a filesystem until a resync is completed!!