I assume by that question this is your first time using/testing a raid array.
OMV uses mdadm (software raid). As with any software, the end user has to input information for it to function, but that part is taken care of by the GUI. If a user disrupts or changes the behaviour of the software, there's an excellent chance it will cease to function as expected.
If a drive fails within an array, mdadm will automatically mark the drive as failed, remove it from the array, and leave the array in a clean/degraded state with access to the data within the array.
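You can see that state from the command line. A minimal sketch (the array name /dev/md0 is an assumption; substitute your own):

```shell
# Show all arrays the kernel knows about; a failed member is flagged (F)
# and a degraded two-disk mirror shows [U_] instead of [UU]
cat /proc/mdstat

# Detailed state of one array; after a member fails this reports
# "State : clean, degraded"
mdadm --detail /dev/md0
```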
Mdadm is not hot-swap. If a user pulls a drive from an array after shutting down the server and then reboots, the check process run during boot will detect a missing drive and mark the array as inactive. This means no array will be displayed in Raid Management, and therefore no access to the data, even though the remaining drive still shows under Storage -> Disks. Whilst the array comes back as inactive, it can be reassembled via the command line, after which it will display as clean/degraded.
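Reassembling an inactive array usually looks something like the sketch below. The device names (/dev/md0, /dev/sda, /dev/sdb) are assumptions for illustration; check your own with `cat /proc/mdstat` and `mdadm --examine` before running anything:

```shell
# Stop the inactive array so it can be reassembled
mdadm --stop /dev/md0

# Force-assemble the array from its member disks
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb

# The array should now show as active (clean/degraded if a member is missing)
cat /proc/mdstat
```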
Have a look through the Raid section on the forum; most issues are inactive arrays, usually after a power outage.
Yes, it's my first contact with RAID. Given this case of having to remount the raid (more specifically, in my case I use 2 mirrored disks), wouldn't it be simpler to use RSYNC to avoid having to remount the raid? I'll look more on the forum about these glitches! Thank you very much in advance!