Posts by geaves

    TBH there is only one option left for you to try: each version of OMV has the capability to install SystemRescueCd, I think in V5 it's in omv-extras. This installs and boots once into what is effectively a command line live CD.


    This works without the knowledge (best way to describe it) of OMV, but the mdadm commands will still work. The 'device or resource busy' error is related to the cross usage of partitions, with each array pointing to shares. On a normally configured OMV system this is not usually a problem, but the way your system is configured it is.


    I would suggest you try that; if that doesn't work, I honestly don't know what to suggest other than contacting the person who set this up.

    if I navigate there I'm only presented with the array, not the drives

    That's because the two drives are joined into a RAID1, therefore OMV presents the array to the user as a single workable storage unit, in this case the raid array.

    Trying to access the array give i/o errors

    That points to the drive or your sata controller; google 'i/o errors linux'.

    is there any reason I cannot/should not unmount the array

    No, but to what end?


    If this were me I would be using a live linux distro and a simple usb to sata adaptor to see if the drive would mount.


    At this moment in time this is not looking good. Is there any way you can connect the surviving drive to the usb3 port, with an external usb case or usb to sata adaptor?
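    If you can get the drive connected that way, a quick check from a live distro would be along these lines; this is just a sketch, /dev/sdb1 is only an example, use whatever lsblk actually shows:

    Code
    lsblk -o NAME,SIZE,FSTYPE,LABEL   # identify the drive and its data partition
    sudo mkdir -p /mnt/check
    sudo mount /dev/sdb1 /mnt/check   # example device, substitute the one lsblk shows
    ls /mnt/check                     # if this lists your data the drive is readable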

    It looks like I didn't had too much luck at this time

    At least you're learning what not to do :)


    OK, the problem is the partitions being spread across different arrays;


    Try this one first;


    mdadm --stop /dev/md127 you should get a message that the array has stopped, then;


    mdadm --assemble --force --verbose /dev/md127 /dev/sdb1 /dev/sdf1 /dev/sdi1 /dev/sdj1 /dev/sde1
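    After the assemble it's worth a quick check that the array came back; nothing clever here, just:

    Code
    cat /proc/mdstat            # md127 should be listed as active again
    mdadm --detail /dev/md127   # state should read clean or clean, degraded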


    if that works, then try stopping the array for md126 then run the --readwrite option in #4


    I am not hopeful about any of this at this moment in time; the --readwrite option can be run on an active array without it erroring.
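    For md126 that would look something like the below; a sketch only, sdX1/sdY1 are placeholders for whatever mdadm --examine shows as the members of md126:

    Code
    mdadm --stop /dev/md126
    # replace sdX1/sdY1 with the actual member partitions of md126
    mdadm --assemble --force --verbose /dev/md126 /dev/sdX1 /dev/sdY1
    mdadm --readwrite /dev/md126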

    The array is inactive, so;


    mdadm --stop /dev/md0


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bc]


    that should assemble the array in a clean/degraded state.
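    You can confirm that afterwards with:

    Code
    cat /proc/mdstat          # the array should show as active with one member missing
    mdadm --detail /dev/md0   # state should report clean, degraded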


    To replace the failed drive with a new one: install it, then in Storage -> Disks select the drive and click wipe on the menu. If the drive is new a quick wipe will be OK; if it's repurposed then use secure.


    Then in Raid Management select the array and click recover on the menu.
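    For reference, the recover button does roughly the same as adding the new drive to the array from the command line, something like this; /dev/sdc is only an example for the new drive:

    Code
    mdadm --manage /dev/md0 --add /dev/sdc   # example: new drive is sdc, adjust to yours
    cat /proc/mdstat                         # the rebuild progress shows here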

    If I were to use the OMV GUI, I would first delete the RAID configuration, correct

    No, that should not be necessary. What should be possible is to use Rsync to create a backup of the complete drive to a new drive, have a look here. Then, before doing anything else, confirm the backup by using WinSCP on Windows or Midnight Commander on Linux to ensure the data has been copied.
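    As a rough idea of the Rsync copy from the command line, something like the below; the paths are only examples, yours will be the mount points under /srv on your system:

    Code
    # example source and destination only, adjust to the actual mount points
    rsync -avh --progress /srv/dev-disk-by-uuid-OLD/ /srv/dev-disk-by-uuid-NEW/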


    Then proceed with a clean install of OMV6; look in the guides section, there are some comprehensive install guides written by a user so they are easy to follow.

    How is the array not accessible at all in a degraded state

    It should be, and I can't offer an explanation, simply because I don't use SBCs other than for standalone specific tasks. I could emulate what has happened to you in a VM on a Windows machine (test rig) and the data would be accessible, as the array is in a clean/degraded state.


    But WinSCP would allow you to explore the degraded array and check the file content. The fact that PMS sees your media, albeit intermittently, could suggest there is a hardware issue, something that is degrading or failing.

    I guess if this isn't worth it I may just get rid of the RAID and copy the data from the healthy drive (presuming the other drive is indeed toast) to a new SSD and forget the RAID 1 and just use RSync to backup from one SSD to another.

    In all honesty this is the most sensible way forward. Looking at the output, mdadm has removed the drive for whatever reason; you can't add that drive back to the array unless it can be repaired.


    Looking at the information you have provided, OMV is running on an SBC and your 'sata controller' is simply a usb to sata bridge; whilst the connections are sata, the underlying hardware is still usb. The creation of raid using usb was removed in OMV4 (I think) after some users experienced data loss when using usb for raid creation.


    At this moment in time you have no idea if this is power related, the 'sata controller', the drive itself or the filesystem on that drive. Is the system recoverable by replacing that drive and rebuilding the array? Possibly, but there would be no guarantee. Your best option is to go down the Rsync route as you've suggested and hope that the existing drive continues to function.


    You could run fsck on the failed drive, fsck /dev/sdb; I would hope that would check the filesystem and ask you if you want to repair it.
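    Roughly like this; just make sure nothing from that drive is mounted or assembled first:

    Code
    umount /dev/sdb   # ignore an error here if it was not mounted
    fsck /dev/sdb     # answer y when asked whether to repair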


    As you have not added an OMV tag I'm guessing you are running an EOL version, and would suggest you get the data off the working drive and deploy OMV6, then re-create your shares etc. Whilst this might seem extreme it would take about a day to get this set up.

    In post #21 I asked to "copy this 'mdadm.conf' file taken from old OMV3 to the current OMV6"

    TBH I have no idea if that would work; it would work if the mdadm conf from your old OMV3 install contains the missing array reference. If you want to try it with OMV6 you will need to reboot afterwards.


    I will not download a txt file; copy and paste the output into a code box as below;

    Code
    This is a code box for command line output

    Apologies, my bad, you need to stop mdadm first as it will be running;


    mdadm --stop /dev/md0, you should get a notification that mdadm has stopped, then proceed with the command in #27

    I've tried to re-direct mdadm's output into a text file, but it didn't work.

    Please don't use a text file; use a code box, this </> on the menu, then copy and paste into the code box. Text files have to be downloaded.

    Software RAID simply says nothing to show (Keine Daten zum Anzeigen.)

    All that tells you is there is no filesystem, which sits on top of the software raid. You need to look in Storage -> Software Raid; if nothing shows in there then we are back to the command line.

    I've had to copy this 'mdadm.conf' file taken from old OMV3 to the current OMV6

    No, if the array is recognised in OMV6 then it should be possible to create an mdadm conf in OMV6 from the command line
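    For reference, the usual way on Debian (which OMV6 sits on) is along these lines; this is a generic sketch rather than anything OMV-specific:

    Code
    mdadm --detail --scan                            # prints the ARRAY line(s) for detected arrays
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append them to the conf file
    update-initramfs -u                              # so the array is known at boot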


    I'll try and clarify what I said in #19


    You have OMV6 working; shut down the server and connect your raid drives, mdadm (which loads during OMV's boot process) should detect them and begin rebuilding the array.

    Once the server is running login to OMV's GUI and under Raid Management the array should show as rebuilding


    That is all you have to do at this moment; if this works, at least we know that OMV6 has recognised the drives and has begun reassembling the array.
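    You can also watch the rebuild from the command line:

    Code
    cat /proc/mdstat              # shows the array and a progress bar while it resyncs
    watch -n 5 cat /proc/mdstat   # refreshes every 5 seconds, Ctrl+C to exit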


    Likewise, if you get nothing displaying in Raid Management then we have to approach this differently

    Is the mdadm.conf list

    That told me that there was a line related to the array in that mdadm conf file, so at one time it was working.


    With OMV6 running, shut down the server and connect your drives, then restart the server and post what you see; I would hope that the array shows as rebuilding in raid management.

    root@omv-home-server:~# mdadm --stop /dev/md127

    mdadm: stopped /dev/md2

    root@omv-home-server:~# mdadm --readwrite /dev/md2

    mdadm: error opening /dev/md2: No such file or directory

    root@omv-home-server:~#

    ??? If you are typing that out, please don't, it's confusing as you have gone from md127 to md2 and I have no idea what is right and what is not. Use a code box, this symbol </> on the forum menu bar, it will insert a box where command output can be entered ->

    Code
    root@omv-home-server:~# mdadm --stop /dev/md127
    
    mdadm: stopped /dev/md2
    
    root@omv-home-server:~# mdadm --readwrite /dev/md2
    
    mdadm: error opening /dev/md2: No such file or directory
    
    root@omv-home-server:~#


    If I start the computer again, the raid system is recognized

    If you restart or shut down, certain references within linux can change; this makes trying to help/find a solution harder. We are not sitting in front of your server, so we can only be as helpful as the information supplied.

    Finally, I do hope there will a way to re-assemble the RAID or something like that

    There is, but I am at a loss as to what your post above is trying to tell me, so let's back up.


    You have a backup copy of the original OMV3 install?


    You have installed OMV6 and it works?


    Is the server running on OMV6 without the Raid drives connected?


    At this point in time testing in a VM is not necessary, and short smart tests don't give enough information.

    Whilst you have posted what I asked for, doing a copy and paste makes it difficult to read; formatting using the </> (code box) on the forum menu bar will place the text in a readable format. Including the full cli command is also helpful.


    The drives appear to be OK and recognised by mdadm, the issue is this line in the mdadm conf file;

    # definitions of existing MD arrays

    under that line there should be a definition of the array, and its absence would explain why the array is showing as n/a; that conf file is created at the time the array is created in OMV, so an array definition would have been entered.
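    For what it's worth, the definition under that comment looks something like the below; the name and UUID here are made up, yours will be different:

    Code
    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=00000000:00000000:00000000:00000000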

    Whilst that line is not critical initially, it's better in than out (said the actress to the bishop)


    If you create an array from the command line the definition also has to be created manually by running another command; using OMV's GUI this is done automatically for the user.


    I am going to assume you created this array using OMV's GUI and somehow that mdadm conf file has become corrupted or something has become corrupted within OMV.


    There are two ways this might be resolved;


    1) Install OMV6 as suggested by macom in #5, using Soma's suggestions in #7, and follow this guide but do not add drives; just update and set up OMV6, then come back


    2) My less favourable approach is a repair using your current OMV3 setup


    At this moment in time your data should still be on those drives, but I will not guarantee it; one drive failure in a Raid5 is fine, two and the data is toast, non recoverable.

    What you would need is the array reference, but you could try mdadm --examine /dev/sdb, do that for each of the linux_raid_member drives as that might throw something up, and post the output of cat /etc/mdadm/mdadm.conf
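    Something along these lines; sdb to sde are only examples, use the drives your system lists as linux_raid_member:

    Code
    # run --examine against each raid member, sdb..sde are examples only
    for d in /dev/sd[b-e]; do mdadm --examine "$d"; done
    cat /etc/mdadm/mdadm.conf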