File System N/A, status missing

  • Dear all,



    my problem is as follows (see attachments 1-3).
    I found this after a large copy task was aborted and I could no longer access my files via AFP.
    The report is also attached.



    The following commands did not help to reactivate the RAID, which I thought might be the (only) problem:


    # mdadm -A /dev/md127
    # mdadm --examine --scan >> /etc/mdadm/mdadm.conf


    Output (see attachment 4).



    The interesting point here is that I use 5 hard drives, but only 4 seem to be online.


    The command
    # omv-mkconf mdadm
    did not help either; more precisely, it gave the following output:
    "mdadm: cannot open /dev/md/RAID: No such file or directory"



    The last information I have, which is from
    # cat /proc/mdstat
    is as follows (see attachment 5).
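    (Side note for anyone reading along: when the array device exists, its member status can also be checked with

    # mdadm --detail /dev/md127

    which lists every slot with its state (active, faulty, removed), so a five-disk array running on only four members is immediately visible. In /proc/mdstat the same situation shows up as a status field like [5/4] [UUUU_], where the underscore marks the missing member.)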



    Most important thing again:
    There should still be data on the RAID, which I would love to get back! It worked fine before and suddenly stopped working.
    Could it be that just one of the five hard drives is broken and that plugging in a new one would solve the problem (that's what a RAID is for, isn't it?!), more precisely trigger the rebuild or something similar?



    I would be very grateful for your support!



    Many thanks & best regards,
    Denis

  • Hey,
    my OMV has exactly the same problem. Now I cannot access any of my data at all. A system recovery did not help either.


    Is there any help available?
    Many thanks and kind regards.

    2 BananaPi, 1 OrangePiPC+, 1 OrangePiPC with OMV 6.0.x

    • Official Post

    @d3nh4nk, I realize you need help but to double post on 19 different threads?!? You will be banned if you do it again.


    Post the output from the commands in this thread. If you could log in using PuTTY so you can cut & paste, it would be helpful. Your pics are hard to read.

    omv 7.1.0-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.2 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.5 | scripts 7.0.7


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • @ryecoaaron, I am very sorry about posting 19 times, but I am also very desperate. I wasn't sure which thread was closest to my problem and which one the right experts would read. With the multiple posts I only wanted to make sure I reached the right person. My apologies again!


    Unfortunately I am using a projector, because the RAID used to work smoothly in the past, so no permanent monitor has been necessary; the WebGUI was enough. Furthermore, the RAID is running on a standalone device without any other operating system, so there is no web browser running alongside it to post things directly from the system. This is why copy & paste won't work, I am afraid.
    Which extracts should I type out by hand?

    • Official Post

    Did you read the thread that I linked to? It has the three commands that I need the output from.


  • I am not in front of the system right now, but will post the output asap. Thank you very much so far!


    Any recommendations regarding the copy-and-paste problem? More precisely, is it somehow possible to access the console via the WebGUI in order to post proper output?

    • Official Post

    Sorry to sound like an ass, but again: did you read the thread??? The second line in the first post tells you where to get the proper info in the WebGUI.


  • Hello,


    sorry to sound like a dummy again, but I hadn't read the thread while I wasn't in front of the machine.
    However, I have read it now and have to say that I am neither a Windows user (so I am wary of testing Windows-based commands), nor do I have omv-extras installed (and I cannot see such an option under plugins).
    Is there any other way to get the needed info?

  • Okay, I installed the plugin and it seems to work. Here is the report:



    • Official Post

    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/sd[abcde] --verbose --force
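    (For readers applying this elsewhere: --stop releases the half-assembled array, --force lets mdadm assemble it even if one member's event count is out of date, and --verbose prints the per-disk decisions. The pattern /dev/sd[abcde] matches this system's five drives and will differ on other setups.)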


  • Wow, you are a genius!!!


    It now says that the RAID is assembled with 4 out of 5 drives, and I can see it. However, as you might already know, I used to use 5 drives; does that mean one (sdc) is broken?
    Do I have to do anything more now, like mounting it or something else, to get all drives back? Sorry if I seem a bit "over-careful".

    • Official Post

    Don't do anything until it is done syncing.


    Normally, that would tell me something is wrong with the one drive. I would get the data off the array if you are worried. If not, we can zero the superblock on the missing drive and try to add it again.
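    For reference, the zero-superblock route usually looks like this; a minimal sketch, assuming the dropped drive is /dev/sdc (verify the device name first, e.g. with mdadm --examine, before wiping anything):

    mdadm --zero-superblock /dev/sdc
    mdadm --add /dev/md127 /dev/sdc

    The first command wipes the stale RAID metadata on that disk; the second re-adds the disk to the array, which then rebuilds onto it.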


  • Alright, but where can I see when it is finished? I did not even notice that it is actually doing anything.


    What could be wrong with the one drive?


    What do you mean I might be worried about: my data, while zeroing the superblock etc.? Should I be?
    If I tried to get the data off the RAID, I would have to mount it etc., right?
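    (For anyone with the same questions: rebuild progress is visible in the output of

    # cat /proc/mdstat

    which shows a recovery line with a percentage and an estimated finish time while the array is syncing; the RAID management page in the OMV WebGUI reflects the same state. And yes, to copy data off the array, the file system on /dev/md127 has to be mounted first, which OMV handles via the File Systems page.)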

  • Hi ryecoaaron,


    in June I removed the broken HDD and recovered the RAID. I guess this is what you meant by "we can zero the superblock on the missing drive and try to add it again." It worked quite well until a few days ago, when the RAID became degraded again.


    So I bought a new HDD, replaced the broken one, and restarted the system.
    But when I tried to start the RAID again with the command "mdadm --stop /dev/md127", it said that there is no such file or directory, and under "File Systems" in the WebGUI the device is now called "OMV.data.Store.ImplicitModel-ext-1468-23".
    So I tried the commands from "Degraded or missing raid array questions", which told me that there are no used devices, but also that none of the HDDs contains a valid partition table. Very strange ...
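    (The usual diagnostics for this state are along these lines - the device names are assumptions and must be adjusted:

    # cat /proc/mdstat
    # blkid
    # mdadm --examine /dev/sd[abcde]

    cat /proc/mdstat shows what the kernel currently sees, blkid shows which devices still carry a linux_raid_member signature, and mdadm --examine prints each disk's md superblock. Note that "no valid partition table" is expected here: whole-disk RAID members have no partition table by design.)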


    I am pretty sure you have a very good idea to solve my problem again. May I ask what it could be, please?



    Many thanks & best regards,
    Denis

  • Hi all,


    I continued trying a few things and ... found the solution. Maybe this helps anyone who has the same problem:


    When I booted the system, I saw that the array has the name (in my case) "RAID:RAID", don't ask me why.
    As the system kept replying that there is no such file or directory (whenever I worked with md127), I just tried to use "RAID" instead in every command. That did not really work out either, but no worries - finally I came to ...


    mdadm -A /dev/md/RAID


    ... and the system said "/dev/md/RAID has been started [...]", and in the WebGUI the RAID was marked as degraded again. All I had to do now was to enable the file system and start the recovery by selecting the new HDD.
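    (Background for anyone puzzled by the "RAID:RAID" name: the md superblock stores the array name as <homehost>:<name>, so a host and an array that are both called "RAID" produce exactly that string, and mdadm then creates the device node as /dev/md/RAID. The names on a system can be listed with

    # mdadm --detail --scan

    whose ARRAY lines include a name= field - that is what tells you what to pass to mdadm -A when /dev/md127 does not exist.)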


