Problem

  • Hello everyone, I urgently need help! I can no longer access the RAID. I still have a connection to OMV, which resides on a separate SSD, and I can see all sorts of information in the user interface; the physical disks are listed, but the RAID is marked as N/A. Since all of my data is on this system, I urgently ask for help! Regards, Peter

  • First of all, a big thank you for the reply. I'll try to do it in English. My workstation is a Win10 AMD PC (6 cores; 16 GB; a bunch of 8 internal disks, mainly WD 1 to 3 TB). There is a file server in the WLAN, controlled by the OMV system. Config: 4-core Intel Atom CPU, 2 GB RAM, 4 identical WD Red 2 TB disks + a 60 GB SSD where the system resides. The SSD is still accessible. OMV shows all 5 drives under 'Datenspeicher --> Reale Festplatten' (Storage --> Physical Disks).

    Right now, due to family constraints, I can't take the file server offline to try the commands you requested. In fact, I have to set the server up so I can work on it stand-alone. I'm fairly confident I can do this tomorrow. Please let me know if you need additional information in the meantime.

    Again, thanks a lot. Regards, Peter

  • Hello, it took a bit more time to come back. The mdstat command is not available. Do I need a certain repository? And if yes, where do I find it?

    By the way, does it make more sense to upgrade openmediavault, let's say to version 4 or even 5?

    • Official Post

    By the way, does it make more sense to upgrade openmediavault, let's say to version 4 or even 5?

    I would do a fresh install of OMV6. Maybe on a new disk or thumb drive, so that you can go back (even though it does not seem very useful at the moment)

  • Thanks for your comment. Is it really possible to go directly from version 3 to 6? The OMV3 system partition is already saved on a separate disk. I'm not very familiar with Linux; in fact, currently I face only problems, no matter what I do. One of these lousy things is that I even lost the network connection between my Windows workstation and the OMV server. I have no idea why. Maybe it was caused by working as root on the server, I simply don't know. ifconfig tells me the IP address etc. seems to be correct.

    Now, from what I saw in the logs, 2 drives are not accessible as RAID members. Do you need additional information? I'll try to send some logs.

    That aside, here are the outputs of the mdstat and blkid commands:


    Code
    MDSTAT
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>
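
    (For reference: mdstat is not a standalone command, so no extra repository is needed; output like the above normally comes from reading the kernel's md status file.)

    Code
    cat /proc/mdstat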


    Code
    BLKID
    /dev/sdb: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="294d5d4e-24da-a109-1042-210b7e7be042" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sda1: UUID="be3765ff-5c7b-4f5f-aa07-083e03359e34" TYPE="ext4" PARTUUID="697359e3-01"
    /dev/sda3: LABEL="SSD02" UUID="85287aee-1e05-4afc-964d-d5f7524e5d55" TYPE="ext4" PARTUUID="697359e3-03"
    /dev/sda5: UUID="8d986da5-7136-427c-ac3e-5d590dd32557" TYPE="swap" PARTUUID="697359e3-05"
    /dev/sdc: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="0abd923a-1cae-e6c3-f0a6-21932b1a3db2" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sdd: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="8ee64f4b-6323-934e-43aa-8bdeac408216" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sdg: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="0dfb5a23-4f78-ff30-8716-ad633a446225" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sdh1: LABEL="INT32FAT" UUID="6037-5276" TYPE="vfat" PARTUUID="200a0a6d-01" (this is just a USB stick for data transfer)

  • Is it really possible to go directly from version 3 to 6?

    That's not what macom said.

    What is meant is a fresh NEW install of OMV6 on a different disk or, better yet, a USB stick (32 GB is more than enough), with all drives disconnected so that you don't install onto any of the RAID drives.


    After OMV6 is installed and running, power down, plug the RAID drives back in, and they should be available and recognized in OMV.

  • Here's ifconfig's output:


    Code
    eth0      Link encap:Ethernet  Hardware Adresse 00:25:90:c0:47:7c
              inet Adresse:192.168.178.20  Bcast:192.168.178.255  Maske:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metrik:1
              RX packets:369 errors:0 dropped:318 overruns:0 frame:0
              TX packets:90 errors:0 dropped:0 overruns:0 carrier:0
              Kollisionen:0 Sendewarteschlangenlänge:1000
              RX bytes:27397 (26.7 KiB)  TX bytes:14493 (14.1 KiB)
              Interrupt:16 Speicher:feae0000-feb00000

    lo        Link encap:Lokale Schleife
              inet Adresse:127.0.0.1  Maske:255.0.0.0
              inet6-Adresse: ::1/128 Gültigkeitsbereich:Maschine
              UP LOOPBACK RUNNING  MTU:65536  Metrik:1
              RX packets:258 errors:0 dropped:0 overruns:0 frame:0
              TX packets:258 errors:0 dropped:0 overruns:0 carrier:0
              Kollisionen:0 Sendewarteschlangenlänge:1
              RX bytes:21296 (20.7 KiB)  TX bytes:21296 (20.7 KiB)

  • Well, I did some more tests. fsck shows me that the superblocks of 2 drives are unreadable. A (possible) fsck solution could be to use another superblock!? Sounds strange to me.

    Will a complete installation of OMV version 6 from scratch heal these faulty superblocks? Is this more than wishful thinking?

    But what else should I do to get the RAID back? I really need the data!

    Regards, Peter

  • 4 identical WD Red 2 TB disks

    Well, I did some more tests. fsck shows me that the superblocks of 2 drives are unreadable. A (possible) fsck solution could be to use another superblock!?

    Before doing anything else, wait for assistance, please.

    Can you help out with this, geaves?

    The only proper info is the outputs from post #6:

    Code
    BLKID
    /dev/sdb: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="294d5d4e-24da-a109-1042-210b7e7be042" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sda1: UUID="be3765ff-5c7b-4f5f-aa07-083e03359e34" TYPE="ext4" PARTUUID="697359e3-01"
    /dev/sda3: LABEL="SSD02" UUID="85287aee-1e05-4afc-964d-d5f7524e5d55" TYPE="ext4" PARTUUID="697359e3-03"
    /dev/sda5: UUID="8d986da5-7136-427c-ac3e-5d590dd32557" TYPE="swap" PARTUUID="697359e3-05"
    /dev/sdc: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="0abd923a-1cae-e6c3-f0a6-21932b1a3db2" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sdd: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="8ee64f4b-6323-934e-43aa-8bdeac408216" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sdg: UUID="6b67d6ab-5ec2-a5f4-e062-665c437450e3" UUID_SUB="0dfb5a23-4f78-ff30-8716-ad633a446225" LABEL="NASOMV00:RVOL00" TYPE="linux_raid_member"
    /dev/sdh1: LABEL="INT32FAT" UUID="6037-5276" TYPE="vfat" PARTUUID="200a0a6d-01" (this is just a USB-Stick for data transfer)
    Code
    MDSTAT
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>
  • Not the best news. :(


    But since blkid sees the 4 drives as RAID members, are there any commands the OP can use to get more info?

    Just assuming...
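
    Something like mdadm's examine mode is the usual read-only way to pull more info from disks that blkid reports as linux_raid_member (device names below are taken from the blkid output above and should be double-checked first):

    Code
    # print the mdadm superblock of each suspected member disk (read-only)
    mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sdg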

  • Hi geaves, thank you for your cooperation. Here is the info you requested.


    mdadm.conf

    -----------------


    • Official Post

    Whilst you have posted what I asked for, doing a copy and paste makes it difficult to read; formatting with the </> (code box) on the forum menu bar will place the text in a readable format. Including the full CLI command is also helpful.


    The drives appear to be OK and recognised by mdadm; the issue is this line in the mdadm conf file:

    # definitions of existing MD arrays

    Under that line there should be a definition of the array, which would explain why the array is showing as n/a. That conf file is created at the time the array is created in OMV, so an array definition would have been entered (see the example below).
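
    For illustration only (the name and UUID are inferred from the blkid output in post #6, not confirmed), such a definition line typically looks like this:

    Code
    ARRAY /dev/md0 metadata=1.2 name=NASOMV00:RVOL00 UUID=6b67d6ab:5ec2a5f4:e062665c:437450e3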

    Whilst that line is not critical initially, it's better in than out (said the actress to the bishop)


    If you create an array from the command line, the definition also has to be created manually by running another command; using OMV's GUI this is done automatically for the user.
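
    For reference only (nothing to run right now), on a Debian-based system such as OMV that definition is usually regenerated like this once the array is assembled:

    Code
    # append the definitions of the currently assembled arrays to the conf file
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # rebuild the initramfs so the change is picked up at boot
    update-initramfs -u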


    I am going to assume you created this array using OMV's GUI and somehow that mdadm conf file has become corrupted or something has become corrupted within OMV.


    There are two ways this might be resolved:


    1) Install OMV6 as suggested by macom in #5, using Soma's suggestions in #7, and follow this guide, but do not add drives; just update and set up OMV6, then come back


    2) My less favourable approach is a repair using your current OMV3 setup


    At this moment in time your data should still be on those drives, but I will not guarantee it; one drive failure in a RAID5 is fine, but with two, the data is toast, non-recoverable.
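
    To give an idea of what route 2 would involve (a sketch, not a recipe; the md device name and member list below are assumptions based on post #6, and nothing should be run before the situation is clearer):

    Code
    # try a normal assembly first, verbose so the output can be posted here
    mdadm --assemble --scan --verbose
    # if that refuses due to event-count mismatches, a forced assembly can be attempted, at some risk
    mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sdg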

  • Hi geaves, again thanks for your valuable information. It's good not to be alone with this damned problem. I'm really sorry I didn't inform you that I installed OMV6 three or so days ago. That works well. As I already mentioned, I made a physical copy of the OMV3 environment. Today I copied this partition to a USB stick and analysed it using a VM (VirtualBox). What I found is an OMV3 mdadm.conf from very close to the time of the power loss. I attached this list, and I'm also sending you some information on all 4 RAID disks gathered by smartctl. The file names mean 'volsda' = volume + drive.

    Hope this may help.

    Thanks for telling me how to create these boxes. I'll try to do my very best! :)

    Finally, I do hope there will be a way to re-assemble the RAID or something like that. I think I can accept a certain amount of data loss. Of course, the less the better!

    Kind regards, Peter


    PS: Are there guidelines for donations?


    [Attachment: mdadm.conf list]
    • Official Post

    Finally, I do hope there will be a way to re-assemble the RAID or something like that

    There is, but I am at a loss as to what your post above is trying to tell me, so let's back up.


    You have a backup copy of the original OMV3 install?


    You have installed OMV6 and it works?


    Is the server running on OMV6 without the Raid drives connected?


    At this point in time testing in a VM is not necessary, and short smart tests don't give enough information.
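
    (If it comes to that, a long self-test is the kind that tells more; a sketch, to be repeated for each RAID member:)

    Code
    # start a long SMART self-test; this takes several hours on a 2 TB drive
    smartctl -t long /dev/sdb
    # once it has finished, read the full report
    smartctl -a /dev/sdb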

  • Hi geaves,

    here are my answers to your questions:


    Q: You have a backup copy of the original OMV3 install?

    A: Yes, on a separate disk connected via USB; but I'm not sure that it's bootable.


    Q: You have installed OMV6 and it works?

    A: Yes, and yes; but, you know, it's only been a short time. I hope it will stay that way.


    Q: Is the server running on OMV6 without the Raid drives connected?

    A: Yes; but I didn't do any specific tests in this direction. Simply put, it works.


    Q: At this point in time testing in a VM is not necessary, and short smart tests don't give enough information.


    A: I had no plan to use the RAID server inside a VM. I only used a VM to extract the mdadm.conf list from the Linux directories. I found no md0 and/or md127 file.


    A question from me: Are the mdadm.conf list and the other material helpful for you?

    I'd appreciate your next reply. Regards, Peter

    • Official Post

    Is the mdadm.conf list

    That told me that there was a line related to the array in that mdadm conf file, so at one time it was working.


    With OMV6 running, shut down the server and connect your drives, then restart the server and post what you see; I would hope that the array is shown rebuilding in RAID management.
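
    Capturing the output of these after the restart would show the state either way (the md device name is an assumption until it appears; /dev/md0 or /dev/md127 are the usual candidates):

    Code
    cat /proc/mdstat
    blkid
    # if an md device has appeared, e.g. /dev/md0:
    mdadm --detail /dev/md0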

  • Hi geaves,

    just to be clear: it is my understanding that I have to copy this 'mdadm.conf' file, taken from the old OMV3, to the current OMV6.

    Then I'll connect the RAID drives and start the server.

    Now my question: where do I find the information I have to post? Just on the screen, or in some log file? Please give me the name of that log file.

    Thank you, have a nice day! Peter
