Missing RAID

  • Did you reboot, or why did the disk name change from /dev/sdg to /dev/sde?


  • Do what Soma said, because now it appears there is a partition on the disk:

    sde 8:64 0 3,7T 0 disk

    └─sde1 8:65 0 1K 0 part


  • /dev/disk/by-uuid/34361084361048EE /srv/dev-disk-by-uuid-34361084361048EE ntfs defaults,nofail,big_writes 0 2

    This is your drive as listed in fstab, inside the # >>> [openmediavault] block, so it was recognized properly when you first connected it.

    ???NTFS??? Why GOD?
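    For context, OMV keeps the mounts it manages between marker comments in /etc/fstab, so the line you posted sits (roughly) in a block like this sketch:

    Code
    # >>> [openmediavault]
    /dev/disk/by-uuid/34361084361048EE /srv/dev-disk-by-uuid-34361084361048EE ntfs defaults,nofail,big_writes 0 2
    # <<< [openmediavault]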


    Now, the big issue is what fdisk is seeing.

    The partition table is all messed up.

    Honestly, the drive/data is lost unless you know how the partitions were created in the first place. The START/END sectors make no sense:

    A drive that big would (normally?!?) look something like this:

    Code
    Disklabel type: dos #<<< This should be showing gpt
    Disk identifier: 0x00000000 #<<<< Some Hex identifier but never 00000
    
    Device     Boot      Start        End    Sectors   Size Id Type
    /dev/sde1       4294967295 4820883454  525916160 250,8G  f W95 Ext'd (LBA) #<< First sector of 1st Partition is normally 2048
    /dev/sde5       4831721256 7829571976 2997850721   1,4T 43 unknown #<<< The size only shows 1.4T?????


    For comparison, this is how a big drive should be seen by fdisk. Notice the start and end of the 1st partition and the second:
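    Something along these lines (illustrative numbers only, assuming a ~4TB GPT disk with two partitions; the point is that each partition starts right where the previous one ends, and the first one starts at 2048):

    Code
    Disklabel type: gpt
    Disk identifier: 1C6C2A2F-8E6B-4C3A-9B7D-0F4D2A9E5C11

    Device        Start        End    Sectors  Size Type
    /dev/sde1      2048    1050623    1048576  512M Linux filesystem
    /dev/sde2   1050624 7814035455 7812984832  3,6T Linux filesystem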


    I know that it is possible to repair partition tables with gdisk (a more powerful fdisk for Linux) but the risk of losing DATA is very HIGH.
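    If you do go down that road, a rough sketch of the gdisk workflow (only an illustration; work on a copy/image of the disk if at all possible, and don't write anything you're not sure about):

    Code
    # Read-only first: show the table as gdisk sees it
    gdisk -l /dev/sde

    # Interactive session (dangerous only if you write changes)
    gdisk /dev/sde
    #   v -> verify the disk and report problems
    #   r -> recovery and transformation menu
    #   b -> rebuild the main GPT header from the backup
    #   w -> write the table to disk (point of no return)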


    On a side note, it's a good habit to make a copy of the output of fdisk whenever you create a new drive (a quick sketch is below).

    It has saved me a few times when the partition(s) weren't seen: I just recreated them with the same values and all the DATA was visible again.

    Never tried it on NTFS, though.
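    Something like this is enough (the /dev/sdX name and the file location are just placeholders):

    Code
    # Keep a snapshot of the partition layout every time you set up a new drive
    sudo fdisk -l /dev/sdX > ~/fdisk-sdX-$(date +%F).txt
    # Store the copy somewhere OFF that drive (another disk, a backup, an e-mail to yourself)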


    I hope someone can give you better options.

    Good luck.

    • Official Post

    Worst case scenario, I will start the backup from the beginning again...3 more months of waiting!

    Even if you managed a recovery, could you "trust" the result? I'd start over and this time I'd use EXT4 on the external disk.
    (BTW: EXT4 drives can be read by Windows with a driver or a utility. There are plenty of tutorials on how to do it. Here's -> one. Or Google "Windows and EXT4".)

    If you're trying to back up a RAID array to a single external disk, consider using Rsync with EXT4 on the external disk. After the array is mirrored to the external drive, only changes are replicated, so Rsync is fast.
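    As a sketch of what that looks like (the mount-point paths are placeholders; add -n first to do a dry run that touches nothing):

    Code
    # Mirror the array's mount point to the external disk; later runs only copy what changed
    rsync -av --delete /srv/<raid-mount-point>/ /srv/<external-disk-mount-point>/
    # --delete also removes files from the backup that no longer exist on the array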

  • Even if you managed a recovery, could you "trust" the result?

    You are right. The problem is that some HDs in the RAID are failing and I should hurry up and back them up before they stop working.

    But if the data could actually be lost or damaged, it would be better to start from the beginning.

  • I formatted the USB HD. I'm starting from scratch.

    Problem: Many files are not present in the RAID!!!

    I see that /dev/sdc is out of the RAID; how can I proceed to try to get it back in?

    I tried to do "Restore" but it gave me this error:


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mdadm --manage '/dev/md127' --add /dev/sdc 2>&1' with exit code '1': mdadm: Failed to write metadata to /dev/sdc


    Error #0:

    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mdadm --manage '/dev/md127' --add /dev/sdc 2>&1' with exit code '1': mdadm: Failed to write metadata to /dev/sdc in /usr/share/php/openmediavault/system/process.inc:195
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(419): OMV\System\Process->execute()
    #1 [internal function]: Engined\Rpc\RaidMgmt->add(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('add', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('RaidMgmt', 'add', Array, Array, 1)
    #5 {main}

    • Official Post

    But if the data could actually be lost or damaged, it would be better to start from the beginning.

    I understand you're in a bad situation, but it seems that you can't access the data on the external drive, and that assumes the data is even there. On the other side of the coin, your failing array is not getting any younger or healthier. Where to go from here is your call, but resurrecting the backup (the NTFS-formatted drive) doesn't seem to be going anywhere, you've had help from some of the forum's experts, and time is ticking by.
    _____________________________________________________________________________


    If you decide to use EXT4 on the external drive, formatted and mounted by OMV:


    Take a look at this -> process. (Read through it first.)


    You can use Rsync to back up and, later, to restore your data to a new RAID array. To set up the command line, your source drive will be the RAID array mount point or the device name. (From your screen captures, your RAID device name appears to be /dev/md127.)
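    If you want to double-check the device name and mount point before building the command, something like this should do it (assuming the array really is /dev/md127):

    Code
    cat /proc/mdstat                          # lists md arrays and their member disks
    findmnt /dev/md127                        # shows where OMV mounted the array
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT      # overview of all disks and mount points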


    After the backup, if you redirect shares to the backup external drive, you can get access to your data while working on getting your RAID array healthy again.


    _______________________________________________________________________________

    The second time around, you might consider ZFS or something else that runs a "scrub" on the array. You also might consider turning on file system notifications and SMART testing. Both will give some advance warning of drive issues.
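    SMART tests can also be run by hand if you want a quick look at a suspect disk (this uses smartmontools; /dev/sdd is just an example member of your array):

    Code
    sudo smartctl -t short /dev/sdd     # start a short self-test (takes a couple of minutes)
    sudo smartctl -a /dev/sdd           # review attributes: reallocated/pending sectors, test log

    The GUI equivalent is under Storage > S.M.A.R.T., where tests can also be scheduled.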

  • You could try connecting the "NTFS" formatted external drive to Windows. If Windows can't see the data....

    No, I have now formatted the external drive in EXT4.


    The only hope I have is in /dev/sdc. It shows up in the drives but remains outside the RAID.


    You were already able to patch it once; maybe repeating the same operations will recover the files anyway.

    • Official Post

    If you've reformatted it to EXT4, whatever was in NTFS format before is no longer there.

    If you didn't add the external drive to the RAID array, it will be outside. Mount it and look at the Rsync section of the doc-link.

  • If you didn't add the external drive to the RAID array, it will be outside. Mount it

    I have this situation:

    In the 'Disks' section, I see that there is a /dev/sda and a /dev/sdc:



    In 'File Systems' you can see that this /dev/sda has 111GB in use.

    You can also see that it has an NTFS file system! How this is possible I do not know. I had formatted everything in ext4 when building the RAID, I'm sure! I can't figure it out!

    /dev/sdc is not present.


    In the RAID, /dev/sda is not present and I cannot get it back in. If I click on Recover it only lets me choose /dev/sdc, which does not appear in any of the previous sections, but not /dev/sda.



    If I try to add /dev/sdc to the RAID instead, I get this error:

    • Official Post

    - First, the disks you have listed under Storage, RAID Management are the disks in the array. Any other disks that are NOT listed under Storage, RAID Management are not in the array. The disks in your array appear to be:

    /dev/sdd 931GB

    /dev/sdg 931GB

    /dev/sdh 465GB??

    /dev/sdi 931GB

    /dev/sdj 931GB


    Under Storage, RAID Management, using the Details button, you might consider reviewing the array's information. The information found there will reveal which disk is faulted or failing. Note that disk /dev/sdh is smaller than the remaining members of the array which, while it will work, is a RAID no-no. From appearances, /dev/sdh has limited the available space in the array. You would have more storage space (roughly +2.7TB versus the +2.2TB you have now) if you had left /dev/sdh out of the array when you created it.
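    The command-line equivalent of the Details button, if that is easier to copy into a post (assuming the array is still /dev/md127):

    Code
    sudo mdadm --detail /dev/md127
    # Check "State", "Failed Devices" and the per-disk state column (active/faulty/removed)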

    - Second. You can't add disks to an array (using the grow button) or recover the array (using the recover button) using a disk that has a file system on it. A disk, going into an array, must be "wiped" and clean before it can be used in a RAID array.
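    For reference, wiping old signatures from the command line looks roughly like this; it is destructive, so only ever run it against a disk whose contents you do not need (/dev/sdX is a placeholder):

    Code
    sudo wipefs -a /dev/sdX      # removes filesystem/RAID signatures from the disk
    # In the OMV GUI the same thing is Storage > Disks > select the disk > Wipe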

    - Third. The disk that's formatted NTFS (/dev/sda2 with 1.8TB) was not formatted to NTFS using the OMV GUI. This disk was likely formatted by Windows. In any case, it's not relevant to the questions below.

    /dev/sde 3.64TB, a WD Elements drive formatted to EXT4, appears to be the external drive you're trying to use for backing up the array. It's large enough for the task.

    ____________________________________________________________

    Here's what I consider to be the questions you must answer:
    - Do you want to try to recover the array? If you attempt this and it goes wrong, that's it. There won't be anything left to recover.
    (OR)

    - Do you want to backup the data that remains in the RAID array?

    The answer to the above is your call.
    _____________________________________________________________

    If it were me, I'd back up the failing array to /dev/sde as soon as possible. If you go that route, depending on the version of OMV5 that you're using, the Rsync command line might be l-o-n-g. I checked, using a VM, and successfully backed up a RAID5 array to a single disk, using the "mount point" as it's displayed in File Systems. You'll have to add the mount point column as shown in the -> document I linked to before.

    In my VM example, using OMV 5.6.24-1, the command line was as follows:
    rsync -av /srv/dev-disk-by-uuid-ffbc3d2b-450b-4a4f-8bdb-96e18a641cee/ /srv/dev-disk-by-uuid-ad63a908-dcc3-46fb-8e38-dd3b01f3cfee/
    (The above UUIDs will not work for you.)

    Hopefully your version of OMV is earlier than 5.5.20 so your disk mount points will be "by label". They're much easier to read and copy. If you have OMV 5.5.20 or later, with disks added after upgrading to 5.5.20, you may have to copy much longer unique identifiers into the Rsync command line. In either case, I'd recommend using Notepad so you can carefully copy and paste your mount points into a scheduled task command line.
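    For example, with "by label" mount points the same command becomes much easier to read (the labels here are made up; use whatever File Systems shows for your disks):

    Code
    rsync -av /srv/dev-disk-by-label-RAID/ /srv/dev-disk-by-label-BACKUP/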

  • Here's what I consider to be the questions you must answer:
    - Do you want to try to recover the array? If you attempt this and it goes wrong, that's it. There won't be anything left to recover.
    (OR)

    - Do you want to backup the data that remains in the RAID array?

    The answer to the above is your call.

    Hi.

    Yes, my intention is to back up first. And recover what little is left, less than 50%.

    After that, could one try to recover the array?


    As for the rsync command, I had already tried using it, but it crashes when it finds an incomplete/damaged file.

    I am using "MC" so that I can step in and skip the incomplete/damaged files.

    It will take some time, always hoping that some other adverse event doesn't happen!


    Another thing: all disks apart from the USB one and the OS one were part of the RAID before the disaster!



    Thank you for your valuable help!

  • Hi guys.

    Here I am again.

    I have saved the remaining available data.

    So at the moment those few remaining files are safe.

    Now I would like to ask for one last bit of help in trying to reassemble the whole array and retrieve the other files as well.

    If that doesn't work, patience. But at least we tried!
