Posts by riki77

    You should be able to mount it; the 'play' icon is for mounting an existing filesystem.

    OK yes, I had panicked and couldn't remember how to mount it.
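
    For anyone landing here later: the 'play' button presumably maps to a plain mount from the shell. A minimal sketch, assuming the array is /dev/md0 with an ext4 filesystem on it; the mount point is hypothetical (OMV itself mounts under /srv/dev-disk-by-uuid-...):

    Code
    # /mnt/md0 is a hypothetical mount point for illustration only.
    mkdir -p /mnt/md0
    mount /dev/md0 /mnt/md0
    df -h /mnt/md0     # confirm it mounted and check usage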


    Problem: the device name used to be sdh; now it is sdj.

    So the status of md0 is "read-auto".

    sdj seems not to be in the RAID.
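
    For reference, read-auto means the kernel assembled the array but has not written to it yet; it switches to active on the first write, or it can be flipped manually. A sketch, assuming the array really is /dev/md0:

    Code
    # Current state of all arrays (look for the md0 line).
    cat /proc/mdstat
    mdadm --detail /dev/md0 | grep -i state
    # Switch md0 from read-auto to normal read-write operation.
    mdadm --readwrite /dev/md0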

    Two hours after my last message (since, as I said, the hard drives are new) I went to the machine and unplugged and replugged the connectors on the broken HD. The RAID is working again!

    Now I would like to know how it is possible to constantly have problems with the connectors; I already lost everything once because of them! I have replaced them all, and now I have the same problem again.


    RAID md0 is 'clean', but I do not see it under File Systems (its status is "Missing"). What should I do now to fix it without damaging anything else?

    You're running RAID 0; there is no recovery unless you use recovery software or pay for professional recovery.

    OK, thank you. Fortunately I have backups of most of the files. I'm just missing the last few things I created, but whatever, I can redo them.


    I wonder why I always have problems with hard drives. They are new, too! I would like to investigate the reason for these crashes.

    It looks like you used RAID 0. That is striped, so parts of your files ended up spread across the different drives in the array.

    If the array fails, those files are damaged, so the chances of recovering your array are virtually zero.


    I hope you have a backup.

    Yes, I have a backup of almost everything. I am missing a few things; if they can be recovered, good, otherwise I may lose them.

    But I would like to attempt recovery first before giving up.

    Hello everyone.

    I have version 6.0.46-1 of OMV.

    I have created two RAIDs, one is working, the other is in BROKEN state (md0).

    The information for the RAID that is not working is this:


    Is there a chance to restore it?

    Thank you.


    Code
    root@nas:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid0 num-devices=3 metadata=1.2 name=nas:Anna UUID=31167f13:46400c9b:eba60ea1:875f57a0
       devices=/dev/sdc,/dev/sdd
    ARRAY /dev/md1 level=raid0 num-devices=3 metadata=1.2 name=nas:1 UUID=a3747a6f:6aea4af3:204d8c4f:c818a923
       devices=/dev/sdb,/dev/sde,/dev/sdg
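
    Since the scan reports num-devices=3 for md0 but lists only two members, it may be worth dumping what each remaining member's superblock records; a sketch using the device names from the output above:

    Code
    # Per-member RAID superblock: compare "Raid Devices", "Device Role"
    # and the update times / event counts of each member.
    mdadm --examine /dev/sdc /dev/sdd
    # Kernel's current view of all arrays and their members.
    cat /proc/mdstat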

    You'd need to answer a few questions:

    1. How big is the data store?
    2. If it's large, is it mostly video?

    Right now I have more than 2 TB of recovered data, a mix of videos and photos.

    I produce an average of 4 GB of data per week, always a mix of video and photos.

    The total hard disk capacity available to date is 10 TB.

    A 4 TB USB hard disk is used for backup (once I reach its limit, I will continue the backup on another USB HD).

    OK guys.

    I saved what I could.

    Now I have to redo everything: I will reinstall OMV and reconnect the drives.

    Given what has happened to me, I ask for your advice:

    is it more convenient to create a RAID 5, or to merge all the hard drives and have a larger total capacity available?
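
    For context: RAID 5 on n equal-sized disks gives (n-1) x the disk size and survives one disk failure; with unequal disks it only uses the smallest disk's capacity on each member. Merging (e.g. a union filesystem such as mergerfs, or striping) gives more capacity but no redundancy. OMV can build a RAID 5 from the UI; a hedged shell sketch with placeholder device names would look like this (it erases the listed disks):

    Code
    # WARNING: destroys everything on the listed disks.
    # /dev/sdb, /dev/sdc and /dev/sdd are placeholders for three equal disks.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    cat /proc/mdstat              # watch the initial sync progress
    mkfs.ext4 /dev/md0            # or create the filesystem from the OMV UI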

    Hi guys.

    Here I am again.

    I have saved the remaining available data.

    So at the moment those few remaining files are safe.

    Now I would like to ask for one last bit of help in trying to put the whole array back together and retrieve the other files as well.

    If that doesn't work, so be it. But at least we tried!

    Here's what I consider to be the questions you must answer:
    - Do you want to try to recover the array? If you attempt this and it goes wrong, that's it. There won't be anything left to recover.
    (OR)

    - Do you want to back up the data that remains in the RAID array?

    The answer to the above is your call.

    Hi.

    Yes, my intention is to back up first and recover what little is left, less than 50%.

    After that, could we try to recover the array?


    As for the rsync command, I had already tried using it, but it crashes when it finds an incomplete/damaged file.

    I am using "MC" so that I can step in and skip the incomplete/damaged files.

    It will take some time; let's just hope no other adverse event happens in the meantime!
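
    On the rsync point: as far as I know, rsync normally skips files it cannot read and reports them at the end (exit code 23), and a timeout can stop it blocking forever on a disk that stalls. A sketch with placeholder paths:

    Code
    # -a archive mode, -v verbose.
    # --timeout=60: give up if I/O stalls for 60 seconds (e.g. a disk
    # hanging on a bad sector) instead of blocking forever.
    # Both paths are placeholders for the array and the USB backup disk.
    rsync -av --timeout=60 \
        /srv/dev-disk-by-uuid-SOURCE/ \
        /srv/dev-disk-by-uuid-BACKUP/ \
        2> /root/rsync-errors.log
    # Exit code 23 means it finished but skipped some unreadable files;
    # the skipped names are listed in /root/rsync-errors.log.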


    Another thing: all the disks apart from the USB one and the OS one were part of the RAID before the disaster!



    Thank you for your valuable help!

    If you didn't add the external drive to the RAID array, it will be outside of it. Mount it.

    I have this situation:

    In the 'Disks' section, I see that there is an sda and an sdc:



    In 'File Systems' you can see that sda has 111 GB in use.

    You can also see that it has an NTFS file system! I don't know how that is possible. I had formatted everything as ext4 when building the RAID, I'm sure of it! I can't figure it out!

    sdc is not present.


    In the RAID section sda is not present, and I cannot get it added. If I click on Recover it lets me choose sdc, which does not appear in any of the previous sections, but not sda.



    If I try to add sdc to the RAID instead, I get this error:

    You could try connecting the "NTFS"-formatted external drive to Windows. If Windows can't see the data....

    No, I have now formatted the external drive in EXT4.


    The only hope I have left is sdc. It shows up under Disks but remains outside the RAID.


    You were already able to patch it together once; maybe repeating the same operations will recover the files again.

    I formatted the USB HD. I'm starting from scratch.

    Problem: Many files are not present in the RAID!!!

    I see that sdc is out of the RAID; how can I proceed to try to get it back in?

    I tried "Recover" but it gave me this error:


    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mdadm --manage '/dev/md127' --add /dev/sdc 2>&1' with exit code '1': mdadm: Failed to write metadata to /dev/sdc

    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mdadm --manage '/dev/md127' --add /dev/sdc 2>&1' with exit code '1': mdadm: Failed to write metadata to /dev/sdc in /usr/share/php/openmediavault/system/process.inc:195
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(419): OMV\System\Process->execute()
    #1 [internal function]: Engined\Rpc\RaidMgmt->add(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('add', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('RaidMgmt', 'add', Array, Array, 1)
    #5 {main}
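
    For what it's worth, "mdadm: Failed to write metadata to /dev/sdc" usually means the kernel can no longer write to that disk at all. Before retrying, it may be worth checking the kernel log and the disk's SMART data; a sketch (smartctl comes from the smartmontools package):

    Code
    # Recent kernel messages mentioning sdc (I/O errors, link resets, etc.).
    dmesg | grep -i sdc | tail -n 20
    # Quick SMART health verdict, then the full attribute/error report.
    smartctl -H /dev/sdc
    smartctl -a /dev/sdc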

    Even if you managed a recovery, could you "trust" the result?

    You are right. The problem is that some HDs in the RAID are failing, and I should hurry up and back them up before they stop working.

    But if data can actually be lost or damaged in the attempt, it would be better to start over from the beginning rather than attempting to retrieve it....

    Worst case, I will start the backup from the beginning again... 3 more months of waiting!

    I followed the guide to the end, but I still don't see the HD under File Systems.

    It seems that the disk is sde:


    Code
    root@NAS:~# lsblk
    NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sda         8:0    0   1,8T  0 disk
    ├─sda1      8:1    0    16M  0 part
    └─sda2      8:2    0   1,8T  0 part  /srv/dev-disk-by-uuid-34361084361048EE
    sdb         8:16   0 298,1G  0 disk
    ├─sdb1      8:17   0 297,1G  0 part  /
    ├─sdb2      8:18   0     1K  0 part
    └─sdb5      8:21   0   976M  0 part  [SWAP]
    sdc         8:32   0   1,8T  0 disk
    └─md127     9:127  0   2,3T  0 raid5 /srv/dev-disk-by-uuid-3c50a097-b1fe-4c69-93ae
    sdd         8:48   0 931,5G  0 disk
    └─md127     9:127  0   2,3T  0 raid5 /srv/dev-disk-by-uuid-3c50a097-b1fe-4c69-93ae
    sde         8:64   0   3,7T  0 disk
    └─sde1      8:65   0     1K  0 part
    sdg         8:96   0 931,5G  0 disk
    └─md127     9:127  0   2,3T  0 raid5 /srv/dev-disk-by-uuid-3c50a097-b1fe-4c69-93ae
    sdh         8:112  0 465,8G  0 disk
    └─md127     9:127  0   2,3T  0 raid5 /srv/dev-disk-by-uuid-3c50a097-b1fe-4c69-93ae
    sdi         8:128  0 931,5G  0 disk
    └─md127     9:127  0   2,3T  0 raid5 /srv/dev-disk-by-uuid-3c50a097-b1fe-4c69-93ae
    sdj         8:144  0 931,5G  0 disk
    └─md127     9:127  0   2,3T  0 raid5 /srv/dev-disk-by-uuid-3c50a097-b1fe-4c69-93ae
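
    In that listing sde has only a 1K partition (sde1) and no filesystem on it, which is why nothing appears under File Systems. Assuming sde really is the empty disk and holds nothing you need, a destructive sketch to give it a fresh partition table and an ext4 filesystem:

    Code
    # WARNING: erases /dev/sde completely.
    wipefs -a /dev/sde                    # drop stale partition/RAID signatures
    parted -s /dev/sde mklabel gpt        # new GPT partition table
    parted -s -a optimal /dev/sde mkpart primary ext4 0% 100%
    mkfs.ext4 /dev/sde1                   # create the filesystem
    # Afterwards, mount it from the OMV File Systems page so OMV tracks it.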