Replace a dead data disk in a SnapRAID + Union File System setup

  • An apology first: English is not my first language, so please forgive my spelling and grammar.


    I have been searching the forum and Google for the last two weeks but can't find a solution. I hope the knowledgeable people here can help me out.


    I had 4 data disks and 1 parity disk running SnapRAID with a Union File System. Unfortunately, one of the data disks is damaged (no response at all). I put in a new disk and ran snapraid fix, but it returned the error below:


    Code
    Self test...
    Loading state from /srv/dev-disk-by-label-SDB/snapraid.content...
    Decoding error in '/srv/dev-disk-by-label-SDB/snapraid.content' at offset 187
    The file CRC is correct!
    Disk 'SDD' with uuid '2424ce9a-778b-4733-a606-724f97ea63e9' not present in the configuration file!
    If you have removed it from the configuration file, please restore it

    I tried editing config.xml, but nothing changed. I also tried Clonezilla with the old data disk, but Clonezilla reported that the old disk has no partition and could not clone it.


    Could anyone kindly point me in the right direction?



    Many thanks!

  • When I used this setup I bookmarked this page just in case, but you have the added problem that new drives are identified by UUID rather than label. Hence the error that the drive is not present in the config, which I think refers to the content file.
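    To see the UUID and label side by side, blkid works on any block device. A minimal sketch, run against a throwaway image file so it needs no root and touches no real disk; the label YBPC and all paths are examples only:

```shell
# Demonstrate UUID vs. label on a scratch ext4 image (no real disk touched).
export PATH="$PATH:/sbin:/usr/sbin"        # mkfs.ext4/blkid often live in sbin
img=$(mktemp /tmp/demo.XXXXXX.img)
truncate -s 16M "$img"
mkfs.ext4 -q -F -L YBPC "$img"             # the label is chosen by us at mkfs time
label=$(blkid -p -o value -s LABEL "$img") # the label we set
uuid=$(blkid -p -o value -s UUID "$img")   # the UUID mkfs generated on its own
echo "LABEL=$label UUID=$uuid"
rm -f "$img"
```

    On a real system you would simply run `sudo blkid` and compare the UUID/LABEL values against what the content file expects.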

    Raid is not a backup! Would you go skydiving without a parachute?

  • Is there any way I can label all the disks now?

    When I used mergerfs and snapraid, the mergerfs drives were named/labelled disk1, disk2 and disk3; in SnapRAID they were named data1, data2 and data3, and the fourth drive was simply named parity. This was set in each of the plugins when first set up.

    I also tried reinstalling OMV and setting up a new SnapRAID array.

    Unless you used the exact same names/labels as in your original install, the content file on the parity drive is going to be wrong.


  • Change label of disk


    "If you remove the shared folders on the disk and unmount it in the web interface, you could re-label it and mount it again in the web interface if you really want to change it."


    I read this in the above link, but I did label the drive when creating the new ext4 filesystem within File Systems.


    Will SnapRAID recognize the old label when I remount the old disks in File Systems, or do I need to unmount and remount them again?
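    On the labelling question: an ext4 label can be read and changed in place with e2label (part of e2fsprogs), without reformatting, as long as the filesystem is unmounted. On a real disk this would be e.g. `sudo e2label /dev/sda1 YBPC`; the sketch below uses a scratch image file instead, and YBPC is just an example label:

```shell
# Relabel an ext4 filesystem in place (demo on an image file, no root needed).
export PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp /tmp/label.XXXXXX.img)
truncate -s 16M "$img"
mkfs.ext4 -q -F "$img"          # freshly made, no label yet
e2label "$img" YBPC             # set the label; real hardware: sudo e2label /dev/sda1 YBPC
newlabel=$(e2label "$img")      # with no second argument, e2label prints the current label
echo "$newlabel"
rm -f "$img"
```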


    Thanks

  • I read this in the above link, but I did label the drive when creating the new ext4 filesystem within File Systems

    This has nothing to do with the file system; this is about how each drive was named when it was added to the plugin.


  • I uploaded images of my Disk and SnapRAID settings.

    One thing that jumps out straight away: in the Union File System, what options do you have set?


    OK, but to get back to the link I posted and the guide: you are using the name of the drive in the SnapRAID image you've posted.


  • Wingclai

    Please re-read the restoration process carefully and try it again, paying very close attention to detail.
    This time note that it's very important to get the replacement drive label (the name), to be exactly like the drive it's replacing.


    If the restore does not work, hopefully you have a backup. Apparently something has gone wrong in your restoration process. This may happen on rare occasions, which is why backup is important. Please note that there's very little anyone can do to help you fix an unexplained error.

  • Finally I managed to solve this. I'm posting it here in the hope it can assist others who come across this situation, where one of the data disks is dead and cloning/copying the data to the new disk is impossible.


    The process is as follows:


    1. Make an Ubuntu bootable usb

    2. Run Ubuntu from the usb

    3. Open Terminal in Ubuntu desktop

    4. Find the /dev/sdX device for the new disk (X is the device letter; in my case sda)

    Code
    sudo blkid

    5. Unmount the disk with command

    Code
    sudo umount -v /dev/sda1

    6. Change the UUID of the new disk to '2424ce9a-778b-4733-a606-724f97ea63e9' (the same UUID as the damaged disk) with the command

    Code
    sudo tune2fs -U {uuid} /dev/{device}

    7. Make a new directory with the command

    Code
    sudo mkdir /new

    8. Then mount the disk

    Code
    sudo mount /dev/sda1 /new

    9. Exit Ubuntu

    10. Reboot OMV and mount the new disk in Storage, File Systems with the same label as the damaged disk (in my case YBPC)

    11. Add the new disk to replace the damaged one in Storage, Union File Systems

    12. Also add the new disk in Services, Snapraid, Drives

    13. Run the command in a shell to restore the files to the new disk

    Code
    snapraid fix -d {Disk label} -l logfile.txt
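    For reference, this is the kind of snapraid.conf the fix relies on: the disk name in the `data` line, and the content/parity paths, must be exactly what the content file recorded, while the underlying device is the one whose UUID and label were just restored. All names and paths below are hypothetical:

```
# Hypothetical snapraid.conf fragment -- names/paths must match the original install
parity /srv/dev-disk-by-label-Parity/snapraid.parity
content /srv/dev-disk-by-label-SDB/snapraid.content
data SDD /srv/dev-disk-by-label-YBPC/
```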
  • 6. Change the UUID of the new disk to the same UUID as the damaged disk

    Appreciate the summary.

    Open questions:

    - For step 4, the command is lsusb or lspci, right?

    - What is the command to achieve step 6?
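    Both questions map onto commands already in the summary: step 4 uses blkid (lsusb and lspci list USB/PCI devices, not filesystem UUIDs), and step 6 uses tune2fs -U. A sketch of both on a scratch image file, using the UUID from this thread; on real hardware the target would be the unmounted partition (e.g. /dev/sda1), run with sudo:

```shell
# Steps 4 and 6 of the summary, demonstrated on an image file (no root, no real disk).
export PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp /tmp/uuid.XXXXXX.img)
truncate -s 16M "$img"
mkfs.ext4 -q -F "$img"
# Step 4: identify the filesystem and its current (auto-generated) UUID
old_uuid=$(blkid -p -o value -s UUID "$img")
# Step 6: stamp the damaged disk's UUID onto the replacement filesystem
tune2fs -U 2424ce9a-778b-4733-a606-724f97ea63e9 "$img"
new_uuid=$(blkid -p -o value -s UUID "$img")
echo "$old_uuid -> $new_uuid"
rm -f "$img"
```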

    omv 5.5.23-1 (usul) on RPi4 with Kernel 5.10.x and WittyPi 3 RTC HAT

    2x 6TB HDD formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here
