PSU upgrade resulted in two drives with different UUIDs

  • Hi,


    So I lost all the data. :(


    I've made an OMV install on an SSD with two data drives that were encrypted with LUKS, added to SnapRAID, and combined with MergerFS (following a tutorial).

    As a beginner, I had not made any backups with regard to encryption, and I configured SnapRAID without a parity drive, so in a sense I had accepted my NAS's doom.


    The OMV setup has been placed on the fridge, connected directly to the router via a network cable.

    So far so good, played with containers, made shared folders as per requirements, etc.


    As the setup had a bronze PSU, I have decided to upgrade it to a gold one.


    And so I did.


    The following steps were undertaken to change PSU from bronze to gold:


    1. Shutdown OMV from GUI

    2. Take box from the fridge

    3. Disconnect SATA cables from HDD

    4. Remove and add the Gold PSU

    5. Boot up without the network cable to make changes in the BIOS, as I had also added a cooler with PWM support.

    6. The rest of the settings were left the same (UEFI, boot priorities)

    7. Box back on the fridge


    And the fun has started:


    1. The first connection was refused, both on localhost and over SSH.

    2. Box is back on the table without a network cable.

    3. ACPI errors due to a bug ( non - )

    4. Set acpi=off in GRUB (related: the ACPI error was just there; the fact that I had not added a 1-second delay to mount the encrypted drives was annoying and misled me into believing it was ACPI)

    5. OMV is in emergency mode (what?)

    6. Read the journal and found out that it has a problem mounting the two HDDs

    7. Checked fstab, everything as usual

    8. Ran blkid; wow, the drive UUIDs are different

    9. Changed fstab to reflect the new UUIDs

    10. Booted correctly, but OMV is in read-only mode (officially, I have been gobsmacked)

    11. Investigated the issue on Google and found nothin'

    12. Tried to log in to the GUI: "Failed to connect to socket: Connection refused" (OMV still in read-only)

    13. mount -o remount,rw / followed by touch /forcefsck

    14. Back in the game, was able to login

    15. Went to Storage -> File Systems, I'm informed that they are missing

    16. Selected "Create file system"; I was tired and brain-farted loudly, thinking that if I recreated it with the same volume name, all would be good. Lost all data from the 1st drive.


    I was left wondering why the interface didn't ask: "Are you sure you want to create a new file system on this drive? All data will be erased."


    Instead:

    Select drive, name volume, select file system, OK button.

    The drive was initializing; in despair, I force-rebooted OMV without any luck...


    17. Tried to recover the data from the 2nd drive, encrypted with LUKS; the passphrase is good (I know that because I have it on paper).

    Message: "No key available with this passphrase."

    Read countless articles and forums, simply because I had not made a backup of the LUKS keys :) Lost all data from the 2nd drive.


    18. Omitted that MergerFS needs the new UUIDs as well; updated fstab, and OMV complains "No such file or directory".

    19. OMV still in read-only mode.

    20. To be continued (spoiler alert: reinstall OMV, no data recovery)
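
    Steps 8-9 and 18 above boil down to re-pointing fstab at the UUIDs that blkid now reports. Here is a minimal sketch of that edit, run against a throwaway file so nothing real is touched; both UUIDs and the mount point are made-up placeholders, and on a real box you would work on a backup copy of /etc/fstab with the values your own blkid prints:

```shell
#!/bin/sh
# Sketch: swap an old filesystem UUID for the new one blkid reports.
# Everything here runs on a throwaway file; on a real system you would
# operate on a backup copy of /etc/fstab instead.
OLD_UUID="11111111-2222-3333-4444-555555555555"   # made-up old UUID
NEW_UUID="66666666-7777-8888-9999-000000000000"   # made-up new UUID

FSTAB="$(mktemp)"                                 # stand-in for /etc/fstab
echo "UUID=$OLD_UUID /srv/dev-disk-by-uuid-demo ext4 defaults,nofail 0 2" > "$FSTAB"

cp "$FSTAB" "$FSTAB.bak"                          # keep a rollback copy first
sed -i "s/$OLD_UUID/$NEW_UUID/g" "$FSTAB"         # swap in the new UUID
grep -q "$NEW_UUID" "$FSTAB" && echo "fstab updated"   # prints: fstab updated
```

    Keeping the .bak copy is the point: if the new entry is wrong, you can restore it from emergency mode instead of guessing.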


    So in a sense, I have learned a very valuable lesson: backups are important.
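
    For anyone landing here later: the backup that would have saved the 2nd drive is a LUKS header dump. A hedged sketch of the commands (the device name is a placeholder; run as root against your actual LUKS device, found with blkid, and store the results off the box):

```shell
# /dev/sdb is a placeholder -- find your LUKS devices with: blkid | grep crypto_LUKS
# Dump the LUKS header (it holds the key slots; a correct passphrase is useless
# if the header on disk is ever damaged or overwritten):
cryptsetup luksHeaderBackup /dev/sdb --header-backup-file /root/sdb-luks-header.img

# Keep the files that map devices to mounts alongside it:
cp /etc/fstab /etc/crypttab /root/
```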


    On the other hand, I still don't understand why OMV worked so well and then, after the PSU upgrade, decided to change the UUIDs of the drives, messing everything up.


    Did you guys experience something similar?

    What may be the trigger for the UUID change of the two drives? (maybe all of them)


    My guess is that interchanging the SATA ports made the above soup.

    My second guess is an update, because I have two versions of GRUB at the moment.


    grubs.png


    Either way, I hope this does not happen to someone else.


    Cheers

  • So, I don't know what happened, but after reinstalling OMV and running omv-update before making any configuration in the OMV interface for my setup, it worked for a while, and today the same thing happened.


    The encrypted LUKS drives with MergerFS have their UUIDs changed, and the system goes into emergency mode.


    I made a backup of my fstab at least, but I won't change anything, as last time that messed up the entire system.


    At this point, I'm clueless about what exactly is happening, but what I can say is: I didn't do anything; I just found the system turned off one day.

    The new installation lasted 1 week before this issue reappeared.


    It seems my USB stick was also corrupted, as I had to repair its sectors in order to access the backup files.

    On the bright side, the LUKS-encrypted drives accept my passphrase :)


    Can you please help me get it back and aid me in identifying the root cause?


    Here is a picture of the current system state.

    20210320-080936.jpg


    Here is my fstab:


    Here is my omv configuration file:


    https://pastebin.com/embed_js/vypQGAsU


    And here is the journalctl output from emergency mode:


    https://pastebin.com/embed_js/jkmUA6YL

  • Ok, so it seems I've made some progress..


    The encrypted drives are no longer decrypted automatically.


    Seems weird, because it worked after a fresh installation of OMV.


    I managed to make it work again by opening them with cryptsetup luksOpen /dev/sdb data1, data1 being the mapper name used for this drive.

    Afterwards, running systemctl default seems to get me out of the emergency room. Yay!


    After a reboot omv goes back to emergency mode.


    But what next: why is it not decrypted upon boot? Help?
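
    A guess at what is going on (treat this as an assumption, not a diagnosis): an encrypted drive has two UUIDs, the LUKS container's (what blkid shows for the raw /dev/sdb, TYPE="crypto_LUKS") and the filesystem's inside the opened /dev/mapper/data1 device. If nothing unlocks the container at boot, the mapper device never appears, fstab can't find its UUID, and the system drops to emergency mode, which can look exactly like "the UUIDs changed". Automatic unlocking is normally driven by /etc/crypttab; a sketch of an entry (the UUID is a placeholder for the container UUID your blkid reports):

```
# /etc/crypttab -- placeholder UUID; "none" prompts for the passphrase at boot
# <target>  <source>                                   <key file>  <options>
data1       UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  none        luks
```

    With an entry like this, fstab should then mount /dev/mapper/data1 (or the filesystem UUID inside it) rather than the raw disk.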

  • I'll use this post as a journal.


    So I've decided to follow edogd's short tutorial on unlocking the drives via SSH with Dropbear.

    It is a start; after all, I was wondering what would happen if the entire box were stolen. As I haven't encrypted the Debian partition, it can be accessed somehow if the system is not disassembled.
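
    For reference, the Dropbear route on a Debian-based OMV install is roughly the following (the package name is real, but the authorized_keys path moved between Debian releases, so double-check against the tutorial):

```shell
apt install dropbear-initramfs
# Add your SSH public key for the early-boot environment:
#   Debian 10:  /etc/dropbear-initramfs/authorized_keys
#   Debian 11+: /etc/dropbear/initramfs/authorized_keys
update-initramfs -u
# At boot, SSH into the initramfs and run `cryptroot-unlock`
# to enter the passphrase remotely.
```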
