How to mount drives without loss of data (NOOB)

  • Hi,
    I am a noob when it comes to Linux. I am a Windows guy. But I just like OMV too much, as it has been serving me great for so long. Thanks!!!


    The problem I think I am running into:
    I upgraded from OMV 3.x to 4.x using PuTTY, but this failed somehow. So I decided to do a fresh OMV 4.1 install.
    I have 4 disks: 1 SSD for the OS and 3 HDDs using SnapRAID.
    Disk 1: in-use disk, shared on the network.
    Disk 2: mirror with parity of D1+3
    Disk 3: mirror with parity of D1+2


    Currently, the SSD is found and mounted, as the OS is sitting on it.
    The other 3 disks are unmounted.


    lsblk -f gives me:
    NAME FSTYPE LABEL UUID MOUNTPOINT
    sda
    ├─sda1 ext4 f97b0225-c7ab-40b9-b743-b338761fc7b7 /
    ├─sda2
    └─sda5 swap 1bbde266-34be-41c3-b6ca-d2ec5a4ad398 [SWAP]
    sdb zfs_member
    └─sdb1 zfs_member
    sdc zfs_member
    └─sdc1 zfs_member
    sdd zfs_member
    └─sdd1 zfs_member


    What commands do I need to enter to mount the 3 drives (while keeping all data on the drives intact)?


    I am too scared to do anything wrong, so I would like an expert guiding me.
    Otherwise I risk losing all my documents, photos and videos of my family. ... Better be safe.


    Hope you can help.
    Thank you very much in advance.



    [edit]
    Used an ISO on a USB stick to install OMV ("openmediavault_4.1.3-amd64.iso").
    Updated OMV + repositories multiple times.

    Edited once, last by edterbak () for the following reason: added info. More will follow when available

    • Official Post

    The giveaway here is that the drives will not mount for the following reason:


    sdb zfs_member
    └─sdb1 zfs_member
    sdc zfs_member
    └─sdc1 zfs_member
    sdd zfs_member
    └─sdd1 zfs_member


    They are part of a ZFS pool, which would require the ZFS plugin. Whilst ZFS can be used with SnapRAID, it would make no sense, as each drive would be its own pool!


    There are some on here that use ZFS and may be able to help. @hoppel118 @flmaxey
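
    For reference, if the ZFS utilities were installed, there is a read-only way to check whether these disks really belong to a pool. This is just a sketch; nothing is imported or written, and the device name is only an example:

    Code
    # scan attached disks and list any pools that could be imported, without actually importing them
    zpool import
    # dump the ZFS label (if one is present) from a member partition
    zdb -l /dev/sdb1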

  • Thanks geaves for the time!! Much appreciated.


    ZFS you say. I saw that as well. But the 'weird' thing is that I am not aware that I was ever using ZFS.
    I will try the following next: enable the ZFS plugin and see what happens. Maybe I'm lucky and things are detected and visible after that.


    Are there any commands I can use to list / try to mount the drives (in a non-wiping manner)?

    • Official Post

    Thanks geaves for the time!! Much appreciated.


    ZFS you say. I saw that as well. But the 'weird' thing is that I am not aware that I was ever using ZFS.
    I will try the following next: enable the ZFS plugin and see what happens. Maybe I'm lucky and things are detected and visible after that.


    Are there any commands I can use to list / try to mount the drives (in a non-wiping manner)?

    TBH, if you're not sure (and I'm definitely not sure), wait for those that know! Install the plugin, update the thread with that information, add ZFS to the title, and also add the version of OMV.

  • Otherwise I risk losing all my documents, photos and videos of my family. ... Better be safe

    Better do backups in the future!


    For now it seems you are running into a problem with leftover ZFS signatures, which with Debian 9 now prevent the disks from being mounted; see for example here: migration OMV 2.x to omv 4.x
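
    Those leftover signatures can be listed without changing anything on disk; wipefs run without an erase option only reports what it finds (the device names below are examples):

    Code
    # read-only: show all filesystem/RAID/partition-table signatures and their offsets
    wipefs /dev/sdb
    wipefs /dev/sdb1
    # what the system currently detects on the block devices
    blkid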


    Since the next steps seem to be dangerous, I would start to do backups now: acquire at least one other large disk and clone each of the existing disks to this new disk before fiddling around.

  • Thank you for the guidance so far. Much appreciated!


    I do feel a bit silly now, going for an upgrade without a backup first. I felt safe because of the SnapRAID, and probably a wrong understanding of how this actually works.
    I guess grabbing one HDD out of the current NAS box and putting it in a Windows machine / other OS to create a backup is out of the question?


    To go forward:
    I have ordered a new 4TB HDD to sit next to the other HDDs, to back up the stuff. It's coming soon.
    >> Do you have a tip for backing up HDDs completely, suited for this situation? I mean, the drives are not mounted/readable in OMV currently, so I assume backup/clone from within OMV is out of the question. (Is this assumption correct?)
    Maybe you know of a preferred program (on a bootable USB stick probably).


    Again, the help so far is much appreciated. Thanks.
    /ed

  • I assume backup/clone from within OMV is out of the question. (Is this assumption correct?)

    Nope. You can clone any block device without needing to mount the partitions on it. You can do this within Linux (OMV) with tools like ddrescue or dcfldd, for example. Or most probably also in Windows using Clonezilla (no idea, I use neither Windows nor Clonezilla).


    The important part, to be safe, is cloning the whole disk and not individual partitions. I would do a web search for 'forensic disk clone' or something like that.
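
    A rough sketch of such a whole-disk clone with ddrescue (sdb as the source and sde as the new, empty disk are only examples; double-check the device names with lsblk first, because swapping source and destination destroys the data):

    Code
    # clone the entire source disk to the new disk; the mapfile records progress and any read errors
    # -f is needed because the destination is a block device, not a regular file
    ddrescue -f /dev/sdb /dev/sde /root/sdb-clone.map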

    • Official Post

    I guess grabbing one HDD out of the current NAS box and putting it in a Windows machine / other OS to create a backup is out of the question?

    NNNNNOOOO!!!! Wait for the guys I have tagged.


    I have ordered a new 4TB HDD to sit next to the other HDDs, to back up the stuff. It's coming soon.

    Well that's a good start :thumbup:


    ZFS is capable of 'snapshots', which in essence are a backup of your data; you can also set up 'scrubs', which check the integrity of your data (I think), but wait for the tagged guys. I think @flmaxey 'might be' away until the weekend.
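
    Purely as an illustration of what those look like on a system that actually runs ZFS (pool and dataset names are placeholders, and none of this should be run on these disks):

    Code
    # create a point-in-time snapshot of a dataset
    zfs snapshot tank/data@before-upgrade
    # start a scrub that verifies the checksums of all data in the pool
    zpool scrub tank
    # check scrub progress and results
    zpool status tank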


    I know you want to get this sorted, but if something goes wrong and sh*t happens, there's no recovery.

  • Just to be sure: you're not using ZFS right now, you don't want to use it, and your whole issue is that you have a SnapRAID setup that became inaccessible after updating to OMV 4, with all your data on three SnapRAID disks and no external backup?

    Correct, I currently am NOT using ZFS. The plugin is currently not even enabled (maybe I'll try that tonight, just enable it, but I don't expect this will have any effect).
    Correct, I don't want to use ZFS in future.
    Correct, my issue is that I had a running system with 3x ext4 HDDs + SnapRAID. The upgrade failed, I don't know exactly where, but I could not connect anymore after half an hour. So I chose a fresh install of the OMV OS >> not accessible now > no external backup (I know, I'm an idiot...).

  • Here is the requested extra information about my system.
    Release: 4.1.12
    Codename: Arrakis
    Linux NAS 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1 (2018-01-14) x86_64 GNU/Linux


    Below is a big chunk of System Information - Reports > copy/paste.
    If this is insufficient, please let me know.


    I will wait until the ordered HDD is in, then try to clone with ddrescue from within OMV.


    If there are harmless commands I can run to give more useful information, or something harmless to try, please just let me know. :) Regards,


  • Hi,


    My new HDD will likely arrive this evening.


    Here are the steps I have in mind. Please reply if something is wrong.
    ssh webclient
    root
    password


    Then:
    ddrescue -n /dev/sdb /dev/sde rescue.log
    The disk I want to back up from is sdb.
    The disk I want to back up to will be sde.


    Q1: When this task is done, how do I check the log to see if it completed correctly? What should I check?
    Q2: In case the next steps go wrong, I need to put the backup from sde back onto sdb. Is the below correct?
    ddrescue -n /dev/sde /dev/sdb rescue.log


    To try and correct things, step 1:
    fsck /dev/sdb or fsck -t ext4 /dev/sdb or fsck -a /dev/sdb
    reboot
    Then I'm hopeful it will run.


    If not, I am thinking of forcing the mount of sdb as ext4:
    # mount -f -t ext4 /dev/sdb
    Kindly share your thoughts on the above. :)
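
    For Q1, a rough sketch of how such a run could be checked afterwards, assuming the mapfile name above and that the ddrescuelog helper shipped with GNU ddrescue is available. Note that reusing the same mapfile for a copy in the opposite direction would most likely copy nothing, since ddrescue treats a finished mapfile as work already done; a restore would need its own fresh mapfile. The fsck line is the read-only variant that changes nothing:

    Code
    # summary of the mapfile: 'rescued' should equal the disk size and errsize should be 0
    ddrescuelog -t rescue.log
    # read-only filesystem check of the data partition; -n answers 'no' to every repair prompt
    fsck -n /dev/sdb1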

    Edited once, last by edterbak () for the following reason: changed force mount ext command, think this is correct.

  • Hi @geaves, I read the thread from the beginning. But I am sorry, I can't help here, because I never used SnapRAID. But you are right, seemingly they are part of a ZFS pool.


    Well, then it seems you suffer from ZFS signatures on your drives, which for whatever reason you need to get rid of now (see the link in the first answer in this thread). I would do a forum search for 'wipefs zfs' and always clone the disk prior to playing with wipefs.


    That is the way I would go. Before doing anything, if I didn't have a proper backup, I would buy some extra disks and clone them.


    Regards Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

    • Official Post

    Regards Hoppel

    Thanks anyway. This zfs_member issue has come up before, and after further searching there is a thread that explains how to deal with it. I had the same problem myself whilst adding an additional drive, but for me it was not an issue, as there was no data on the drive.

  • Hi


    First of all, thank you for your attention.


    I have installed the ZFS plugin. Nothing else for now.
    When selecting ZFS, the option [Import Pool] is clickable.


    Is this the first thing to try? (After a backup, of course.)


    regards,


    PS: Action on my side is currently slow. I'm getting married this week. Busy busy. :)

    I have installed the ZFS plugin. Nothing else for now.
    When selecting ZFS, the option [Import Pool] is clickable.


    Is this the first thing to try? (After a backup, of course.)

    Absolutely not. Since you don't use ZFS and your data is on SnapRAID, you clearly neither want the ZFS plugin installed nor want to let the plugin slaughter your data. Please reread the first answers. You're suffering from ZFS signatures present on your disks for whatever reason, which prevent mounting them now after the Debian upgrade.

    • Official Post

    I have installed the ZFS plugin.

    There is no need. What @tkaiser has stated has occurred after an upgrade; you're not the first. But this thread does have a solution to remove those signatures, although you'll either have to compile util-linux 2.32 or use the SystemRescueCd. According to @ananas in that thread, the current version of wipefs in OMV doesn't remove the signatures (my guess is it removes the start but not the end).


    ZFS creates two signatures, one in the first two blocks and another in the last two blocks; both need to be removed, and each disk will need to be done. As a word of warning, using the wrong switch on wipefs could/will erase your data!
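
    As an illustration of the kind of wipefs invocation involved (the offset and device name are placeholders; the exact steps in the linked thread, performed on a cloned/backed-up disk, take precedence):

    Code
    # report-only: list every signature and its offset, nothing is erased
    wipefs /dev/sdb
    # erase only the signature found at one specific offset (placeholder offset shown)
    wipefs -o 0xe8e0d3f000 /dev/sdb
    # NOT this: -a wipes every signature it finds, including the ext4 superblock on a partition
    # wipefs -a /dev/sdb1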


    I warned about the zfs_member because I had not seen this before and it didn't happen to me during an upgrade.

  • Hi,


    I have a backup now.


    Did:
    wipefs -n /dev/sdb

    Code
    offset               type
    -----------------------------------------------
    0x200                gpt   [partition table]
    0xe8e0d3f000         zfs_member   [filesystem]

    wipefs -n /dev/sdb1


    Code
    offset               type
    ----------------------------------------------------------------
    0xe8e0c3f000         zfs_member   [filesystem]
    0x438                ext4   [filesystem]
                         LABEL: sdbwcc4j4726757
                         UUID:  df805e5e-1311-49b3-aa41-445a83e63e75

    zfs list
    No datasets available



    Does this suggest there IS actually a ZFS system with an ext4 filesystem inside?? (Sounds insane to me, to be honest...)
    What do you suggest I do next?


    wipefs -o 0xe8e0d3f000 /dev/sdb ?
    or import ZFS pool?

    Edited once, last by edterbak () for the following reason: added 2nd option
