howto mount drives without loss of data (NOOB)

    • OMV 4.x
    • Resolved
    • howto mount drives without loss of data (NOOB)

      HI,
      I am a noob when it comes to Linux; I am a Windows guy. But I just like OMV too much, as it has been serving me great for so long. Thanks!!!

      Here is the problem I think I am running into.
      I upgraded from OMV 3.x to 4.x using PuTTY, but this failed somehow. So I decided to do a fresh OMV 4.1 install.
      I have 4 disks: 1 SSD for the OS and 3 HDDs using snapraid.
      Disk 1: the in-use data disk, shared on the network.
      Disk 2: mirror with parity of D1+3.
      Disk 3: mirror with parity of D1+2.

      Currently, the SSD is found and mounted, as the OS is sitting on it.
      The other 3 disks are unmounted.

      lsblk -f gives me:
      NAME FSTYPE LABEL UUID MOUNTPOINT
      sda
      ├─sda1 ext4 f97b0225-c7ab-40b9-b743-b338761fc7b7 /
      ├─sda2
      └─sda5 swap 1bbde266-34be-41c3-b6ca-d2ec5a4ad398 [SWAP]
      sdb zfs_member
      └─sdb1 zfs_member
      sdc zfs_member
      └─sdc1 zfs_member
      sdd zfs_member
      └─sdd1 zfs_member

      What commands do I need to enter to mount the 3 drives, while keeping all data on the drives intact?

      I am too scared to do anything wrong, so I would like an expert guiding me.
      Otherwise I risk losing all the documents, photos and videos of my family. ... Better be safe.

      Hope you can help.
      Thank you very much in advance.


      [edit]
      Used an ISO on a USB stick to install OMV, version "openmediavault_4.1.3-amd64.iso".
      Updated OMV and the repositories multiple times.

    • The giveaway here is that the drives will not mount for the following reason:

      sdb zfs_member
      └─sdb1 zfs_member
      sdc zfs_member
      └─sdc1 zfs_member
      sdd zfs_member
      └─sdd1 zfs_member

      They are part of a zfs pool, which would require the zfs plugin; whilst zfs can be used with snapraid it would make no sense, as each drive would be its own pool!

      There are some on here who use zfs and may be able to help. @hoppel118 @flmaxey
      Raid is not a backup! Would you go skydiving without a parachute?
    • Thanks geaves for your time!! Much appreciated.

      zfs, you say. I saw that as well. But the 'weird' thing is that I am not aware that I was using zfs.
      I will try next: enable the zfs plugin and see what happens. Maybe I'm lucky and things are detected and visible after that.

      Are there any commands I can use to list / try to mount (in a non-wiping manner)?
    • edterbak wrote:

      Thanks geaves for your time!! Much appreciated.

      zfs, you say. I saw that as well. But the 'weird' thing is that I am not aware that I was using zfs.
      I will try next: enable the zfs plugin and see what happens. Maybe I'm lucky and things are detected and visible after that.

      Are there any commands I can use to list / try to mount (in a non-wiping manner)?
      TBH if you're not sure, and I'm definitely not sure, wait for those who know! Install the plugin, update the thread with that information and add ZFS to the title; also add the version of OMV.
      Raid is not a backup! Would you go skydiving without a parachute?
    • edterbak wrote:

      Otherwise I risk losing all the documents, photos and videos of my family. ... Better be safe
      Better do backups in the future!

      For now it seems you are running into a problem with leftover ZFS signatures, which now with Debian 9 prevent the disks from being mounted; see for example here: migration OMV 2.x to omv 4.x

      Since the next steps seem to be dangerous, I would start to do backups now: acquire at least one other large disk and clone each of the existing disks to this new disk before fiddling around.

    • Thank you for the guidance so far. Much appreciated!

      I do feel a bit silly now, going to upgrade without a backup first. I felt safe because of the snapraid, and probably a wrong understanding of how this actually works.
      I guess grabbing one HDD out of the current NAS box and putting it in a Windows machine / other OS to create a backup is out of the question?

      To go forward.
      I have ordered a new 4 TB HDD to sit next to the other HDDs, to back up the stuff. It's coming soon.
      >> Do you have a tip for backing up HDDs completely, suited to this situation? I mean, the drives are not mounted/readable in OMV currently, so I assume backup/clone from within OMV is out of the question. (Is this assumption correct?)
      Maybe you know of a preferred program (on a bootable USB stick, probably).

      Again, the help so far is much appreciated. Thanks.
      /ed
    • edterbak wrote:

      I assume backup/clone from within OMV is out of the question. (Is this assumption correct?)
      Nope. You can clone any block device without needing to mount the partitions on it. You can do this within Linux (OMV) with tools like ddrescue or dcfldd, for example. Or most probably also in Windows using Clonezilla (no idea, I use neither Windows nor Clonezilla).

      The important part, to be safe, is cloning the whole disk and not individual partitions. I would do a web search for 'forensic disk clone' or something like that.
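
      As a rough sketch only (the new disk showing up as /dev/sde and the map file name are assumptions here, so double-check the device names with lsblk before running anything), a whole-disk clone with ddrescue could look like this:

      # triple-check which disk is which before cloning
      lsblk -o NAME,SIZE,MODEL,SERIAL
      # clone the whole source disk to the new disk; -f is required because the
      # destination is a block device, -n skips the scraping phase, and the map
      # file records progress so an interrupted run can be resumed later
      ddrescue -f -n /dev/sdb /dev/sde /root/sdb_clone.map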
    • edterbak wrote:

      I guess grabbing one HDD out of the current NAS box and putting it in a Windows machine / other OS to create a backup is out of the question?
      NNNNNOOOO!!!! Wait for the guys I have tagged.

      edterbak wrote:

      I have ordered a new 4 TB HDD to sit next to the other HDDs, to back up the stuff. It's coming soon.
      Well that's a good start :thumbup:

      ZFS is capable of 'snapshots', which in essence are a backup of your data; you can also set up 'scrubs', which interrogate the integrity of your data (I think), but wait for the tagged guys. I think @flmaxey 'might be' away until the weekend.
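
      Purely as an illustration of what those look like (made-up pool/dataset names, not something to run on your disks right now), snapshots and scrubs go roughly like this:

      # create a read-only snapshot of a dataset ('tank/media' is only an example name)
      zfs snapshot tank/media@before-changes
      # list existing snapshots
      zfs list -t snapshot
      # start a scrub, which re-reads the pool and verifies checksums, then check its status
      zpool scrub tank
      zpool status tank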

      I know you want to get this sorted, but if something goes wrong and sh*t happens, there's no recovery.
      Raid is not a backup! Would you go skydiving without a parachute?
    • edterbak wrote:

      I am not aware that I was using zfs
      Just to be sure: you're not using ZFS right now, you don't want to use it, and your whole issue is that you have a Snapraid setup that became inaccessible after updating to OMV 4, with all your data on three Snapraid disks and no external backup?
    • tkaiser wrote:

      edterbak wrote:

      I am not aware that I was using zfs
      Just to be sure: you're not using ZFS right now, you don't want to use it, and your whole issue is that you have a Snapraid setup that became inaccessible after updating to OMV 4, with all your data on three Snapraid disks and no external backup?
      Correct, I currently am NOT using ZFS. The plugin is currently not even enabled (maybe I'll try that tonight, just enable it, but I don't expect this will have any effect).
      Correct, I don't want to use ZFS in the future.
      Correct, my issue is that I had a running system with 3x Ext4 HDDs + snapraid. The upgrade failed, I don't know exactly where, but I could not connect anymore after half an hour. So I chose a fresh install of the OMV OS >> not accessible now >> no external backup (I know, I'm an idiot...)
    • Here is the requested extra information about my system.
      Release: 4.1.12
      Codename: Arrakis
      Linux NAS 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1 (2018-01-14) x86_64 GNU/Linux

      Below is a big chunk of System information - Reports > copy/paste.
      If this is insufficient, please let me know.

      I will wait until the ordered HDD is in, then try to clone with ddrescue from within OMV.

      If there are harmless commands I can run to give more useful information, or something harmless I can try, please just let me know. :) Regards,

      ================================================================================
      = Linux Software RAID
      ================================================================================
      Not used

      ================================================================================
      = Monit status
      ================================================================================
      Monit 5.20.0 uptime: 20h 43m

      System 'NAS'
      status Running
      monitoring status Monitored
      monitoring mode active
      on reboot start
      load average [0.07] [0.04] [0.01]
      cpu 1.8%us 0.9%sy 0.0%wa
      memory usage 816.1 MB [10.5%]
      swap usage 0 B [0.0%]
      uptime 20h 53m
      boot time Wed, 10 Oct 2018 00:18:25
      data collected Wed, 10 Oct 2018 21:11:43

      Process 'rrdcached'
      status Running
      monitoring status Monitored
      monitoring mode active
      on reboot start
      pid 724
      parent pid 1
      uid 0
      effective uid 0
      gid 0
      uptime 20h 53m
      threads 8
      children 0
      cpu 0.0%
      cpu total 0.0%
      memory 0.1% [4.3 MB]
      memory total 0.1% [4.3 MB]
      data collected Wed, 10 Oct 2018 21:11:43

      Filesystem 'rootfs'
      status Accessible
      monitoring status Monitored
      monitoring mode active
      on reboot start
      permission 755
      uid 0
      gid 0
      filesystem flags 0x1000
      block size 4 kB
      space total 47.1 GB (of which 5.1% is reserved for root user)
      space free for non superuser 40.9 GB [87.0%]
      space free total 43.4 GB [92.1%]
      inodes total 3153920
      inodes free 3057009 [96.9%]
      data collected Wed, 10 Oct 2018 21:11:43



      ================================================================================
      = Block device attributes
      ================================================================================
      /dev/sda1: UUID="f97b0225-c7ab-40b9-b743-b338761fc7b7" TYPE="ext4" PARTUUID="6ebb369b-01"
      /dev/sda5: UUID="1bbde266-34be-41c3-b6ca-d2ec5a4ad398" TYPE="swap" PARTUUID="6ebb369b-05"

      ================================================================================
      = File system disk space usage
      ================================================================================
      Filesystem Type 1024-blocks Used Available Capacity Mounted on
      udev devtmpfs 3973104 0 3973104 0% /dev
      tmpfs tmpfs 798060 9032 789028 2% /run
      /dev/sda1 ext4 49362520 3889512 42935792 9% /
      tmpfs tmpfs 3990284 0 3990284 0% /dev/shm
      tmpfs tmpfs 5120 0 5120 0% /run/lock
      tmpfs tmpfs 3990284 0 3990284 0% /sys/fs/cgroup
      tmpfs tmpfs 3990284 84 3990200 1% /tmp
      overlay overlay 49362520 3889512 42935792 9% /var/lib/docker/overlay2/07d00a0dbc79706362997037bd77862500d3717b66ef8b7e6966055611b24910/merged
      shm tmpfs 65536 0 65536 0% /var/lib/docker/containers/c5453bb6e9551d81510e61df63d4e1db77b66b14b5b0afa36fb3f23bfdf281b8/mounts/shm
      overlay overlay 49362520 3889512 42935792 9% /var/lib/docker/overlay2/c01e7cbba14f3513319013ddde29d6a0358057c195d63c6d6b981c71244477bc/merged
      shm tmpfs 65536 0 65536 0% /var/lib/docker/containers/add284802e639d4162f65b0451070aa96d89bde1dd73fc711dc6e1972d56cd3d/mounts/shm

    • Hi,

      Likely this evening my new HDD will arrive.

      Here are the steps I have in mind. Please reply if something is wrong.
      ssh webclient
      root
      password

      Then:
      ddrescue -n /dev/sdb /dev/sde rescue.log
      The disk I want to back up from is sdb.
      The disk I want to back up to will be sde.

      Q1: When this task is done, how do I check the log to see if it completed correctly? What should I check?
      Q2: In case the next steps go wrong, I need to put the backup from sde back onto sdb. Is the below correct?
      ddrescue -n /dev/sde /dev/sdb rescue.log

      To try and correct things, step 1:
      fsck /dev/sdb   or   fsck -t ext4 /dev/sdb   or   fsck -a /dev/sdb
      reboot
      Then I'm hopeful it will run...

      If not, I am thinking of forcing the mount of sdb to Ext4.
      # mount -f -t ext4 /dev/sdb
      Kindly share your thoughts about the above. :)

    • geaves wrote:

      The giveaway here is that the drives will not mount for the following reason:

      sdb zfs_member
      └─sdb1 zfs_member
      sdc zfs_member
      └─sdc1 zfs_member
      sdd zfs_member
      └─sdd1 zfs_member

      They are part of a zfs pool, which would require the zfs plugin; whilst zfs can be used with snapraid it would make no sense, as each drive would be its own pool!

      There are some on here who use zfs and may be able to help. @hoppel118 @flmaxey
      Hi @geaves, I read the thread from the beginning. But I am sorry! I can't help here, because I have never used snapraid. But you are right; seemingly they are part of a zfs pool.

      tkaiser wrote:

      Well, then it seems you suffer from zfs signatures on your drives, which for whatever reason you need to get rid of now (see the link in the first answer in this thread). I would do a forum search for 'wipefs zfs' and always clone the disk prior to playing with wipefs.

      That is the way I would go. Before doing anything, if I didn't have a proper backup, I would buy some extra disks and clone them.

      Regards Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - tvos | android tv | libreelec | win10 | kodi krypton
      frontend hardware - appletv 4k | nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2
      -------------------------------------------
      backend software - debian | openmediavault | latest backport kernel | zfs raid-z2 | docker | emby | unifi | vdr | tvheadend | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
    • edterbak wrote:

      I have installed the ZFS plugin. Nothing else for now.
      When selecting ZFS, the option [Import Pool] is clickable.

      Is this the first thing to try? (After backup, of course.)
      Absolutely not. Since you don't use ZFS and your data is on Snapraid, you clearly neither want the ZFS plugin installed nor want to let the plugin slaughter your data. Please reread the first answers. You're suffering from ZFS signatures present on your disks for whatever reason, which prevent mounting them now after the Debian upgrade.
    • edterbak wrote:

      I have installed the ZFS plugin. Nothing else for now.
      There is no need; what @tkaiser has stated has occurred after an upgrade, and you're not the first. But this thread does have a solution to remove those signatures; you'll either have to compile util-linux-2.32 or use the system rescue cd. According to @ananas in that thread, the current version of wipefs in omv doesn't remove the signatures (my guess is it removes the start but not the end).

      ZFS creates 2 signatures, one in the first 2 blocks and another in the last 2 blocks; both need to be removed, and each disk will need to be done. As a word of warning, using the wrong switch on wipefs could/will erase your data!
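
      Before touching anything, a dry run of wipefs only lists what it finds without erasing it; roughly like this (adjust the device names, and check the partition as well as the whole disk):

      # -n / --no-act: report the signatures but do not write anything
      wipefs -n /dev/sdb
      wipefs -n /dev/sdb1
      # removing a single signature later targets the offset reported above, e.g.
      # wipefs -o <offset> <device>  -- and only once a verified clone exists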

      I warned about the zfs_member because I had not seen this before and it didn't happen to me during an upgrade.
      Raid is not a backup! Would you go skydiving without a parachute?
    • Hi,

      I have a backup now.

      Did:
      wipefs -n /dev/sdb

      offset type
      -----------------------------------------------
      0x200 gpt [partition table]
      0xe8e0d3f000 zfs_member [filesystem]

      wipefs -n /dev/sdb1

      offset type
      ----------------------------------------------------------------
      0xe8e0c3f000 zfs_member [filesystem]
      0x438 ext4 [filesystem]
      LABEL: sdbwcc4j4726757
      UUID: df805e5e-1311-49b3-aa41-445a83e63e75
      zfs list
      No datasets available


      Does this suggest there IS actually a zfs filesystem with an ext4 inside?? (Sounds insane to me, to be honest...)
      What do you suggest I do next?

      wipefs -o 0xe8e0d3f000 /dev/sdb ?
      or import ZFS pool?
