Raid1 with 1TB of data disappeared!

    • Raid1 with 1TB of data disappeared!

      Hi everyone.
      I don't know how, but after I rebooted my NAS running OMV 3.0.89, the RAID1 that I built was gone!
      This is a huge issue since I have already copied more than 1TB of data onto it!

      How can I recover it?!

      In the Filesystems tab I can still see it, but its status is "Missing", as shown in the attachment.
      In RAID Management it says that I can build a RAID1 with the hard drives that I previously used to build it!


      This is the output of cat /etc/fstab:

      Source Code

      root@Delibird:~# cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sde1 during installation
      UUID=3ea78407-b370-43c7-ae25-290c365a4927 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sde5 during installation
      UUID=462536b4-33a7-439b-8ed3-13e27998acdb none swap sw 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      UUID=8d1d82dc-45af-438d-9c7c-271640aed5b2 /media/8d1d82dc-45af-438d-9c7c-271640aed5b2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      UUID=b070e115-e1c4-4184-9931-a59aa74ba01e /media/b070e115-e1c4-4184-9931-a59aa74ba01e ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-label/Test1 /srv/dev-disk-by-label-Test1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-label/Test2 /srv/dev-disk-by-label-Test2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-id/md-name-Delibird:Telefilm /srv/dev-disk-by-id-md-name-Delibird-Telefilm ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /srv/dev-disk-by-label-Test1:/srv/dev-disk-by-label-Test2 /srv/d4dbecef-0281-42f7-a4ea-92d88e62f9c0 fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=epmfs,minfreespace=4G 0 0
      # <<< [openmediavault]





      This is the output of mount:

      Source Code

      root@Delibird:~# mount
      sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
      proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
      udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=977467,mode=755)
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
      tmpfs on /run type tmpfs (rw,nosuid,relatime,size=1570000k,mode=755)
      /dev/sdf1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
      tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
      tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
      tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
      cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
      pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
      cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
      cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
      cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
      cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
      cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
      cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
      cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
      cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
      cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
      tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=1570000k,mode=755)
      systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=9767)
      mqueue on /dev/mqueue type mqueue (rw,relatime)
      debugfs on /sys/kernel/debug type debugfs (rw,relatime)
      hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
      fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
      tmpfs on /tmp type tmpfs (rw,relatime)
      1:2 on /srv/d4dbecef-0281-42f7-a4ea-92d88e62f9c0 type fuse.mergerfs (rw,relatime,user_id=0,group_id=0,allow_other)
      /dev/sde2 on /srv/dev-disk-by-label-Test2 type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
      /dev/sde1 on /srv/dev-disk-by-label-Test1 type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
      /dev/md127 on /media/8d1d82dc-45af-438d-9c7c-271640aed5b2 type ext4 (rw,noexec,relatime,stripe=256,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
      /dev/sdf1 on /var/folder2ram/var/log type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/log type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/tmp type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/tmp type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/lib/openmediavault/rrd type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/lib/openmediavault/rrd type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/spool type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/spool type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/lib/rrdcached type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/lib/rrdcached type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/lib/monit type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/lib/monit type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/lib/php5 type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/lib/php5 type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/lib/netatalk/CNID type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/lib/netatalk/CNID type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/folder2ram/var/cache/samba type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      folder2ram on /var/cache/samba type tmpfs (rw,nosuid,nodev,noexec,relatime)
      /dev/sdf1 on /var/lib/docker/plugins type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      /dev/sdf1 on /var/lib/docker/overlay2 type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      This is the output of blkid:

      Source Code

      root@Delibird:~# blkid
      /dev/sde1: LABEL="Test1" UUID="e2e97456-a32f-4c7b-82f2-8ba5d8320dc1" TYPE="ext4" PARTLABEL="HDDTest" PARTUUID="c3e5ef33-dd1a-46d5-84bc-d66470473ed3"
      /dev/sde2: LABEL="Test2" UUID="b34f8189-6a4e-4a02-bcf0-1fc73641d055" TYPE="ext4" PARTLABEL="HDDTest" PARTUUID="dfdbab85-4777-416e-8f09-51e55840336b"
      /dev/sda: UUID="ed696fd2-96fe-ba4f-ab44-fb72b800fb01" UUID_SUB="05959c09-ecb2-6cf8-facc-6603333b02f6" LABEL="NAS:Data" TYPE="linux_raid_member"
      /dev/md127: LABEL="Dati" UUID="8d1d82dc-45af-438d-9c7c-271640aed5b2" TYPE="ext4"
      /dev/sdb: UUID="ed696fd2-96fe-ba4f-ab44-fb72b800fb01" UUID_SUB="47bf0e53-a2c5-2b44-1db5-c0e2eadf7300" LABEL="NAS:Data" TYPE="linux_raid_member"
      /dev/sdf1: UUID="3ea78407-b370-43c7-ae25-290c365a4927" TYPE="ext4" PARTUUID="a94754ac-01"
      /dev/sdf5: UUID="462536b4-33a7-439b-8ed3-13e27998acdb" TYPE="swap" PARTUUID="a94754ac-05"
      /dev/sdd: PTUUID="36911ee8-885f-4a0e-8662-a913cd447094" PTTYPE="gpt"
      /dev/sdc: PTUUID="8defa52c-34a0-4c6e-8508-3c922ba3807d" PTTYPE="gpt"
      Finally, this is the output of df -h:


      Source Code

      root@Delibird:~# df -h
      Filesystem Size Used Avail Use% Mounted on
      udev 10M 0 10M 0% /dev
      tmpfs 1.5G 9.2M 1.5G 1% /run
      /dev/sdf1 14G 7.0G 5.7G 56% /
      tmpfs 3.8G 16K 3.8G 1% /dev/shm
      tmpfs 5.0M 0 5.0M 0% /run/lock
      tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
      tmpfs 3.8G 0 3.8G 0% /tmp
      1:2 915G 140M 869G 1% /srv/d4dbecef-0281-42f7-a4ea-92d88e62f9c0
      /dev/sde2 403G 71M 382G 1% /srv/dev-disk-by-label-Test2
      /dev/sde1 513G 70M 487G 1% /srv/dev-disk-by-label-Test1
      /dev/md127 3.6T 2.1T 1.5T 59% /media/8d1d82dc-45af-438d-9c7c-271640aed5b2
      folder2ram 3.8G 342M 3.5G 9% /var/log
      folder2ram 3.8G 0 3.8G 0% /var/tmp
      folder2ram 3.8G 920K 3.8G 1% /var/lib/openmediavault/rrd
      folder2ram 3.8G 1.5M 3.8G 1% /var/spool
      folder2ram 3.8G 9.9M 3.8G 1% /var/lib/rrdcached
      folder2ram 3.8G 8.0K 3.8G 1% /var/lib/monit
      folder2ram 3.8G 4.0K 3.8G 1% /var/lib/php5
      folder2ram 3.8G 0 3.8G 0% /var/lib/netatalk/CNID
      folder2ram 3.8G 488K 3.8G 1% /var/cache/samba



      Please, can someone help me recover the 1TB of data that I lost?
      Images
      • 2017-10-21 19_29_07-openmediavault control panel - Delibird.local.jpg

      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
    • Blabla wrote:

      Please, can someone help me recover the 1TB of data that I lost?
      Really unbelievable. Why do people skip backups and trust in BS like RAID when they're after data protection/safety? RAID-1, besides being the most stupid way to waste a disk on redundancy, is only about availability.

      BTW: Check the contents of /media/8d1d82dc-45af-438d-9c7c-271640aed5b2 before you totally panic.
    • That's the old RAID1 that is currently working.
      The new RAID1 had this path: /srv/dev-disk-by-id-md-name-Delibird-Telefilm and if I try to use "ls" it doesn't show anything.
    • Blabla wrote:

      /srv/dev-disk-by-id-md-name-Delibird-Telefilm and if I try to use "ls" it doesn't show anything.
      Yeah, there's nothing mounted. I've no idea why OMV makes it that easy to create RAID arrays (at least having to accept a dialog once that reads 'I understand that RAID is not backup' should IMO be mandatory), and I've no idea why the 'recovery procedures' don't mention that people should provide log files when something happens.
    • Blabla wrote:

      So if I try to use the "recover" button I may be able to recover everything?
      No idea, I don't use mdraid (and especially the totally useless RAID-1 mode) since there are almost no reasons to do so in 2017.

      Please don't panic, remain calm, and read through the various threads and online resources on how to recover from this. Maybe the folks who support RAID recovery will jump in later... And please, the next time you do something with storage, think about backup first and then about availability (RAID is ONLY about the latter, and average people don't need it, especially at home).
    • tkaiser wrote:

      Really unbelievable. Why do people skip backups and trust in BS like RAID when they're after data protection/safety? RAID-1, besides being the most stupid way to waste a disk on redundancy, is only about availability.
      BTW: Check the contents of /media/8d1d82dc-45af-438d-9c7c-271640aed5b2 before you totally panic.

      Actually I am a noob then, but I think RAID1 is actually my backup: if one disk fails, I just have to replace the corrupted one and everything is fine again.
      I could back up my data to an external USB drive, but then it would take 3 disks and RAID5, wouldn't it?
    • xv1nx wrote:

      Actually I am a noob then, but I think RAID1 is actually my backup: if one disk fails, I just have to replace the corrupted one and everything is fine again.
      Then what is @Blabla talking about?

      There's an easy test to see the difference between redundancy spent on availability (RAID) and a backup: just delete everything, by accident or on purpose. On a RAID-1 you have two empty disks afterwards, so if you did it right and have a backup, it's now time to restore from the backup. If you confused availability with data protection (and did not back up your data), then your data is now gone.

      xv1nx wrote:

      I could back up my data to an external USB drive, but then it would take 3 disks and RAID5, wouldn't it?
      It's your future data loss, not mine. At least I want no part of this RAID-without-backup stupidity, since I've been dealing with failed RAIDs for 20 years now.

      @Blabla: It's too annoying to try to analyze the pretty incomplete data you provided in post #1 (since for whatever reason you did not follow @ryecoaaron's list of things to provide when an mdraid fails: Degraded or missing raid array questions).
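      For reference, that list asks for roughly the following output (the same commands you end up posting further down in this thread):

      Source Code

      cat /proc/mdstat
      blkid
      fdisk -l | grep "Disk "
      cat /etc/mdadm/mdadm.conf
      mdadm --detail --scan --verbose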

      There should be two disk devices (no idea which) that hopefully contain your data. If all that 'RAID-1 is great since it's so easy to recover from failures' babbling is true, it should be sufficient to simply mount the partition on one of the two array members and you have access to your data.
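      Whether that shortcut works depends on where the md metadata lives; a rough, untested sketch (the device name is just a placeholder) would look like this:

      Source Code

      # only works if the member's filesystem starts at sector 0 of the disk,
      # i.e. with md metadata 0.90/1.0 stored at the end of the device;
      # with metadata 1.2 the data sits at an offset and you have to
      # assemble the (degraded) array instead of mounting a member directly
      mount -o ro /dev/sdX /mnt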
    • tkaiser wrote:

      There's an easy test to see the difference between redundancy spent on availability (RAID) and a backup: just delete everything, by accident or on purpose. On a RAID-1 you have two empty disks afterwards, so if you did it right and have a backup, it's now time to restore from the backup. If you confused availability with data protection (and did not back up your data), then your data is now gone.
      I used FreeNAS before, where I only have to save the configuration file to back up the system; the data is "backed up" via RAID1. Sure, if I delete my data I will lose it, but I can live with that.

      But on my new system I don't have enough RAM to use FreeNAS.
      Do you have a guide for manually backing up via rsync every night? I would use one disk as the primary disk and one as the backup disk. Every night I only want to copy new files. Is there a plugin for such a use case?
    • Sorry, I didn't see that post.

      I'll post everything here. The RAID1 that disappeared was made with 2 IronWolf 6TB drives.
      • cat /proc/mdstat

      Source Code

      root@Delibird:~# cat /proc/mdstat
      Personalities : [raid1]
      md127 : active raid1 sdb[0] sda[1]
      3906887360 blocks super 1.2 [2/2] [UU]
      unused devices: <none>




      • blkid

      Source Code

      root@Delibird:~# blkid
      /dev/sda: UUID="ed696fd2-96fe-ba4f-ab44-fb72b800fb01" UUID_SUB="05959c09-ecb2-6cf8-facc-6603333b02f6" LABEL="NAS:Data" TYPE="linux_raid_member"
      /dev/sdb: UUID="ed696fd2-96fe-ba4f-ab44-fb72b800fb01" UUID_SUB="47bf0e53-a2c5-2b44-1db5-c0e2eadf7300" LABEL="NAS:Data" TYPE="linux_raid_member"
      /dev/sde1: LABEL="Test1" UUID="e2e97456-a32f-4c7b-82f2-8ba5d8320dc1" TYPE="ext4" PARTLABEL="HDDTest" PARTUUID="c3e5ef33-dd1a-46d5-84bc-d66470473ed3"
      /dev/sde2: LABEL="Test2" UUID="b34f8189-6a4e-4a02-bcf0-1fc73641d055" TYPE="ext4" PARTLABEL="HDDTest" PARTUUID="dfdbab85-4777-416e-8f09-51e55840336b"
      /dev/md127: LABEL="Dati" UUID="8d1d82dc-45af-438d-9c7c-271640aed5b2" TYPE="ext4"
      /dev/sdf1: UUID="3ea78407-b370-43c7-ae25-290c365a4927" TYPE="ext4" PARTUUID="a94754ac-01"
      /dev/sdf5: UUID="462536b4-33a7-439b-8ed3-13e27998acdb" TYPE="swap" PARTUUID="a94754ac-05"
      /dev/sdc: PTUUID="8defa52c-34a0-4c6e-8508-3c922ba3807d" PTTYPE="gpt"
      /dev/sdd: PTUUID="36911ee8-885f-4a0e-8662-a913cd447094" PTTYPE="gpt"


      • fdisk -l | grep "Disk "

      Source Code

      root@Delibird:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
      Disk identifier: 8DEFA52C-34A0-4C6E-8508-3C922BA3807D
      Disk /dev/sdd: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
      Disk identifier: 36911EE8-885F-4A0E-8662-A913CD447094
      Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk identifier: 62DA6565-E8B7-4E50-AF25-0DA69B3B5CB4
      Disk /dev/md127: 3.7 TiB, 4000652656640 bytes, 7813774720 sectors
      Disk /dev/sdf: 14.3 GiB, 15376000000 bytes, 30031250 sectors
      Disk identifier: 0xa94754ac
      • cat /etc/mdadm/mdadm.conf

      Source Code

      root@Delibird:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md127 metadata=1.2 name=NAS:Data UUID=ed696fd2:96feba4f:ab44fb72:b800fb01
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR pestotosto@outlook.com
      • mdadm --detail --scan --verbose

      Source Code

      MAILFROM root
      root@Delibird:~# mdadm --detail --scan --verbose
      ARRAY /dev/md127 level=raid1 num-devices=2 metadata=1.2 name=NAS:Data UUID=ed696fd2:96feba4f:ab44fb72:b800fb01
      devices=/dev/sda,/dev/sdb
    • Well, there's no definition for a second RAID-1, but since there are /dev/sdc and /dev/sdd, maybe the data is there (check 'cat /proc/partitions').
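      For example (just a quick sketch; /dev/sdc and /dev/sdd are the two empty-looking 6TB disks from your blkid output):

      Source Code

      cat /proc/partitions                # do sdc/sdd expose any partitions at all?
      mdadm --examine /dev/sdc /dev/sdd   # is there still an md superblock on either disk?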

      Maybe others familiar with RAID-1 recovery will jump in and guide you; I have no experience with mdraid here since, for me, classic/anachronistic RAID-1 is just a stupid waste of disks. In case no one tries to help you --> Google. And please start to reconsider your storage 'strategy' that focuses on useless availability (RAID) and set up a working backup!

      PS: 'Working' backup is defined by 'restore works' -- this needs testing, testing and testing.

      PPS: IIRC you run off a USB thumb drive. If that's the case (right?) you might have bought a counterfeit thumb drive that reports a fake capacity (e.g. 16GB to the OS while having only 2GB). The typical symptom of such crap drives is that they discard every write that happens after the total amount of written data has exceeded their real capacity. So you set up your RAID, change $something, and after a reboot all the latest changes, including the RAID you defined, are gone.

      The forum here is full of these symptoms (e.g. here), but usually users prefer to blame software for their hardware issues and don't believe that fake flash media exists (while it's a MASSIVE problem).
    • Didn't think about a USB problem. Tonight I'll test it!
      The weird thing is that if I boot into a GParted live session, it says the hard drives have no partitions.

    • Blabla wrote:

      Didn't think about a USB problem. Tonight I'll test it!
      Well, I would wait until you get access to your data, since testing your USB thumb drive for fake capacity means wiping out the installation on it. Though you can install the f3 tool now (apt install f3) and then simply run:

      Source Code

      f3write /usr/local
      f3read /usr/local
      The fake flash issue is real and very common, and the symptoms look exactly like yours. Many USB thumb drives and SD cards report a much larger size to the OS than they can actually handle, and after some point (amount of data written) every write goes to /dev/null instead of to the flash. Almost everything seems fine as long as the system is running (since Linux keeps the recently written stuff in its filesystem caches and keeps filesystem structures in RAM), but once the fs cache is flushed and the fs structures have to be read from disk again (reboot), everything is gone.

      Blabla wrote:

      The weird thing is that if I boot into a GParted live session, it says the hard drives have no partitions.
      Well, as already said, I'm no mdraid expert since we rarely use it (the only exception is RAID10 with just two disks and far layout; that's the only mode where mdraid IMO still makes sense, everything else should be avoided since it's 2017 and not 2007 anymore).

      I would consider shutting your box down, removing one of the drives to keep in a safe place, and then trying to recover as from an array 'made on another host' (since all the mdraid metadata is lost, it's essentially that). Something like this, for example, might be sufficient: unix.stackexchange.com/questio…mdadm-raid-1-on-another-m
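      A very rough sketch of that route (only a sketch, assuming an md superblock can still be found on the remaining member; /dev/sdX is a placeholder, and everything read-only first):

      Source Code

      mdadm --examine /dev/sdX                                # does an md superblock survive on the member?
      mdadm --assemble --run --readonly /dev/md126 /dev/sdX   # try to start the array degraded, read-only
      mount -o ro /dev/md126 /mnt                             # then check whether the data is there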
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • xv1nx wrote:

      Do you have a guide for manually backing up via rsync every night? I would use one disk as the primary disk and one as the backup disk. Every night I only want to copy new files. Is there a plugin for such a use case?
      This is hijacking the thread a bit, so if you are not happy with the answer, please open a new thread.
      If you just want to run an rsync, you can create an rsync job in OMV.
      If you want snapshots, you should install the rsnapshot plug-in; with rsnapshot you can keep previous versions of your files.
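      If you prefer doing it by hand, the command-line equivalent of such a nightly job is roughly this (the paths are only examples, adjust them to your own mount points):

      Source Code

      # /etc/cron.d/nightly-backup -- run every night at 03:00 as root
      # copies new/changed files from the primary disk to the backup disk
      0 3 * * * root rsync -a /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-backup/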
      BananaPi - armbian - OMV4.x | Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - 1x Intenso SSD 120GB - OMV3.x 64bit