file system missing

    • file system missing

      My four HDDs were connected through a faulty SATA power splitter. Everything was working fine until I restarted the box today and the power to two of the HDDs failed; when I first checked the web interface it showed only two HDDs, and later none.

      I fixed the power issue, but once the system came back up there are no RAIDs to display in RAID Management, and the file system tab shows n/a.

      From another thread I tried the following commands, which didn't help either.

      Source Code

      # rm -f /etc/monit/conf.d/*
      # omv-mkconf monit
      # service monit restart



      This is the output of cat /etc/fstab :

      Source Code

      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=e6c1d965-45a7-4d78-82f9-1892174952cc / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=4d396a6c-e23a-4ce8-b319-3d007a83fa35 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      UUID=9897dede-f61a-438b-90bf-868dc6577bd7 /media/9897dede-f61a-438b-90bf-868dc6577bd7 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      # <<< [openmediavault]

      This is the output of cat /etc/mtab :


      Source Code

      cat: /etc/mtab: No such file or directory
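      (Side note: on the Debian Jessie base that OMV 3.x runs on, /etc/mtab is normally just a symlink to /proc/self/mounts, so the same information should still be readable straight from procfs. For example:)

      Source Code

      # /etc/mtab is usually a symlink on systemd-based Debian; read the kernel's mount table directly
      cat /proc/self/mounts

      # If only the symlink is missing, it can be recreated (optional)
      ln -s /proc/self/mounts /etc/mtab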


      This is the output of blkid :


      Source Code

      /dev/sdc: UUID="12505788-fecf-a058-19b9-265d0b406889" UUID_SUB="68e2d1d1-dde4-133d-656c-7478577d409e" LABEL="JsMediaVault:MediaRaid" TYPE="linux_raid_member"
      /dev/sda: UUID="12505788-fecf-a058-19b9-265d0b406889" UUID_SUB="10cfbb8a-a494-4d3c-29c2-838c15cdd297" LABEL="JsMediaVault:MediaRaid" TYPE="linux_raid_member"
      /dev/sdd: UUID="12505788-fecf-a058-19b9-265d0b406889" UUID_SUB="cbfe5fc0-6a29-8df0-f7f3-2a6bbba7193b" LABEL="JsMediaVault:MediaRaid" TYPE="linux_raid_member"
      /dev/sde1: UUID="e6c1d965-45a7-4d78-82f9-1892174952cc" TYPE="ext4" PARTUUID="9e89de26-01"
      /dev/sde5: UUID="4d396a6c-e23a-4ce8-b319-3d007a83fa35" TYPE="swap" PARTUUID="9e89de26-05"
      /dev/sdb: UUID="12505788-fecf-a058-19b9-265d0b406889" UUID_SUB="30e79b55-c5de-d8a3-5213-6ddce7bb988d" LABEL="JsMediaVault:MediaRaid" TYPE="linux_raid_member"


      The data is really important. :(
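      For completeness, the usual read-only checks at this point would be something like the following - a sketch only, with device names taken from the blkid output above; nothing here writes to the disks:

      Source Code

      # What md arrays, if any, does the kernel currently know about?
      cat /proc/mdstat

      # Read the RAID superblock on each member disk (read-only)
      mdadm --examine /dev/sd[abcd]

      # Report any arrays mdadm can identify from those superblocks
      mdadm --examine --scan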
    • RAID again...

      If a single drive crashes, there are utilities out there that can recover files, and some of them are free. But recovery from drives in a RAID array? Not so much.

      Please say that you have backup.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.88 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • The HDDs are not even being shown when trying to create another RAID.

      I had turned on the Flash Memory plugin briefly before the restart.

      And no, I really don't have a backup.
      Images
        • Disk.png
        • Raid.png
        • FileSystem.png
        • FileSystem1.png
        • FlashMemory.png
    • The first thing I would do is try to determine if the NAS itself is still healthy. (Otherwise, testing drives is pointless.)
      The PC you're using as a NAS may have other unknown problems created by the power supply.

      You're going to need to burn a few diagnostic tools - live CDs. Go here, do a quick read, and make a choice: 5 Rescue CD's. Since it's a bit more geared to what happened to you, I lean toward "The Ultimate Boot CD", but there are plenty out there and you can burn more than one.

      If a rescue CD indicates there are problems in the NAS (it fails memory tests, CPU tests, or other tests - be thorough):
      I'd install one of the drives in another PC temporarily (you just need a SATA connection and a power connection; it doesn't have to be permanently installed. Take the side of a case off, disconnect the internal drive, and hook it up using the existing cables.)

      Boot up on a live Gparted CD (you can get a live CD here -> Gparted).
      (Alternatively, you could go to the drive manufacturer's web site. Some have live CD diagnostic tools.)

      If the partitions still exist, you should see them. If they don't, I'd check my cable connections to be sure, but your drives may be toast.
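      Another quick, read-only check that can be run from a live CD or from the NAS itself is the drives' SMART data. A minimal sketch, assuming smartmontools is installed and /dev/sda is one of the suspect drives:

      Source Code

      # Overall health verdict (PASSED / FAILED)
      smartctl -H /dev/sda

      # Full SMART attributes and the drive's error log
      smartctl -a /dev/sda

      # Optionally start the drive's built-in short self-test (check results with -a later)
      smartctl -t short /dev/sda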

      In any case, you need to prepare yourself for the idea that your drives may be dead, and your NAS PC may be mortally wounded as well.
      _______________________________

      Lastly, as you're now aware, RAID is not backup.

      Please read and heed (in the future) the very next line of my signature below.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.88 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119


    • I will follow up with the Live CD.

      I tried a few commands and these were the outputs:

      fdisk -l :

      Source Code

      The primary GPT table is corrupt, but the backup appears OK, so that will be used.
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 9AF77657-D898-423E-A3B9-40AFC1BEB1D2
      Device Start End Sectors Size Type
      /dev/sdb1 34 262177 262144 128M Microsoft reserved
      /dev/sdb2 264192 5860532223 5860268032 2.7T Microsoft basic data
      Partition 2 does not start on physical sector boundary.
      Disk /dev/sde: 465.8 GiB, 500107862016 bytes, 976773168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x9e89de26
      Device Boot Start End Sectors Size Id Type
      /dev/sde1 * 2048 943228927 943226880 449.8G 83 Linux
      /dev/sde2 943230974 976771071 33540098 16G 5 Extended
      /dev/sde5 943230976 976771071 33540096 16G 82 Linux swap / Solaris
      Partition 3 does not start on physical sector boundary.
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      mount :




      Source Code

      sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
      proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
      udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=1010388,mode=755)
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
      tmpfs on /run type tmpfs (rw,nosuid,relatime,size=1635388k,mode=755)
      /dev/sde1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
      securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
      tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
      tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
      tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
      cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
      pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
      cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
      cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
      cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
      cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
      cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
      cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
      cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
      cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
      cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
      tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=1635388k,mode=755)
      systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=11734)
      hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
      debugfs on /sys/kernel/debug type debugfs (rw,relatime)
      fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
      mqueue on /dev/mqueue type mqueue (rw,relatime)
      tmpfs on /tmp type tmpfs (rw,relatime)
      rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
      binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
    • jobinjv wrote:

      The HDDs are not even being shown when trying to create another RAID.

      I had turned on the Flash Memory plugin briefly before the restart.

      And no, I really don't have a backup.
      I looked at the screen captures from this post. The fact that the hard drives show up at all, under Physical drives, is promising. However, you'd have to wipe the drives, in Physical drives, before creating a new file system or (heaven forbid) another array. If you do that - wipe the disks - any chance of data recovery is out the window.

      First, test your NAS "extensively". Again, if something is wrong with it, looking at the hard drives is a waste of time. (And don't dismiss the idea arbitrarily, thinking that your NAS PC is OK. Some of the voltages that go to your drives also go to the MOBO.)

      If the NAS tests OK, and you want to take a shot at reconstituting the array, this is a good guide to do it. -> Linux RAID Recovery
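      Without repeating that whole guide, a non-destructive attempt generally follows the shape below. Treat it as an outline only - the device names are taken from the blkid output earlier in this thread, and --force is an absolute last resort:

      Source Code

      # 1. Inspect the superblocks first; note the Array UUID, the state and the event counts
      mdadm --examine /dev/sd[abcd]

      # 2. Try a normal assemble from what mdadm finds on its own
      mdadm --assemble --scan --verbose

      # 3. Only if that refuses, assemble the named members explicitly
      mdadm --assemble /dev/md127 /dev/sd[abcd] --verbose

      # 4. --force (accepting slightly out-of-date members) is the very last step:
      # mdadm --assemble /dev/md127 /dev/sd[abcd] --verbose --force

      # 5. Check the result before mounting anything
      cat /proc/mdstat
      mdadm --detail /dev/md127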
      _______________________________________________________

      If you manage to get your array operating again, give serious thought to getting your data off of it and setting up real backup.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.88 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • I was away from home with only remote access to my home PC, so I kept trying out commands from other similar threads.

      fdisk -l :

      Source Code

      The primary GPT table is corrupt, but the backup appears OK, so that will be used.
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 9AF77657-D898-423E-A3B9-40AFC1BEB1D2
      Device Start End Sectors Size Type
      /dev/sdb1 34 262177 262144 128M Microsoft reserved
      /dev/sdb2 264192 5860532223 5860268032 2.7T Microsoft basic data
      Partition 2 does not start on physical sector boundary.
      Disk /dev/sde: 465.8 GiB, 500107862016 bytes, 976773168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x9e89de26
      Device Boot Start End Sectors Size Id Type
      /dev/sde1 * 2048 943228927 943226880 449.8G 83 Linux
      /dev/sde2 943230974 976771071 33540098 16G 5 Extended
      /dev/sde5 943230976 976771071 33540096 16G 82 Linux swap / Solaris
      Partition 3 does not start on physical sector boundary.
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      That led me to more searching, and finally to issuing this:

      mdadm --assemble /dev/md127 /dev/sd[abcd] --verbose --force

      Well, this brought the RAID up, with output saying that only two HDDs are functioning. Anyway, I'm backing up the data and will try another fresh install, or maybe another distro. It seems the help here is usually sparse or selective.
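      For anyone else landing in the same state: once the degraded array is up, a cautious way to get the data off before reinstalling is to mount it read-only and rsync it out. A sketch only - the mount point and the destination path are just examples:

      Source Code

      # Confirm the state of the assembled (degraded) array
      mdadm --detail /dev/md127
      cat /proc/mdstat

      # Mount read-only so nothing further is written to the surviving members
      mkdir -p /mnt/raid
      mount -o ro /dev/md127 /mnt/raid

      # Copy everything to another disk or machine (destination is an example path)
      rsync -avh --progress /mnt/raid/ /media/backupdisk/raid-rescue/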
    • flmaxey wrote:

      jobinjv wrote:

      The HDDs are not even being shown when trying to create another RAID.

      I had turned on the Flash Memory plugin briefly before the restart.

      And no, I really don't have a backup.
      I looked at the screen captures from this post. The fact that the hard drives show up at all, under Physical drives, is promising. However, you'd have to wipe the drives, in Physical drives, before creating a new file system or (heaven forbid) another array. If you do that - wipe the disks - any chance of data recovery is out the window.
      First, test your NAS "extensively". Again, if something is wrong with it, looking at the hard drives is a waste of time. (And don't dismiss the idea arbitrarily, thinking that your NAS PC is OK. Some of the voltages that go to your drives also go to the MOBO.)

      If the NAS tests OK, and you want to take a shot at reconstituting the array, this is a good guide to do it. -> Linux RAID Recovery
      _______________________________________________________

      If you manage to get your array operating again, give serious thought to getting your data off of it and setting up real backup.
      Thank you for your help.
    • jobinjv wrote:

      Thank you for your help.

      I've been away myself for the last few days. (Unfortunately, there's no Internet where I was. Yeah, it's "that" remote.)

      BTW: Support for RAID issues is sparse "everywhere". When an array goes south, the news is usually not good and troubleshooting tends to go down the "RAID rabbit hole". Recovery, if it's possible at all, must be done in a specific sequence that can easily be botched. Few, if any, want to touch it or be the bearer of bad news.

      In any case, I hope you got your data off of the array without loss.
      _____________________________________________________________

      Seriously, give some thought to a new approach that includes full backup. If you look at my signature (below), you'll see how I do a complete server backup, and note that it doesn't have to be expensive. I'm using a Raspberry PI (with OMV) and rsync'ing the main server's data folders to a 4TB USB drive on the R-PI. You could just as easily retask an old PC to do the same thing by adding a big drive to it, or two of the drives from your RAID array.
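      In rough terms, that kind of backup amounts to a single rsync over SSH run on a schedule - something along these lines, where the host name and paths are just examples:

      Source Code

      # Mirror the main server's data folder to the backup box (an R-PI or a retasked PC)
      # -a preserves permissions/times, --delete keeps it a true mirror, -z compresses over the wire
      rsync -az --delete --progress /media/data/ backupuser@backup-pi:/media/usb4tb/data-backup/

      # Run it nightly from cron, e.g. at 02:30:
      # 30 2 * * * root rsync -az --delete /media/data/ backupuser@backup-pi:/media/usb4tb/data-backup/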

      It seems many folks want a single mount point made up of multiple drives. While RAID does that, there are other methods of doing it as well. LVM2 and UnionFS will do the same thing without all but eliminating the available tools for recovering data.
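      As an illustration, pooling two blank disks into one mount point with LVM2 only takes a handful of commands - a sketch only, with example device names, and note that it wipes whatever is on those disks:

      Source Code

      # Turn two blank example disks into LVM physical volumes (destroys their contents)
      pvcreate /dev/sdb /dev/sdc

      # Group them into one volume group, then carve a single logical volume out of all the space
      vgcreate datapool /dev/sdb /dev/sdc
      lvcreate -l 100%FREE -n data datapool

      # Put a file system on it and mount it as one big volume
      mkfs.ext4 /dev/datapool/data
      mkdir -p /media/datapool
      mount /dev/datapool/data /media/datapool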

      (Frankly, I don't understand why people want to consolidate drives into a single mount point. While it may help somewhat with administration, things can get really complicated when there's a failure. When a folder is shared to the network, network clients have no idea which of the server's physical drives the share is on. In practical terms, it doesn't matter.)
      ______________________________________________________________

      So what was the final outcome of the drives - the two that didn't respond? Are they toast or did a wipe and format bring them back?
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.88 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
