My NAS move from FreeNAS Muad Dib to OMV on a new system.

    • I did search a bit through the forum and tried some commands. Unfortunately I cannot access SSH, so I can't copy the full output here.

      Tried
      cat /proc/mdstat
      blkid
      fdisk -l | grep "Disk "

      mdstat shows all 3 RAID disks in md0 and active, so that looks OK.
      blkid shows sda, sdb and sde (the md0 ext4 RAID5) and the two partitions sdc1 and sdd1 - those two are my UFS drives.
      sdc1 and sdd1 have exactly the same entries and numbers for LABEL, UUID, TYPE and PARTUUID, but apart from that nothing obviously wrong jumps out at me...
      fdisk, though, showed something:
      on sdb and sdc it says the backup GPT table is corrupt, but the primary seems OK, so that will be used.

      Does this point to an obvious issue for anybody?
      I am a bit lost...
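
      Since copy and paste over SSH isn't available, a rough sketch of how that output could be collected into a file on a USB stick instead (the device name /dev/sdf1 and the mountpoint are assumptions, adjust to the actual system):

      # rough sketch - assumes the USB stick appears as /dev/sdf1
      mkdir -p /mnt/usb
      mount /dev/sdf1 /mnt/usb
      # collect the diagnostic output into a single file for posting
      { cat /proc/mdstat; blkid; fdisk -l | grep "Disk "; } > /mnt/usb/diag.txt 2>&1
      umount /mnt/usb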
    • PepeOMV wrote:

      But as I want to hotplug this drive (I do not want to leave it connected all the time for good reason but store it away in another place), how can it be unplugged?
      The drive used with the usbbackup plugin as a target is never mounted in the Filesystems tab and would never be referenced. Therefore it does not have an entry in fstab and shouldn't cause boot problems. If the USB disk had the same label as a disk permanently mounted in the OMV box, you might have issues.
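
      A quick, hedged sketch of how that could be checked from the command line (nothing here is specific to the usbbackup plugin):

      # show labels and UUIDs of all block devices to spot a duplicate label
      lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT
      # confirm the USB backup target has no fstab entry of its own
      grep dev-disk-by-label /etc/fstab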

      PepeOMV wrote:

      /srv/dev-disk-by-label-StorageRaid5 couldn't be found on /dev/md0 Raid
      This seems to be your real issue.
    • PepeOMV wrote:

      mdstat shows all 3 RAID disks in md0 and active, so that looks OK.
      blkid shows sda, sdb and sde (the md0 ext4 RAID5) and the two partitions sdc1 and sdd1 - those two are my UFS drives.
      sdc1 and sdd1 have exactly the same entries and numbers for LABEL, UUID, TYPE and PARTUUID, but apart from that nothing obviously wrong jumps out at me...
      fdisk, though, showed something:
      on sdb and sdc it says the backup GPT table is corrupt, but the primary seems OK, so that will be used.

      Does this point to an obvious issue for anybody?
      Summarized output is not really helpful. And yes, I realize you don't have cut and paste. If you created the array in OMV, the GPT table being corrupt wouldn't matter, since OMV-created RAID arrays don't use partitions.

      If I were you, I would have manually mounted the UFS drives (since UFS support isn't all that good in Linux) and rsync'd them (from the command line) to your storage, roughly as sketched below. Then I would have wiped the UFS drives and put a good filesystem on them.
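
      A minimal sketch of that approach, assuming FreeBSD UFS2 on sdc1 and the RAID mounted at /srv/dev-disk-by-label-StorageRaid5 (both assumptions, adjust to the actual layout):

      # Linux can usually mount FreeBSD UFS2 read-only via the ufs module
      mkdir -p /mnt/ufs
      mount -t ufs -o ro,ufstype=ufs2 /dev/sdc1 /mnt/ufs
      # copy everything over, preserving attributes and showing progress
      rsync -avh --progress /mnt/ufs/ /srv/dev-disk-by-label-StorageRaid5/ufs-copy/
      umount /mnt/ufs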
    • So
      /srv/dev-disk-by-label-StorageRaid5 couldn't be found on /dev/md0 Raid

      is preventing the system from booting to the web GUI and restricting SSH access too? Shouldn't the OMV system itself be unaffected so it can be debugged?

      Anyway, it has me stuck now. I don't know how to progress other than starting from scratch...
      I've made my backup, so that should be OK, but I'd be interested in what this is before moving on...
    • PepeOMV wrote:

      /srv/dev-disk-by-label-StorageRaid5 couldn't be found on /dev/md0 Raid
      For some reason, the mount point doesn't exist. Boot a rescue CD and make the mountpoint (see the sketch further down in this post).

      PepeOMV wrote:

      is preventing the system from booting to the web GUI and restricting SSH access too?
      If the system doesn't boot completely, the web interface and ssh will never start.

      PepeOMV wrote:

      Shouldn't the OMV system itself be unaffected so it can be debugged?
      This has nothing to do with OMV. This is a Linux situation. This is why single user mode and rescue disks exist. FreeBSD would behave the same way if something prevented it from booting.

      Filesystems are mounted with nofail in the fstab, which should let the system keep booting even if a filesystem is unavailable (unless it is root). But you have a RAID array that has to be found, otherwise the system gets angry. Are the RAID drives USB by any chance?
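
      A rough sketch of the rescue-shell steps, plus an illustrative nofail fstab line (the real entry is generated by OMV and carries more options):

      # from a rescue shell or the grub recovery entry: recreate the missing mountpoint
      mkdir -p /srv/dev-disk-by-label-StorageRaid5
      # then retry everything listed in fstab
      mount -a
      # illustrative fstab entry only - OMV writes its own with additional options:
      # /dev/disk/by-label/StorageRaid5  /srv/dev-disk-by-label-StorageRaid5  ext4  defaults,nofail  0  2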
    • ...on the root CLI I was able to mount the md0 array.

      I used
      mkdir /mnt/md0 to create the mountpoint and then mounted /dev/md0 to it, which worked OK, and on the CLI I can now see my RAID data.

      So far so good.

      But how would I convince the OMV system to do this during booting?
      Can't I repair it from the root CLI, so I don't need a SystemRescue CD or USB stick?
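
      A sketch of how the repair could be attempted directly from the root CLI, without a rescue stick; the label StorageRaid5 is taken from the earlier error message, so adjust it if yours differs:

      # check which label the array's filesystem actually carries
      blkid /dev/md0
      # compare it with what fstab expects
      grep StorageRaid5 /etc/fstab
      # if the label matches and only the mountpoint is missing, recreating it
      # and remounting is usually enough for the next boot to proceed normally
      mkdir -p /srv/dev-disk-by-label-StorageRaid5
      mount -a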
    • PepeOMV wrote:

      But how would I convince the OMV system to do this during booting?
      Can't I repair it from the root CLI, so I don't need a SystemRescue CD or USB stick?
      Doesn't your grub menu have a rescue entry?
    • grub contains 4 entries:
      OMV4.16
      OMV4.16 recovery mode
      OMV4.9
      OMV4.9 recovery mode

      In the meantime fdisk -l /dev/sdc yielded:

      GPT PMBR size mismatch (size differs by one) will be corrected by w(rite).
      The backup GPT is corrupt, but the primary appears OK, so that will be used.


      Is this defective MBR the source of the error?

      It is otherwise correctly identified as a FreeBSD UFS.
      The other disks (a, b, d, e and f) appear OK.


      Basically I was ready to put sdc / sdd into the md0 array; I was able to mount them and back them up.
      Could I "wipe" sdc?
      I read about wipefs but don't fully understand it. When the MBR is wiped, how can the disk then be accessed? Pretty much a rookie here too...
      Or format sdc?
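
      A hedged sketch of the two options being discussed here; both are destructive to sdc, so only after the backup has been verified:

      # Option 1: repair the GPT instead of wiping it - sgdisk -e moves the backup
      # GPT structures to the real end of the disk, which fixes the size mismatch
      sgdisk -e /dev/sdc

      # Option 2: wipe the old signatures so the disk can be reused.
      # wipefs -a only removes filesystem / partition-table signatures, not the data area;
      # afterwards the disk looks blank to the kernel and can get a new filesystem
      # or be added to the RAID.
      wipefs -a /dev/sdc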
    • Soo...

      Finally, I was again a bit frustrated by my lack of Linux knowledge and decided that, from a time perspective, it would be best to clean the system and rebuild.

      I am now back on OMV 3 and have extended the RAID, which again needed to be done on the CLI, as this doesn't work in the OMV web GUI for whatever reason.

      The Ubuntu community was really helpful here.

      A problem is that, after wiping the UFS disks one after the other and rebooting on the new system, mdadm showed /dev/md127 instead of /dev/md0 as before... apparently a common issue with mdadm (the usual fix is sketched at the end of this post).
      I wasn't able to rectify this yet and it is still md127, but I can access it, and the RAID 5 is now online after 9 hours of extension, checks and rebuild, with 3.5 TB (I was expecting something closer to 4).

      A good thing is that, all in all, the data was never lost through any user errors, failures or oddities...
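
      The commonly suggested fix for the md127 naming, sketched under the assumption that the array is otherwise healthy (back up /etc/mdadm/mdadm.conf first):

      # record the array in mdadm.conf
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      # edit the new line so it names /dev/md0 (and remove any stale duplicates),
      # then rebuild the initramfs so the name is picked up at boot
      update-initramfs -u
      reboot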