Trying to add existing data drives to fresh install of OMV5 with Shared Folder path of "/", getting error

  • Getting an XFS error from a RAID array that's being presented as a single virtual drive over a USB link is not surprising. USB bridges, by themselves, can filter the ATA commands sent to the drives.

    Since you're using the functional equivalent of a hardware RAID controller housed in the box, the lack of SMART data is not surprising either. I had an old Adaptec RAID controller that did the same thing. (For this reason, I replaced the controller.)

    Here's the bottom line:
    It's unlikely that OMV is responsible for the XFS error. You can run RAID in this manner if you like, but since you have an RPi with USB-connected drives, I'd consider setting up MergerFS+SNAPRAID instead of hardware RAID. MergerFS+SNAPRAID gives you RAID-like drive aggregation and protection from bit-rot; it's a form of backup, and it allows you to replace a drive that goes bad. Since you'd be running the box in JBOD mode, you "might" see SMART data from your drives. However, there's a learning curve involved in the setup and maintenance.
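    For reference, the moving parts are small: SnapRAID wants a config file naming the parity and data drives, and MergerFS pools the data drives with one fstab line. A minimal sketch (the drive labels and paths below are made-up examples; in OMV this is normally set up through the snapraid and unionfilesystems plugins rather than by hand):

    ```text
    # /etc/snapraid.conf -- minimal sketch (paths are examples)
    parity /srv/dev-disk-by-label-parity/snapraid.parity
    content /srv/dev-disk-by-label-disk1/snapraid.content
    data d1 /srv/dev-disk-by-label-disk1
    data d2 /srv/dev-disk-by-label-disk2

    # /etc/fstab -- one MergerFS line pools the data drives into a single mount
    /srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2  /srv/pool  fuse.mergerfs  defaults,allow_other  0 0
    ```

    `snapraid sync` then updates parity and `snapraid scrub` checks for bit-rot, typically run on a schedule.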

    If it's working the way you want it to, there's no need to change it. On the other hand, personally, I'd find it difficult to trust the box to interpret the condition of the hard drives, especially as they get older.

  • These MediaSonic PRORAID enclosures are purpose-built for running RAID arrays. Serious question: what makes a software RAID built and maintained by a single guy in his spare time more trustworthy than a professionally built hardware RAID? SnapRAID looks cool, but it seems like a lot more trouble than simply using the box as intended (assuming I can get it working as intended), plus it puts extra load on the RPi.

    Please don't think I'm here to accuse OMV of being at fault for anything. My assumption has been that I've failed to properly account for something. I have two of these enclosures, a 4-bay and an 8-bay. The smaller one is already set up in an identical manner with no issues, and I'm currently sharing it on our network as a file history target. Since one of these devices is working properly in OMV, I know that what I'm trying to do is possible. And since the one that isn't working in OMV works in Windows, with no indication of a failure of any kind, I think it's unlikely the device or the drives are faulty.

    But there must be something different about the two. The working device is 2.73TB while the one I'm having trouble with is 27.3TB, so it crossed my mind that the size of the volume could be the issue. XFS itself supports volumes much larger than this, but I don't think that necessarily means the software or hardware does. Perhaps the RPi can't handle a volume this size? Another possibility is that there's some difference in firmware between the two that Windows supports but OMV doesn't, although that doesn't seem likely. The error I'm getting about the backup GPT headers being invalid and needing to be regenerated makes me think something from Windows is hanging around and getting in the way. I'm not sure why I can't just blow away all the headers and create new ones, though.

    omv 5.5.23-1 (usul) on RPi4 with Kernel 5.4.x

  • what makes a software RAID built and maintained by a single guy in his spare time more trustworthy than a professionally-built hardware RAID?

    Software RAID is standard; it's built into the Linux kernel (mdadm and BTRFS are in-kernel; ZFS is not in the kernel by default, but its modules can be added in a repeatable manner, and the Proxmox kernel has ZFS built in). That means the resulting RAID array is "portable": if the host fails, the drive set can be moved to another host with the data intact.
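    To make "portable" concrete, here's a sketch of what recovery looks like with mdadm after moving the drives to a new host (device names are examples; run it against your own disks):

    ```shell
    # Sketch: reassemble a software (mdadm) array on a replacement host.
    # The array metadata lives in a superblock on each member disk,
    # which is why no particular controller is needed.
    reassemble_moved_array() {
        mdadm --examine "$1"        # show the RAID superblock stored on the drive itself
        mdadm --assemble --scan     # reassemble any arrays found on attached disks
        cat /proc/mdstat            # confirm the array is back online
    }
    # e.g. reassemble_moved_array /dev/sdb
    ```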

    On the other hand, hardware RAID permanently ties the disks used to create an array to the controller. If the controller fails, it has to be replaced by the exact same controller or, in some cases, one from a specific family of controllers. Often, after a few years, the same controller or family can no longer be found in retail channels; if it can be found at all, it might have to come from the used market. In this context, Mediasonic may change to a different controller within the same model series; they don't disclose detailed technical info.

    In your case, if you really want to run hardware RAID, the enclosure is the only reasonably safe way to do it, because other forms of RAID do not work reliably over USB connections.

    In any case, the risk of data loss is lowered substantially with 100% backup. (RAID is NOT backup.)


    As for the rest, I don't have any answers. Anything I say would be speculation at best. If you're really interested in the differences between the enclosures, the Mediasonic forum might have something.

    I do believe those boxes have a JBOD mode. (It's probably a hardware switch.) With access to each individual disk, Linux should be able to format them. However, as noted, setting up a software RAID array over USB is not recommended; OMV's GUI won't do it by default.

  • I hadn't thought about what would happen if the controller failed and I couldn't get another one. I'd just assumed MediaSonic would take care of it, one way or another (though my backup strategy does account for such an eventuality). Still, that doesn't really speak to the question of reliability.

    I configured one of our desktops to dual-boot Ubuntu so I could test further. Using the "Disks" utility, I was able to create and mount an EXT4 partition without any errors. I unmounted it, plugged it back into the RPi, and tried to mount it in OMV, but got this error:


  • That error looks like the disk (/srv/dev-disk-by-uuid-98a2a013-63a8-491b-b1c0-12b07129b53a) wasn't unmounted in OMV.

    Function: mount.mounted
    Name: /srv/dev-disk-by-uuid-98a2a013-63a8-491b-b1c0-12b07129b53a
    Result: True
    Comment: Target was already mounted
    Started: 13:05:47.908547
    Duration: 122.687 ms


    Then you say it was (re)formatted in Ubuntu, and an attempt was made to remount the disk (already known and still mounted, but now with a new and unknown filesystem) in OMV. This is not an import of an unknown disk. Not surprisingly, an error resulted.

    When removing a disk, there's a process that should be followed, to avoid confusing OMV.
    1. Remove all references to the hard disk. (Shared folders, SAMBA, etc.)
    2. Unmount the disk.
    3. Delete the Filesystem.
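
    A quick CLI check can confirm whether OMV has actually released the old mount before you reformat elsewhere. A small sketch (`findmnt` is part of util-linux; the path is the one from the error above):

    ```shell
    # Return success if the given mount point is currently active.
    is_mounted() {
        findmnt -n "$1" >/dev/null 2>&1
    }

    MNT=/srv/dev-disk-by-uuid-98a2a013-63a8-491b-b1c0-12b07129b53a
    if is_mounted "$MNT"; then
        echo "still mounted -- unmount it in the OMV GUI before reformatting"
    fi
    ```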

  • I'm aware of the process, but there was no file system to unmount or delete. Remember, my whole trouble is that I can't create one in OMV. Perhaps some configuration data was written somewhere while attempting to create the file system, and then not removed when it failed? Is there a way to check for and remove any orphaned references to disks?


  • Perhaps some configuration data was written somewhere while attempting to create the file system and then not removed when it failed?

    Perhaps. I don't know. This is getting into remote forensics with an impossible number of unknowns.

    - I don't know which of the 2 enclosures we're talking about. (With or without the issue.)
    - Anything related to the enclosures is a guess. There's the USB bridge, which may be filtering ATA commands. The lack of SMART pass-through would be a concern to me.
    - I don't know what the Mediasonic virtual RAID volume looks like to OMV. (That's beyond my knowledge.)
    - I don't know if you're trying to format the virtual RAID volume with XFS, EXT4, what has happened before, etc.
    - I don't know if any of these issues are still related to exFAT. (In one enclosure or the other.)

    You could look for old drive entries in /etc/openmediavault/config.xml (OMV's configuration database).

    Look for a drive entry, between <mntent> and </mntent> tags, that refers to a drive that no longer exists.
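
    If there are many entries, a small helper can make a stale one easier to spot. A sketch that only reads the file (still, back up config.xml before editing anything):

    ```shell
    # List the <dir> element of every mount entry in an OMV config.xml.
    # Compare the output against the drives that actually exist.
    list_mntent_dirs() {
        grep -o '<dir>[^<]*</dir>' "$1" | sed -e 's|<dir>||' -e 's|</dir>||'
    }
    # e.g. list_mntent_dirs /etc/openmediavault/config.xml
    ```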

    An example of a drive entry:
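    Something like the following, where the UUIDs and mount path are placeholders and the exact fields can vary between OMV versions:

    ```xml
    <mntent>
      <uuid>c5f34dc5-e2f9-4a7d-8c1e-000000000000</uuid>
      <fsname>/dev/disk/by-uuid/98a2a013-63a8-491b-b1c0-12b07129b53a</fsname>
      <dir>/srv/dev-disk-by-uuid-98a2a013-63a8-491b-b1c0-12b07129b53a</dir>
      <type>ext4</type>
      <opts>defaults,nofail,user_xattr,acl</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
    </mntent>
    ```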

    **Edit** To delete an entry, remove everything between <mntent> and </mntent>, including the <mntent> and </mntent> tags themselves.
    Save the file.

    Finally, on the CLI, enter:

    omv-salt deploy run fstab


    We've already hashed through some of the options you have with the enclosures, like JBOD mode and SNAPRAID+MergerFS, or simply running the RAID volume you have (with the noted permission caveats).

    Your best bet, with a guaranteed result, might be to rebuild. With the backup you have, this is the cleanest option. If you do, read this advice. Once you have a working NAS configured, consider backing up your working SD card so you can back out of issues in the future.

  • I checked, but the only drive entry there was for the working drive.

    I was beginning to suspect that the Raspberry Pi itself was somehow causing the problem, so I installed DietPi to see if the problem persisted. However, I had no trouble formatting and mounting the drive on the Raspberry Pi under DietPi, and I have successfully created and tested Samba shares for both drives. So it seems it must be some kind of bug in OMV after all. It's beyond my ability to troubleshoot any further, though. I'm just going to run with DietPi since it's working fine. I can even run a lightweight Node.js server this way too, so that's nice.

    Thanks for your help.

