Openmediavault crashes after mounting new drive

  • Hi all,


    I'm having the following issue with openmediavault:

    I attached a new 3 TB drive, and within the openmediavault interface I partitioned and formatted it, after which the drive is visible in the 'Filesystems' tab.

    However, as soon as I try to mount the drive it gives me the following error: 'Removing the directory '/' has been aborted, the resource is busy.' and then the whole system reboots.


    Why can't I attach another drive? It worked with the other four drives (2x ext4 and 2x NTFS).

    • Official Post

    Maybe a disk error? I would run a SMART test on this disk to verify that it is healthy.
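    For reference, a minimal way to run that from the shell, assuming smartmontools is installed and the new disk shows up as /dev/sdf (a placeholder; check with lsblk first):

        # Identify the new disk first; /dev/sdf below is only a placeholder
        lsblk -d -o NAME,SIZE,MODEL,SERIAL

        # Start a short self-test on the suspect disk
        smartctl -t short /dev/sdf

        # A few minutes later, check overall health and the attributes
        smartctl -H /dev/sdf
        smartctl -a /dev/sdf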

    What system do you use? OMV5 or OMV6?

    X64 or ARM architecture?

    • Official Post

    However, as soon as I try to mount the drive it gives me the following error: 'Removing the directory '/' has been aborted, the resource is busy.' and then the whole system reboots.

    I'm sorry, but this is not true. According to https://github.com/openmediava…ined/module/fstab.inc#L65 this error message is only thrown when you DELETE a file system. If you only add a file system, this can never happen. So the error message and the scenario you describe cannot match. Please describe exactly how we can reproduce this, or follow the correct code path to identify the reason.
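    As an aside, if that 'resource is busy' message ever comes up legitimately, you can see from the shell what is holding the mount point open. A sketch, where the OMV-style mount path is only an example:

        # Show which processes hold the mount point open (path is an example)
        fuser -vm /srv/dev-disk-by-uuid-xxxx

        # Alternative view with lsof
        lsof +f -- /srv/dev-disk-by-uuid-xxxx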

  • The navigation is simple: create a partition and format the drive in the Filesystems tab, then try to mount the drive, and the error occurs. I tried writing zeros to the disk with the Cockpit console beforehand, and it gave no errors.
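    For completeness, zero-filling a disk from a shell would look roughly like this. This wipes the disk, and /dev/sdf is only a placeholder for the new drive:

        # WARNING: destroys everything on the target disk; /dev/sdf is a placeholder
        dd if=/dev/zero of=/dev/sdf bs=1M status=progress
        sync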


    My system is as follows:

    - boot openmediavault from a USB stick.

    - attached are 4 drives (2x NTFS and 2x ext4) through an extra RAID controller in my machine (no RAID setup).

    - Virtual machine with Windows 10.

    - Shared several folders with Samba from the mounted drives.


    No errors so far, everything was working fine.


    Then I connected all four drives to my internal SATA controller, and so far so good, again no errors/crashes.

    However, when I start connecting extra drives to the extra RAID controller again and partition and format them, the above-mentioned error occurs for each drive as soon as I try to mount it.
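    One way to double-check which controller and port each disk actually ended up on after all the re-cabling (a sketch; the available columns depend on your lsblk version):

        # HCTL is the SCSI host:channel:target:lun, i.e. which controller/port
        lsblk -d -o NAME,HCTL,TRAN,SIZE,MODEL,SERIAL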

    • Official Post

    1. - attached are 4 drives (2x NTFS and 2x ext4) through an extra RAID controller in my machine (no RAID setup).

    2. - Then I connected all four drives to my internal SATA controller, and so far so good, again no errors/crashes.

    3. - However, when I start connecting extra drives to the extra RAID controller again and partition and format them, the above-mentioned error occurs for each drive as soon as I try to mount it.

    First, where are you "partitioning" and formatting the 5th disk? Is it in OMV or on a Windows client? (NTFS?)

    Second, why are you reordering disks, removing them from the RAID controller, then connecting them to the motherboard? What is there to be gained by this? Why not leave the disks where they were (on the RAID controller) and add your 5th disk to a motherboard SATA port?
    ____________________________________________________________________


    When a RAID controller is added, physical SATA port numbers may be reordered. How that's handled depends on the motherboard's brand, age and BIOS type. As an example, purely speculating, you may have a SATA2 motherboard and a SAS/SATA3 RAID card; I'm not sure how a motherboard's BIOS would handle that. Also note that your RAID card may have its own BIOS, which may affect how the motherboard handles drive ports. The bottom line is that mixing and re-matching SATA ports to drive connections will not help matters. It might be productive to re-establish the original connections, to the RAID card, and work from there.
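    To take device-name reordering out of the equation, you can look at the kernel's stable symlinks instead of the /dev/sdX names. For instance:

        # Stable identifiers that survive controller/port reordering
        ls -l /dev/disk/by-id/
        ls -l /dev/disk/by-uuid/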


    There's also the possibility that something is wrong with one of the hard drives. As chente has already suggested, it would be best to test ALL drives with at least a short SMART test and take note of the SMART stats afterward. It only takes one faulty drive to cause bizarre effects.
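    If shell access is easier than clicking through the UI for each disk, a rough way to kick off short tests on every drive at once (assuming they all appear as /dev/sda, /dev/sdb, ...; adjust the glob to your setup):

        # Start a short self-test on each drive; drives behind some RAID cards
        # may need smartctl's -d option instead
        for d in /dev/sd?; do smartctl -t short "$d"; done

        # A few minutes later, review the overall health of each drive
        for d in /dev/sd?; do echo "=== $d ==="; smartctl -H "$d"; done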


    Finally, it's worth noting that this behavior is most likely due to the way Debian is interacting with the motherboard's BIOS and/or the BIOS of the RAID card. OMV is a top-level NAS application; it has nothing to do with how Debian (the OS) interacts with the physical hardware.

  • - SATA3 mainboard and SATA2 RAID card

    - Nothing wrong with the hard drives: I tried to mount the 6th drive and boom, the system reboots again with the following error (see picture).

    The weird thing is that I only want to mount the drive, not remove anything.

    If this were a Debian or RAID card error, configuring the drive(s) manually would also fail, which is not the case.
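    For what it's worth, "configuring the drive manually" to rule OMV in or out would amount to something like the following. This wipes the disk; /dev/sdf and the mount point are placeholders:

        # WARNING: wipes /dev/sdf (placeholder for the problem drive)
        parted --script /dev/sdf mklabel gpt mkpart primary ext4 0% 100%
        mkfs.ext4 -L data6 /dev/sdf1

        # Mount outside of OMV to see whether the kernel itself misbehaves
        mkdir -p /mnt/test
        mount /dev/sdf1 /mnt/test
        dmesg | tail    # look for kernel errors right after the mount

    If that sequence completes without a reboot, the crash is more likely in how the mount is handled through OMV than in Debian or the card.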


  • If you have all those drives, this might make some sense to you:


    Before attempting to set up the last drive (the 6th drive or whatever), disable all services that might be accessing the other drives:

    NFS, SMB, FTP.


    If you have any kind of mergerFS, KVM, Docker containers, or whatnot that lives on the drives, stop everything that is NOT on the OS disk and prevent containers from restarting.


    Shut down the server and remove all the drives.

    Plug in only the one that is giving problems and boot the server.


    Try to do what you want to do.
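    On a Debian/OMV box, the service shutdown above might look roughly like this. Unit names vary by install; proftpd and Docker here are assumptions:

        # Stop file-sharing services (unit names may differ on your system)
        systemctl stop smbd nmbd nfs-server proftpd

        # Stop running containers and keep them from coming back on boot
        docker update --restart=no $(docker ps -aq)
        docker stop $(docker ps -q)

        # Then power off, pull the other drives, and test with just the problem disk
        poweroff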
