Posts by GiuseppeChillemi

    Subzero, people tend to behave more carefully when a BIG RED box with a WARNING sits right at the spot where they are putting themselves in danger, rather than having read about it in a text somewhere else.
    Think about street signs: is a WRONG WAY sign more effective and life-saving right before you enter the street, or having read about a "wrong way" on a map together with hundreds of other street details?

    I have a dual-array setup. I added 1+1 drives and recovered from a degraded state. Then I added 2 drives to MD127 and one to MD126 (33 TB and 17 TB).
    I went to the filesystem page and asked to enlarge both of them, but nothing happens. I rebooted and retried, with no result.
    Is there anything I can do to discover what is happening? The drives are recognized and no longer listed as free, so it seems they have been added, but the resize has not started.
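    A common cause of this symptom is that `mdadm --add` only registers the new drives as spares; the array (and then the filesystem) will not grow until you also raise the active device count. A minimal sketch from the shell, assuming `/dev/md127` and a new total of 9 active devices (adjust both to your setup):

    ```shell
    # Check whether the new drives are sitting there as "spare"
    mdadm --detail /dev/md127

    # Promote the spares into the array by growing the device count
    # (9 is an assumption - use your new total of active devices)
    mdadm --grow /dev/md127 --raid-devices=9

    # Watch the reshape progress; it can take a long time on big arrays
    cat /proc/mdstat

    # Only after the reshape has finished, grow the filesystem:
    resize2fs /dev/md127        # for ext4
    # xfs_growfs /path/to/mount # for XFS, pass the mount point instead
    ```

    The OMV web UI's "resize" can only work once the md device itself has actually grown, which is why clicking it before the reshape does nothing.
    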

    Today I reinstalled OMV. It was a clean install without major problems.
    However, I noticed that no special check is done when you select a partition to install to.
    You could be tired, and selecting your 30 TB RAID 6 partition is just one click away!

    Could you please add multiple BIG warnings to inform the user that he is selecting a big partition? Something like "BIG PARTITION DETECTED, ARE YOU REALLY SURE?"

    Thanks in advance

    I have an OMV setup with 2 RAID 6 arrays and 12+5 drives.
    From time to time drives get kicked out of the arrays and they run degraded. OMV seems sensitive to timeouts: expulsions from the RAID appear to be related to entering the SMART page of the OMV web interface while the drives' SLEEP/AWAKE spin-down is active. The drives' position in the bays seems related too. Also, since the setup has grown, I am no longer able to access the SMART page at all, as I always get TIMEOUT errors.
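    One frequent reason for md kicking out otherwise healthy drives is a timeout mismatch: desktop-class drives can spend minutes on internal error recovery, while the kernel gives up after ~30 seconds and md drops the drive. A sketch of how to check and mitigate this, assuming `/dev/sda` stands in for each array member:

    ```shell
    # Check whether the drive supports SCT Error Recovery Control (ERC)
    smartctl -l scterc /dev/sda

    # If supported, cap read/write recovery at 7 seconds (value is in
    # tenths of a second) so the drive reports the error before md times out
    smartctl -l scterc,70,70 /dev/sda

    # If ERC is NOT supported, raise the kernel's SCSI timeout instead
    # (180 s is a commonly suggested value; this resets on reboot)
    echo 180 > /sys/block/sda/device/timeout
    ```

    Note that SCT ERC settings are usually lost on power-cycle, so they are typically reapplied from a boot script; and drives waking from SLEEP can also exceed the default timeout, which matches the symptom you describe.
    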

    I am considering moving to FreeNAS because I am desperate and might have more luck there, but before that happens I kindly ask for help.

    My SETUP is an HP DL380 G7 with 96 GB RAM, an IBM 1015 flashed to IT MODE, and a SAS HD expander. Everything runs in a virtual machine on an ESXi server, but I have always had this problem, even when the setup was on bare metal.

    Is there a way to add 2 drives at the same time to a DEGRADED RAID 6 and have them replace the failed ones in just one rebuild?
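    As far as I understand md's behavior, if a RAID 6 is degraded by two devices and you add two spares in one go, the recovery runs onto both new members in a single pass rather than two sequential rebuilds. A sketch, assuming `/dev/md127` is the degraded array and `/dev/sdx` / `/dev/sdy` are the replacement drives (names are placeholders, check yours with `lsblk`):

    ```shell
    # Confirm the array is doubly degraded and see which slots are missing
    mdadm --detail /dev/md127

    # Add both replacements in one command; md starts recovering
    # onto both at once
    mdadm /dev/md127 --add /dev/sdx /dev/sdy

    # Monitor the single recovery pass
    cat /proc/mdstat
    ```
    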

    Here is the actual configuration.

    I have just formatted them in OMV 3.x.
    Also, I no longer use SMART monitoring, as with 20+ drives it seems that multiple long errors/timeouts cause MDADM to kick the drives out.
    (It is just a theory and must be confirmed.)


    I have moved the system drives and the IBM 1015 controller to another HP Gx machine, which had an ESXi hypervisor installed. I created a virtual machine, passed the system drives through as RAW and the controller as a passthrough device...
    ... and everything was there, even the spare drive for each of the 2 RAID 6 arrays.
    OpenMediaVault worked on the first run, after an omv-firstaid network configuration.

    Thank god!

    My OMV installation WAS 3.0.7x. It had not been answering remotely for a couple of weeks, and I could not physically access it until this evening.
    It is a 2-part hardware setup: one part is an HP 316M1 with the IBM 1015 controller in it; the other part is a case with 24 3.5" bays, an HP SAS expander, a power supply, and a PCI module to power the HP expander.
    Two RAID groups were on it:
    14x 3TB drives
    6x 4TB drives
    RAID 6, both!

    When I switched the NAS on, a smell of burned electronics rose from the HP 316M1, and it soon switched off.
    From the 24-bay case, a continuous clicking was coming from :( one or two drives.

    Ok, you are right, it is a horror story.

    Now I need to try to rebuild and recovery everything.

    I need some advice from the experts:

    Once I move the IBM card to another PC and connect it back to the 24-bay case, I need to discover what happened to the drives, find out which ones have failed, and restore sanity before trying to reinstall OMV. I need to do this in the safest possible way.

    Which tools do I need? Which shell commands? Which distribution/ISO?
    Any warnings? It is a really huge setup and I want to recover everything, or as much as possible.
    Also, I don't want any tool to start rebuilding the drives automatically; rebuilds should begin only if I manually start the process.
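    Not being one of the experts here, I can still sketch the usual read-only inspection sequence from a live system (e.g. a Debian live ISO or SystemRescue). The commands below only read metadata or assemble read-only, so nothing can start rebuilding on its own; `/dev/sd[b-z]` and `/dev/sdb` are placeholders for your drives:

    ```shell
    # Read each drive's md superblock WITHOUT starting any array;
    # this shows which array each drive belongs to and its event count
    mdadm --examine /dev/sd[b-z]

    # Check each drive's health; look at reallocated and pending sectors,
    # and listen for the clicking ones - do not keep powering those up
    smartctl -a /dev/sdb

    # When you are ready, assemble read-only so no resync/rebuild can start
    mdadm --assemble --scan --readonly

    # Verify what came up and in what state
    cat /proc/mdstat
    ```

    Drives with mismatched event counts or missing superblocks are your likely casualties; with that picture in hand you can decide what to replace before letting anything write.
    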

    Any help is highly appreciated!

    I wish to migrate my OMV installation to an ESXi virtual machine that I will host on the same PC I have used as my OMV 3.x NAS.

    The IBM 1015 controller, to which I have connected 20 drives, will be assigned to the VM via VT-d with the same drive configuration.

    What should I do to move the installation? A clean install? A migration? (How?)
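    Whichever route is chosen, one helpful fact is that mdadm arrays are self-describing: the metadata lives on the drives themselves, so once the HBA is passed through, the guest can reassemble the arrays regardless of whether the OS was cloned or freshly installed. A quick verification sketch from inside the new VM (the `grep` pattern is an assumption based on the 1015 being an LSI chip):

    ```shell
    # Confirm the passed-through HBA is visible to the guest
    lspci | grep -i lsi

    # Reassemble the arrays from the on-disk metadata
    mdadm --assemble --scan

    # Verify both RAID 6 groups are up and clean
    cat /proc/mdstat
    ```
    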