Posts by prashkd

    THANK YOU. I was just struggling with the same issue. Resetting the UI fixed the problem.

    Hi all,


    I am in the process of setting up my OMV5 NAS with 4x HDDs in a RAID 10 config as "datastorage" and a 250GB SSD as "activestorage". The plan is to keep all important, large and less frequently accessed files on the HDDs, and all frequently accessed files, such as the databases and application config files used by services (Plex etc.), on the SSD.

    Just wanted to check with the more experienced people in this forum whether this is a good strategy and if there's any room for improvement. My hope is to give the HDDs a chance to spin down as much as possible to extend their life. I know similar questions have been asked in the forum a few times, but I am just fishing for any ideas out there that may not have been discussed yet.
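
    For reference, the spin-down behaviour itself can be tested from the command line with hdparm (a rough sketch only — /dev/sda stands in for one of the data HDDs, and OMV also exposes this under Storage > Disks):

        # Spin the drive down after 30 minutes of inactivity
        # (-S 241 means 1 x 30 min; 242 would be 2 x 30 min, and so on)
        hdparm -S 241 /dev/sda

        # Check the drive's current power state without waking it
        hdparm -C /dev/sda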


    Regards,

    I am trying to set up a firewall on my OMV5 box using ufw, only because I have used it a few times in the past with no issues. I have set the default incoming/outgoing policies to "deny" and allowed only port 80 for the OMV web UI and 443 for SSH. However, for some reason I am still able to access some Docker containers (e.g. Portainer on port 9000). Can someone explain what I am missing here? In the past, when I set up ufw to allow one port and enabled it, all other connections immediately dropped out.
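
    Edit, in case it helps anyone else: from what I've since read, ufw only filters the INPUT chain, while Docker publishes container ports through its own iptables rules in the NAT/FORWARD path, so published ports bypass ufw entirely. A rough sketch of what I mean (the Portainer image name is just an example, and the docker run flags are abbreviated):

        # ufw rules like these only affect traffic addressed to the host itself
        ufw default deny incoming
        ufw allow 80/tcp     # OMV web UI
        ufw allow 443/tcp    # SSH
        ufw enable

        # Docker's published ports live in its own chain and skip ufw
        iptables -t nat -L DOCKER -n    # shows e.g. the DNAT rule for port 9000

        # One workaround: publish container ports on localhost only
        docker run -d -p 127.0.0.1:9000:9000 portainer/portainer-ce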

    As far as I've read, there is a discussion about using btrfs as the underlying filesystem for the OMV install. I'm certain OMV will still support all regular Linux filesystems for your data disks, so from that perspective there is no pressure to use btrfs.


    Set up the filesystem with the OMV GUI and then configure the snapshots/backups via the command line. But I strongly suggest familiarizing yourself with the filesystem before completely relying on it. That involves reading the documentation and researching the tools (e.g. btrbk or snapper for automated snapshots), so in case of a problem you know what to do.
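
    By way of example, a manual read-only snapshot looks roughly like this (a sketch only — the mount point and the .snapshots directory are placeholders):

        # Keep data in a subvolume so snapshots can live alongside it
        btrfs subvolume create /srv/dev-disk-by-label-datastorage/data

        # Take a read-only snapshot of the subvolume
        mkdir -p /srv/dev-disk-by-label-datastorage/.snapshots
        btrfs subvolume snapshot -r \
            /srv/dev-disk-by-label-datastorage/data \
            /srv/dev-disk-by-label-datastorage/.snapshots/data-$(date +%Y%m%d)

        # List the subvolumes/snapshots on the filesystem
        btrfs subvolume list /srv/dev-disk-by-label-datastorage

    Tools like btrbk and snapper automate exactly this create/rotate cycle.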

    Thanks Morlan. Sure, I am doing a deep dive into the btrfs filesystem to fully understand it, and I feel like I should be able to set it up the way I want. I just wasn't sure how OMV would react to these manual changes. I think I could completely set up and manage a Debian-based home NAS using the command line, but for ease of use and to save time I'd prefer to use OMV for as much of the configuration as I can.

    I managed to get my hands on an old Dell OptiPlex 9010 with an i5 processor and 16GB RAM (non-ECC). With this hardware I can build a cheap NAS with better performance than commercial ones for less than half the price. Regarding the RPi, I've actually tested OMV5 on an RPi4 with 8GB RAM, used it to run some database servers, and found its performance degraded within a week of continuous operation. Old computer hardware, on the other hand, has been behaving much better with all those applications plus Plex media, which would have added a lot of load. Cost-wise, old hardware still beats the RPi4, as accessories for computers are cheaper than for the RPi.

    Hi,

    I am building a NAS using an old Dell OptiPlex 9010 (i5 with 16GB RAM), a StarTech 4-port SATA III controller card and 4x 4TB WD Red HDDs. I've managed to install OMV5 on a USB stick, which is working just fine. After some research I've decided to go with RAID 10, and now I am trying to decide on the best way to implement the filesystem. I read somewhere that OMV6 will only be supporting btrfs, which is prompting me to go with this filesystem to future-proof my NAS, as I will most definitely upgrade to OMV6 as soon as it's released and stable. However, I am not sure what's the best way to implement btrfs in OMV5.


    The btrfs snapshot feature is something I am particularly interested in, but the OMV UI does not have any option to set it up yet. So I was wondering: is it safe to use the command line to set up the filesystem, and will OMV5 accept it, or do I need to stick with OMV's UI for configuring filesystems?
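
    For the record, the command-line route I have in mind is only a couple of steps. A sketch, assuming the mdadm RAID 10 array comes up as /dev/md0 (the device names are placeholders):

        # Option A: btrfs on top of the mdadm RAID 10 array
        mkfs.btrfs -L datastorage /dev/md0

        # Option B: skip mdadm and let btrfs handle RAID 10 natively
        # across the four disks
        mkfs.btrfs -L datastorage -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    Either way, as far as I understand, the new filesystem should then appear under File Systems in the UI for mounting.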

    Bypassing the above error message: you created a Mirror under RAID Management, so you should then go to File Systems to format that Mirror.

    I think I didn't provide sufficient information in my last post on this issue, but there was a "Device" under File Systems with "Missing" status that was causing the previous error. I found a similar issue on this OMV forum and managed to get rid of the error by manually editing /etc/openmediavault/config.xml and removing that device. So I was able to create a new device, which is now "Online" (which is good?)


    Now when I click on "Mount", I get a pop-up window with the error "Failed to execute XPath query '//system/fstab'."


    Error #0:
    OMV\Config\DatabaseException: Failed to execute XPath query '//system/fstab'. in /usr/share/php/openmediavault/config/database.inc:262
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/fstab.inc(123): OMV\Config\Database->set(Object(OMV\Config\ConfigObject))
    #1 [internal function]: Engined\Rpc\FsTab->set(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('set', Array, Array)
    #4 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(897): OMV\Rpc\Rpc::call('FsTab', 'set', Array, Array)
    #5 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->mount(Array, Array)
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #7 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('mount', Array, Array)
    #8 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'mount', Array, Array, 1)
    #9 {main}


    Did I break OMV5 by manually editing config.xml? Everything else seems to be working fine though (Docker instances etc.).
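
    A sanity check worth running after a hand edit is sketched below (xmlstarlet needs installing separately, and as far as I know omv-confdbadm ships with OMV5 — the datamodel id is my best guess):

        # Confirm the file is still well-formed XML
        apt install xmlstarlet
        xmlstarlet val /etc/openmediavault/config.xml

        # Read a section back through OMV's own config layer
        omv-confdbadm read conf.system.filesystem.mountpoint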

    Spoke too soon! Although I was able to add the HDDs in a RAID 0 config, when I went to File Systems to create a new filesystem, it was created, but when I click on "Mount" I get the following error message:


    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color fstab 2>&1' with exit code '1': debian:
    ----------
    ID: create_filesystem_mountpoint_1a70f282-ad92-45c0-a74b-9c67a8858578
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_1a70f282-ad92-45c0-a74b-9c67a8858578 for file /etc/fstab was charged by text
    Started: 10:49:29.251064
    Duration: 0.667 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_1a70f282-ad92-45c0-a74b-9c67a8858578
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-Data
    Result: True
    Comment:
    Started: 10:49:29.252190
    Duration: 354.949 ms
    Changes:
    ----------
    ID: create_filesystem_mountpoint_a624274a-3f40-4783-a262-1fee6d15e174
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_a624274a-3f40-4783-a262-1fee6d15e174 for file /etc/fstab was charged by text
    Started: 10:49:29.607538
    Duration: 1.98 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_a624274a-3f40-4783-a262-1fee6d15e174
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-sqldatabase
    Result: False
    Comment: mount: /srv/dev-disk-by-label-sqldatabase: mount(2) system call failed: Structure needs cleaning.
    Started: 10:49:29.609711
    Duration: 55.932 ms
    Changes:
    ----------
    ID: append_fstab_entries
    Function: file.blockreplace
    Name: /etc/fstab
    Result: True
    Comment: No changes needed to be made
    Started: 10:49:29.667477
    Duration: 6.724 ms
    Changes:

    Summary for debian
    ------------
    Succeeded: 4
    Failed: 1
    ------------
    Total states run: 5
    Total run time: 420.252 ms in /usr/share/php/openmediavault/system/process.inc:182
    Stack trace:
    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/rpc/config.inc(167): OMV\Engine\Module\ServiceAbstract->deploy()
    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusrc...', '/tmp/bgoutputD5...')
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #7 /usr/share/openmediavault/engined/rpc/config.inc(189): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #12 {main}
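
    Note: "Structure needs cleaning" is the kernel's generic way of reporting on-disk filesystem corruption (EUCLEAN), so the sqldatabase filesystem itself looks damaged rather than the fstab config. A repair attempt would look roughly like this — assuming ext4, with the device name a placeholder for whatever blkid reports:

        # Find which device backs the sqldatabase label
        blkid | grep sqldatabase

        # Check/repair it while it is unmounted (ext4 assumed; adjust device)
        e2fsck -f /dev/md1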

    Alright, after spending a few hours with StarTech tech support I managed to get it working. Just for the record, it was the stupid BIOS screen from Marvell. I had to add both HDDs as "virtual disks" on the RAID card, but the instructions around that weren't clear. At one point in the process I just had to press the space key instead of Enter, and I was able to mount the HDDs as virtual disks on the RAID card. Later I logged into OMV5 and configured them as a "Mirror" and it worked. Phew!


    Spent the entire day debugging this, and after reading the post from geaves I was afraid I would have to invest more $$ in a new RAID card, which would have defeated my plan of building a budget home NAS.


    Now the next step is to experiment with some "system failure scenarios" to understand how the RAID handles drive failure and how such errors can be recovered from. Thanks geaves for your response.
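
    In case it helps anyone running the same experiment, the usual way to simulate a drive failure on an mdadm array is sketched below (the array and member names are placeholders):

        # Mark one member as failed and watch the array degrade
        mdadm --manage /dev/md0 --fail /dev/sdb
        cat /proc/mdstat

        # Remove the "failed" disk, then add it back to trigger a rebuild
        mdadm --manage /dev/md0 --remove /dev/sdb
        mdadm --manage /dev/md0 --add /dev/sdb

        # Follow the resync progress
        watch cat /proc/mdstat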

    The RAID card is a StarTech 4-port PCIe SATA III controller with a Marvell 88SE9230 chipset. Not sure if this is relevant, but there are some bug reports online about the compatibility of this controller with the Linux kernel, and I had to install the card's own BIOS/firmware that came with it. So now when I boot up the computer, first the Dell BIOS comes up, and if I press Ctrl+M it opens another BIOS for the RAID card, which shows the two HDDs attached to it. In my understanding this proves that the card and HDDs are recognised by the Dell firmware.



    The machine I am using is a Dell OptiPlex 9010 (2013 model) with 16GB RAM and an Intel i5 processor. Not sure if this is relevant here, but going through its user manual I noticed that Dell recommends a maximum of 1TB of storage with this model. I assume this limitation applies to the onboard RAID controller, and with an external PCIe RAID card I should be able to expand this to 16TB, which is the recommended limit for this RAID controller card.

    Hello,


    I've just started experimenting with OMV5 and am planning to build my own NAS. I am using an old Dell machine with 1x 240GB WD SSD connected to the onboard SATA port, running the OMV5 OS. I then installed a PCIe RAID controller card and hooked up 2x old SATA HDDs (1TB and 500GB) to it, to see if I can configure them as a RAID Mirror. My problem is that I can see these HDDs under "Disks", but when I go to RAID Management and try to create a "Mirror" configuration by adding both drives, I get a "Device or resource busy" error. I read in an article that a secure wipe usually gets rid of this issue, but unfortunately that didn't work for me. Any suggestions on troubleshooting this issue will be appreciated.
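
    For reference, the generic checks for a "Device or resource busy" error are sketched below (device names are placeholders, and the wipe commands are destructive):

        # See whether something already claims the disks
        lsblk
        cat /proc/mdstat

        # Clear leftover RAID metadata and filesystem signatures
        mdadm --zero-superblock /dev/sdb
        mdadm --zero-superblock /dev/sdc
        wipefs -a /dev/sdb /dev/sdc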



    Error message when configuring RAID "Mirror" from the OMV5 GUI:

    error.txt


    Other debug outputs:

    blkid:
    blkid.txt

    fdisk:
    fdisk.txt

    mdadm detail:
    mdadm detail.txt

    mdadm:
    mdadm.txt

    mdstat:
    mdstat.txt