Posts by greybeard

    Just a heads-up in case anyone else has a similar setup using OMV as an NFS server for remote backup and wants to use autoshutdown.


    My primary server is running Koozali (SME/e-smith), based on CentOS 6. I'm using my OMV server as an NFS backup target.
    The Koozali machine uses DAR for backups, configured to send a Wake-on-LAN (WOL) packet to start the OMV server before the backup begins. It then performs the backup over an NFS connection to the OMV server, and afterwards the autoshutdown plugin on the OMV machine shuts it down until the next backup. All good most of the time.
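    For context, the pre-backup step on the Koozali box boils down to something like the sketch below (the MAC address, hostname and paths are placeholders, not my real values):

    Code
    # Wake the OMV server and give it time to boot (MAC is a placeholder)
    wakeonlan 00:11:22:33:44:55
    sleep 120

    # Mount the NFS export before DAR runs (host/paths are placeholders)
    mount -t nfs omvserver:/export/backup /mnt/backup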
    Occasionally the OMV server shuts down before the backup completes. Investigating further, it appears the backup process on the Koozali server has varying periods when it is processing but not accessing the NFS folder on the OMV server. Most of the time the autoshutdown plugin detects the NFS connection, but a closer look shows that after the main backup process it isn't detecting an NFS TCP connection at all; only the occasional disk and/or CPU activity keeps the machine from shutting down.
    So when the Koozali machine doesn't access the NFS share quickly enough, the OMV server shuts down.
    What I've found is that the NFS TCP connection drops after a few minutes of inactivity. It reconnects as soon as it is needed again, but the backup process sometimes takes too long between accesses.
    I don't think anything is actually misbehaving, i.e. there is no problem with the OMV NFS software, the autoshutdown plugin, or the backup process on the Koozali machine. It's just the way it is.
    My interim solution is to use the 'force enable' setting in the autoshutdown plugin to prevent the OMV machine from shutting down before the backup has completed.
    I'm investigating some form of TCP keepalive on the Koozali machine to keep the NFS TCP connection alive whilst the NFS folder is mounted (i.e. for the duration of the backup); a rough sketch is below.
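    Something along these lines is what I'm trying as a stopgap 'poke' script, run in the background on the Koozali machine for the duration of the backup (a sketch only; the mount point is a placeholder):

    Code
    #!/bin/sh
    # Keep the NFS TCP connection busy: touch a marker file every 60s
    # while the share is mounted. Writes always go over the wire, so the
    # connection never idles long enough to be dropped.
    MNT=/mnt/backup   # placeholder for the actual NFS mount point
    while mountpoint -q "$MNT"; do
        touch "$MNT/.nfs-keepalive" 2>/dev/null
        sleep 60
    done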


    From what I have observed, any process that uses NFS will see the TCP connection drop in the same way if it is idle for too long, which could cause a problem whenever the autoshutdown plugin is in use.

    Rule one for working on computer issues.
    If it worked before you made a change and it doesn't work after you made a change, then undo what you have done and see if it works again. Undo all changes made, even the ones you won't admit to making. You can use your detailed change log to see what needs undoing. You did make a detailed change log as you made the changes, didn't you?
    If it does work again after undoing the changes, compare your change plan with what you actually did, and compare that to the manual.


    If the boot still fails after undoing your changes, maybe boot using something like a system recovery CD (or load it onto USB) and restore the relevant boot and GRUB info. BUT, make sure you know what you are doing.
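    Roughly, from the recovery environment that means something like this (device names are assumptions, adjust for your actual disk layout):

    Code
    # From the recovery CD/USB shell: mount the installed system and chroot in
    mount /dev/sda1 /mnt            # root partition (assumption)
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt

    # Reinstall GRUB to the boot disk and regenerate its config
    grub-install /dev/sda
    update-grub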


    As for changing your impression of the OMV software, just make sure there isn't cause for the software to give you a return score!

    Running the unionfs and snapraid plugins.
    System 5.2.6-1 (Usul)
    snapraid 5.05
    unionfs 5.1


    I'm noticing a strange issue after configuration, during normal system operation. File transfer via NFS seems to be OK and no other errors are noticeable.


    When loading any popup in the GUI that selects folders (i.e. NFS share add, SMB share add, etc.), the following exception is thrown and the drop-down selection box fails to load.
    It appears there is an issue when the unionfs drive is passed in.

    Code
    Feb 1 13:06:51 omvserver omv-engined[3782]: PHP Fatal error: Uncaught Error: Call to a member function getImpl() on null in /usr/share/php/openmediavault/system/filesystem/filesystem.inc:898
    Feb 1 13:06:51 omvserver omv-engined[3782]: Stack trace:
    Feb 1 13:06:51 omvserver omv-engined[3782]: #0 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(158): OMV\System\Filesystem\Filesystem::getImplByMountPoint('/srv/531b3bee-b...')
    Feb 1 13:06:51 omvserver omv-engined[3782]: #1 [internal function]: Engined\Rpc\ShareMgmt->enumerateSharedFolders(Array, Array)
    Feb 1 13:06:51 omvserver omv-engined[3782]: #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    Feb 1 13:06:51 omvserver omv-engined[3782]: #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('enumerateShared...', Array, Array)
    Feb 1 13:06:51 omvserver omv-engined[3782]: #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ShareMgmt', 'enumerateShared...', Array, Array, 1)
    Feb 1 13:06:51 omvserver omv-engined[3782]: #5 {main}
    Feb 1 13:06:51 omvserver omv-engined[3782]: thrown in /usr/share/php/openmediavault/system/filesystem/filesystem.inc on line 898
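    For anyone wanting to poke at the same thing from the shell: the RPC the popup calls is visible in the trace above and can be replayed by hand, and the mount can be checked against what the kernel sees (the path in the trace is truncated, so substitute your own union mount point):

    Code
    # Replay the failing RPC that the GUI popup triggers
    omv-rpc -u admin 'ShareMgmt' 'enumerateSharedFolders'

    # Check what filesystem type the kernel reports for the union mount
    findmnt --target /srv/<your-union-mount>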


    My fstab looks like this


    Previously I was using a version of mergerfs, which was subsequently superseded by the unionfs plugin. That appeared to be OK after some stuffing around recreating the mergerfs pool; at that stage the drop-down GUI was working fine. The error has only been noticed today.

    I was using the mergerfs plugin to combine drives. After the update I loaded the unionfs plugin, prior to reading this thread, to try and resolve the error. As per the other user comments above, this appeared to resolve the reported error. I then decided to remove the existing mergerfs plugin as I wasn't using it. That brought the error back, but it was now reporting a unionfs config issue. I then reinstalled the mergerfs plugin and the error went away (yes, I know I've gone down the dumb road of multiple installs/uninstalls without keeping notes or testing in between steps).
    I did observe that the folders I was sharing all seemed to be intact and where I expected them to be.
    Only after all of the above did I try to access the 'File Systems' tab in the web GUI. There are no entries on the file systems page, just a small popup showing a 'loading' message.
    I don't know at what point in the above process this occurred, i.e. whether it was a result of my install/uninstall efforts or of installing the unionfs plugin over an existing mergerfs configuration.
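    In case it helps anyone in the same state: the RPC behind the 'File Systems' page can be run by hand while watching the engine log, which should surface the underlying PHP error (I believe the service/method names below are what the page calls, but treat them as an assumption):

    Code
    # Replay what the 'File Systems' page requests (names assumed)
    omv-rpc -u admin 'FileSystemMgmt' 'enumerateFilesystems'

    # Watch omv-engined for the resulting error in another terminal
    tail -f /var/log/syslog | grep omv-engined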

    Running OMV 5.0.12-1


    While deleting an NFS share, the following error is reported when trying to apply the changes:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run nfs 2>&1' with exit code '1':
    /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
      *salt.utils.args.get_function_argspec(original_function)
    /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
      *salt.utils.args.get_function_argspec(original_function)
    /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
      *salt.utils.args.get_function_argspec(original_function)
    debian:
    ----------
              ID: configure_default_nfs-kernel-server
        Function: file.managed
            Name: /etc/default/nfs-kernel-server
          Result: True
         Comment: File /etc/default/nfs-kernel-server is in the correct state
         Started: 10:50:01.873922
        Duration: 157.951 ms
         Changes:
    ----------
              ID: configure_nfsd_exports
        Function: file.managed
            Name: /etc/exports
          Result: True
         Comment: File /etc/exports is in the correct state
         Started: 10:50:02.032167
        Duration: 198.301 ms
         Changes:
    ----------
              ID: start_rpc_statd_service
        Function: service.running
            Name: rpc-statd
          Result: True
         Comment: The service rpc-statd is already running
         Started: 10:50:04.089224
        Duration: 102.912 ms
         Changes:
    ----------
              ID: start_nfs_kernel_server_service
        Function: service.running
            Name: nfs-kernel-server
          Result: False
         Comment: Job for nfs-server.service canceled.
         Started: 10:50:04.195751
        Duration: 246.768 ms
         Changes:

    Summary for debian
    ------------
    Succeeded: 3
    Failed:    1
    ------------
    Total states run:     4
    Total run time: 705.932 ms
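    The only failing state is the last one ('Job for nfs-server.service canceled.'), so the obvious next step is to ask systemd why it cancelled the job and then re-run the deploy by hand (standard systemd commands; the omv-salt line is the same one from the error above):

    Code
    # See why systemd cancelled the job
    systemctl status nfs-server.service
    journalctl -u nfs-server.service -b

    # Re-run the failing deployment manually
    omv-salt deploy run nfs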



    Apart from looking for some further information to help resolve this: is the forum the best avenue for reporting issues found during testing of beta software, or should this be done elsewhere?