Posts by marz

    Linux newbie here, but would it be possible to add an option to set a umask value in the web GUI? As I understand it, the daemon currently runs with the default umask of 022, and I would like to change this.
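    For reference, this is the effect a umask has on newly created files (the temp directory is just for demonstration):

```shell
# The umask masks permission bits out of the default mode (666 for files, 777 for dirs).
tmp=$(mktemp -d)
umask 022 && touch "$tmp/a"   # 666 & ~022 = 644 (rw-r--r--)
umask 002 && touch "$tmp/b"   # 666 & ~002 = 664 (rw-rw-r--)
stat -c '%a' "$tmp/a" "$tmp/b"
```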

    Some nice little projects here. Might have to consider keeping my eyes open for one too ...

    Extra SATA ports would really make this thing a winner though, so I'm also in favor of the SATA m-PCIe BIOS mod. Unfortunately I'm not skilled enough either ...

    Too bad the chipset only supports 2 SATA ports, because it looks like there is room for an additional SATA connector next to the blue one. (I believe I read somewhere that some thin clients just need the connector soldered on.)

    For power consumption, maybe a picoPSU-90 with a high-efficiency (active power factor correction) power brick (using the P4 plug only) could even improve efficiency. And with the modded BIOS one could use the extra plugs to power more disks. Of course this would drive the costs up a lot.

    I would use the internal USB header to attach a female USB port with an mSATA-to-USB adapter plugged in. I'm using this setup in my current OMV system without problems so far, and that way I have two SATA ports free for data disks with RAID 1 or snapRAID.


    The reason the script in /etc/pm/sleep.d doesn't work for you is probably that you call the rtcwake command directly. That doesn't go through pm-utils, I believe, so the script is not called on resume. However, there is a way to combine rtcwake with pm-utils so a wake-up timer gets set when calling pm-suspend or pm-hibernate. The script also does not run when powering up/down or when using systemd (as per this link).
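    As far as I know, the usual pattern is to let rtcwake only program the RTC alarm and then suspend through pm-utils, so the hooks in /etc/pm/sleep.d still fire (needs root; the 3600 seconds is just an example):

```shell
rtcwake -m no -s 3600   # "-m no" only sets the wake-up alarm (1 hour), doesn't suspend
pm-suspend              # suspend via pm-utils so the sleep.d hooks run
```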

    That would be the best solution IMHO if the number of cycles are configurable with an environment variable in /etc/default/openmediavault like there is for LOADAVG (OMV_MONIT_SERVICE_SYSTEM_LOADAVG_1MIN_CYCLES=3), that way it would survive openmediavault updates. Maybe a feature request?

    A cycle is 30 seconds (see /etc/monit/monitrc). Since 30 seconds was not enough of a delay in my case (and apparently yours too), 2 cycles seems a good minimum value.
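    To illustrate what a cycle means in monit's own config syntax (illustrative values, not OMV's actual generated config):

```shell
# /etc/monit/monitrc (illustrative)
set daemon 30                     # poll every 30 seconds; one poll = one cycle
check system localhost
    if loadavg (1min) > 4 for 2 cycles then alert   # must hold for ~60 s before alerting
```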

    Ok, thanks, gonna look into it. Just thought I'd mention it although users worried about security probably would not be using mhddfs in the first place.

    So I am still using mhddfs (gonna look into unionfs) and was checking out my file permissions. During that process I noticed that, as a non-privileged user, I was able to chown any file/directory to e.g. root:users and vice versa for shares under the mhddfs path (e.g. /media/mhddfs_guid).

    Is this an added "bonus" of using mhddfs?

    To be clear, a file owned by "root:users" under a share exported by mhddfs could be changed to be owned by "my_username:users" and vice versa. Verified with a clean OMV install.
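    For anyone wanting to reproduce it, the check was essentially this (paths and usernames are examples from my setup):

```shell
# As a regular, non-root user, under the mhddfs mount point:
ls -l /media/mhddfs_guid/share/somefile                # shows root:users
chown my_username:users /media/mhddfs_guid/share/somefile   # should fail, but succeeds under mhddfs
```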

    Yes, thank you, finally figured it out now too.

    The key is setting "ClientAliveCountMax" to "0" and not some other value. Most resources were either trying to keep the connection alive and the others did not expressly state this, hence my confusion. Ah well, sometimes it's in the details.
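    So for the record, the combination that worked for me looks like this (the interval value is just an example):

```shell
# "Extra options" in the SSH service (ends up in /etc/ssh/sshd_config)
ClientAliveInterval 300   # consider the client idle after 300 s without traffic
ClientAliveCountMax 0     # 0 = drop the connection instead of sending keep-alive probes
```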

    I think the documentation does not make this very clear; or at least I am very bad at interpreting it, since the behaviour is only implied if you read it thoroughly (5-10 times in my case :thumbsup: ).

    Ok, never mind, I was wrongly assuming that a client without any user input for the specified amount of time would be disconnected. That's not what happens though; it disconnects only when the client is unresponsive, e.g. the computer goes to sleep without closing the connection.

    So how to close an idle connection?

    So I'm trying to configure the SSH service to disconnect an idle client automatically after a certain amount of time.

    I added the following to the "Extra options" textfield:

    ClientAliveInterval 30
    ClientAliveCountMax 3

    I saved, then disabled and re-enabled the service (and also tried sudo service ssh restart from the command line), but it doesn't seem to work. What am I forgetting or doing wrong?

    Output from "w" through another ssh session:

    xxxx pts/0 mbp.lan 21:12 1.00s 0.20s 0.00s w
    xxxx pts/1 kodi.lan 21:25 4:42 0.08s 0.08s -bash

    Session from kodi.lan doesn't disconnect after 90 seconds. I am not using any ServerAliveInterval option on the client.

    My sshd_config file is as follows:

    So, I'm trying to create a RAID 1 with a few partitions. I followed the steps I believe are necessary which are:

    1) create all partitions equally on both drives
    2) create the arrays and create a filesystem on them
    3) update mdadm.conf (DEVICES and ARRAY lines)
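    As a sketch, the steps above with two example drives (sda/sdb; device names are hypothetical, double-check everything before running commands like these, they are destructive):

```shell
# 1) identical partition tables on both drives (sfdisk can clone one to the other)
sfdisk -d /dev/sda | sfdisk /dev/sdb
# 2) create the array and put a filesystem on it (repeat per partition pair)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
# 3) record the array in mdadm.conf and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```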

    Now, I noticed that upon reboot mdadm assigns the device names /dev/md125 /dev/md126 and /dev/md127 instead of the originally assigned device names i.e. /dev/md0 etc.

    The "solution" to this would be calling

    update-initramfs -u

    but as I understand it, this would have to be done each time the kernel is updated?

    Is this procedure about right? Any other tips or best practices I should be aware of?

    No, the code I posted yesterday was for the "Resource limit matched Service fs_media_d70c9d42-7315-42d3-8e4b-9d16e1806b50" notification you get when one of your filesystems reaches > 80% usage. I use it to keep monitoring some filesystems but exclude the ones I know are full and don't want to receive a notification about every time I wake/power up OMV.

    The "CPU wait usage" notification is the first block of code. That one still seems to work fine on 2.1.x, but maybe change the script name to 99custom-monit (I edited my post), although it shouldn't make a difference. If you made the script as described above, check whether it is executable with "ls -al"; if not, "chmod +x 99custom-monit" and retry.

    I don't understand why OMV gives me this error even though my CPU is an i3 with 16 GB RAM.
    It only happens when it resumes from hibernation/sleep/off.

    First, I am not an expert on this but this is the way I see it:

    It takes some time for the filesystems to become available after wake-up/resume because mechanical drives are much slower than the CPU. CPU wait usage is the time the CPU spends waiting for I/O to complete, just sitting there doing nothing. Because of this, its wait usage is > 99% right after resume; it has nothing to do with how fast your CPU is. On a normal boot there is a 30-second delay before monit starts; my script emulates the same behaviour on resume from sleep or hibernation.

    Also, monit checks at 30-second intervals, so only the first check after resuming is missed. You'll notice that the time between "resource limit failed" and "succeeded" always equals about 30 seconds.
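    If you're curious, the iowait figure monit reacts to ultimately comes from /proc/stat; you can peek at the raw counter yourself:

```shell
# The 5th value after "cpu" in /proc/stat is the cumulative iowait time (in jiffies).
awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat
```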


    Sorry I didn't reply earlier; I didn't get a notification because that setting was turned off by default and I didn't know. Now, my earlier post was for OMV 1.9. Could it be you are running 2.1?

    I recently upgraded to 2.1 and also noticed it didn't work anymore, so here is the version for 2.1 (which is much cleaner and will probably work for 1.9 too).

    But first, backup the original to the root home folder:

    cd /usr/share/openmediavault/mkconf/monit.d
    cp filesystem ~/filesystem_2.1_ori

    Change the "#Monitor mounted filesystems." line in /usr/share/openmediavault/mkconf/monit.d/filesystem to:

    # Monitor mounted filesystems.
    xmlstarlet sel -t \
    -m "//system/fstab/mntent[not(contains(opts,'bind') or contains(opts,'loop') or contains(fsname,'your-fsname') or contains(fsname,'your-fsname'))]" \

    Replace "your-fsname" with the ridiculously long number (the filesystem UUID) you get in your mailbox, or look in /media for the ones you want excluded. Add as many as needed.

    Make config:

    omv-mkconf monit

    If you want, check out the resulting config file; it should not include the filesystems specified.

    more /etc/monit/conf.d/openmediavault-filesystem.conf

    and then restart monit

    service monit restart

    Again, you could make a copy in another directory to avoid losing the changes when updating.

    TBH, I didn't actually count them, it was just a personal impression. The plugins you linked to are for the previous version, and there are quite a lot of plugins scattered throughout unRAID's forums. On top of that, since version 6 supports Docker containers, these can be considered plugins too, and then it probably starts adding up. But I saw someone is working on a Docker plugin for OMV too, so ...

    I was just giving my personal view: unRAID seemed to have more plugins I might find interesting to use. Then again, OMV has a few which I consider very valuable and that work much more stably, plus most of the other plugins I want. But I am still getting to know OMV and only have experience with unRAID 5, so maybe my opinion should be taken with a grain of salt.

    Technically, it can use a USB flash drive, but because of OMV's many writes and the limited write endurance of most flash drives, it is not recommended. If you don't mind replacing your USB flash drive frequently and have a good backup of your current config (great functioning plugin for that, by the way, you can make backups on a headless system), it is possible.

    In my case I had problems making it work; even with a notebook drive in a USB enclosure it failed after 24 hours or so. I didn't take the time to investigate further, though I may do so in the future. Maybe you'll have more luck than me, but you'll also have to live with the lack of S.M.A.R.T. data this way.

    Another option I was considering is one of those PCIe SSDs, but they get kind of expensive and your motherboard must support booting from them. Depending on where you're from, you might be able to pick up a regular SATA-to-PCIe adapter for $20 or so. Maybe that is an option too.

    The hard drive can be part of a snapRAID setup; I don't think it is a good idea to include it in a soft RAID config even when possible. You can use it for a VM or as a regular share for backups, for instance.

    UPDATE: There is a much cleaner method posted in my next post.

    Ok, I whipped something up for #2 also. Thought I'd share, maybe someone finds it useful. Don't blame me if your monitoring stops working though.

    So, first make a copy in another directory of /usr/share/openmediavault/mkconf/monit.d/filesystem so you can revert back to the original when necessary (forgot that one myself of course :thumbsup: ).

    Then replace the part from "#Monitor mounted file systems." with this:

    Note the long numbers on line 15; replace these with your own values for the file systems to ignore, and don't forget the asterisks before each number. Instead of looping through all mounted filesystems and writing a check command for each, it replaces the commands for these specific file systems in monit's config file with a comment.


    omv-mkconf monit

    If you want, check out the resulting config file

    more /etc/monit/conf.d/openmediavault-filesystem.conf

    and then restart monit

    service monit restart

    Test and make a copy in another directory when working as expected because it might get overwritten when updating. Finally, enjoy your clean inbox!

    I've been using unRAID 4.x and 5.x for a while but am now making the switch to OMV.

    But let's clarify something first: unRAID is not RAID, it's parity. Second, neither OMV nor unRAID use the "raid card on the motherboard" OOTB.

    IMO, it all depends on your requirements, which you haven't stated. Therefore, I would suggest you try out OMV since it is free, set up snapRAID and some other plugins, and see how you like it. The biggest differences for me are:

    unRAID: real-time parity; moderate SMB performance without a cache drive, but good enough for some basic file sharing; Slackware based, so good luck trying to do anything out of the ordinary; reiserfs = trying to lock you into their system; read-only filesystem, so again: not easy to work with; sleep doesn't work OOTB for a lot of motherboards; and it isn't free. I also was not impressed with a lot of the plugins (they didn't work, lots of problems, ...), although this may have improved with their support for Docker.

    OMV: the webGUI looks much cleaner and more detailed OOTB; plugins can be completely configured from the webGUI; Debian based; no read-only filesystem, so easier to work with; sleep works fine; and it's free. Combined with snapRAID for media and a software RAID 1 for files/backups this makes a good alternative IMO. Somewhat fewer plugins, but you get what you pay for, and if you really need something it should be possible to set it up. A rather big disadvantage is the need for a SATA port for the boot drive.

    So far I am much more impressed with OMV even though it isn't perfect either.

    I'm in the process of migrating from unRAID to OMV + snapRAID + mhddfs and am trying to work out some minor problems. Anyone care to help?

    1) The first one is more of a validity check. I've been getting the following alerts every time I resume from sleep (using the autoshutdown plugin). They occur almost instantly after resuming, and the monitored state returns to normal after 30 seconds. This behaviour is also discussed in this thread.


    Resource limit matched Service localhost

    Date: Tue, 28 Apr 2015 15:27:30
    Action: alert
    Host: skeelo.lan
    Description: cpu wait usage of 100.0% matches resource limit [cpu wait usage>95.0%]


    Resource limit succeeded Service localhost

    Date: Tue, 28 Apr 2015 15:28:01
    Action: alert
    Host: skeelo.lan
    Description: 'localhost' cpu wait usage check succeeded [current cpu wait usage=9.4%]

    Even though the aforementioned thread talks about CPU load, I believe this is a measure of the CPU waiting on I/O, more specifically in this case waiting for the disks to spin up and become ready. The suggested solution (i.e. adding OMV_MONIT_SERVICE_SYSTEM_CPUUSAGE_WAIT=99 to /etc/default/openmediavault and reconfiguring monit) does not work either: the cpu wait usage alert is still triggered, as per the following (note the adjusted limit of 99%).


    Resource limit matched Service localhost

    Date: Tue, 28 Apr 2015 20:36:29
    Action: alert
    Host: skeelo.lan
    Description: cpu wait usage of 100.0% matches resource limit [cpu wait usage>99.0%]

    Therefore I came up with the following:

    I noticed monit was started with a delay of 30 seconds upon boot, so I figured there must be a way to retrigger that delay on resume. Thus I created a script in /etc/pm/sleep.d named e.g. 99custom-monit (the high number makes it run early after resume, since pm-utils runs the hooks in reverse order when resuming) which contains the following code:
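    A minimal pm-utils hook doing what's described (stop all monitoring on sleep, restart it 32 seconds after resume) would look something like this; a sketch reconstructed from the description, assuming monit's CLI is on the PATH:

```shell
#!/bin/sh
# /etc/pm/sleep.d/99custom-monit (sketch)
case "$1" in
    suspend|hibernate)
        monit unmonitor all
        ;;
    resume|thaw)
        # give the disks time to spin up before monitoring resumes
        (sleep 32 && monit monitor all) &
        ;;
esac
```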

    It simply stops all monitoring on sleep and restarts all monitoring 32 seconds after resume. My testing confirms I get no more alert messages. I only hope I'm not creating some other problem down the line, that's why I wanted to ask someone more knowledgeable about the validity of this workaround.

    2) Monit, kind as she is, also alerts me when the space usage of any file system goes above 80%. Problem is, I don't need this info for every filesystem: I know some of them are full and am fine with it. Is there a way to disable the monitoring of a few specific file systems while keeping the others monitored? I already tried editing /etc/monit/conf.d/openmediavault-filesystem.conf, commenting out the lines I did not need, running omv-mkconf monit and restarting monit, but my changes aren't persistent (presumably because omv-mkconf regenerates the file).

    3) I don't know if this has been suggested before but when removing shared folders through the webGUI, it asks to delete the content of the folders recursively. I really think this is rather dangerous because a single moment of carelessness can cause a lot of problems. I think it would be better to handle this with a separate button or with a checkbox (default unchecked) to create the same behaviour. This requires a well-thought-out action on behalf of the user, instead of mindlessly clicking a button underneath a bunch of words in an alert box which most people tend to ignore most of the time.