Posts by bkeadle

    I'll hijack this thread. Here's my output:

    root@omv:/etc/apt# dpkg -l | grep linux
    ii console-setup-linux 1.88 all Linux specific part of console-setup
    ii firmware-linux 0.36+wheezy.1 all Binary firmware for various drivers in the Linux kernel (meta-package)
    ii firmware-linux-free 3.2 all Binary firmware for various drivers in the Linux kernel
    ii firmware-linux-nonfree 0.36+wheezy.1 all Binary firmware for various drivers in the Linux kernel
    ii libselinux1:i386 2.1.9-5 i386 SELinux runtime shared libraries
    ii linux-base 3.5 all Linux image base package
    ii linux-image-3.2.0-4-686-pae 3.2.73-2+deb7u3 i386 Linux 3.2 for modern PCs
    ii linux-image-686-pae 3.2+46 i386 Linux for modern PCs (meta-package)
    ii util-linux 2.20.1-5.3 i386 Miscellaneous system utilities

    So if I have a Hosts Allow entry, then ONLY the hosts defined in the allow list can access?

    That makes the description for Host Deny a bit misleading:


    This option is a comma, space, or tab delimited set of hosts which are NOT permitted to access this share. Where the lists conflict, the allow list takes precedence. In the event that it is necessary to deny all by default, use the keyword ALL (or the netmask 0.0.0.0) and then explicitly specify to the hosts allow parameter those hosts that should be permitted access. Leave this field empty to use default settings.

    I have a share that had this Hosts allow entry:


    and this Hosts Deny entry:


    I have these clients that could not write to the share until I removed the ALL Hosts Deny:

    Those clients should have been allowed by the Hosts Allow entry above.

    Do I have a syntax problem?
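
    For reference, a minimal smb.conf sketch of the deny-all / allow-some pattern the description suggests (the share name and addresses below are made-up examples, not my actual config):

    ```ini
    [backup]
       path = /media/example/backup
       ; Hypothetical addresses: deny everyone by default,
       ; then explicitly allow one subnet plus localhost.
       hosts deny = ALL
       hosts allow = 192.168.1.0/255.255.255.0 127.0.0.1
    ```

    Per that description the allow list should win when the lists conflict, which is exactly why the behavior I saw above surprised me.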

    Interesting. When I tried to enable the write cache, I got an error code and it wouldn't save. I just *ASSUMED* the failed BBU was the reason for it.

    So I just looked it up, and it's confirmed:


    For Controllers using the Battery Backup Unit (BBU)
    • BIOS reports "Error 01D2 in opcode 13. Press any key to continue" This message
    will appear when creating an array with cache enabled and the BBU is not ready (e.g.
    in testing or charging mode). Create arrays before starting any BBU tests, or create
    the arrays with cache disabled.

    I was able to boot into a 3.16 kernel from a LiveCD just to test and found that the performance was still bad. I checked the RAID controller to see whether the write cache was enabled - it WAS NOT. When I tried to enable it, it threw a cryptic error. I then noticed that the BBU of the RAID controller has failed, which is probably why I couldn't enable the write cache. Could *THAT* be why performance is so pathetic?!?
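
    For anyone else hitting this, a rough sketch of checking the BBU and unit cache state with 3ware's tw_cli (I'm assuming the controller is /c0 and the unit is /u0 -- adjust for your setup, and verify the exact commands against the tw_cli docs):

    ```shell
    # Guarded so it degrades gracefully when tw_cli isn't installed.
    if command -v tw_cli >/dev/null 2>&1; then
        tw_cli /c0/bbu show status   # BBU state; a faulted BBU blocks write cache
        tw_cli /c0/u0 show           # unit status, including the cache setting
        tw_cli /c0/u0 set cache=on   # expected to fail while the BBU is faulted
    else
        echo "tw_cli not installed"
    fi
    ```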

    Yes, I write to OF all the time. Yes, read speeds are similar. (And interesting...true...that my write speed would be faster than reads. Could just be how the tool is doing its thing.)

    But because this (OMV) is to be a backup device, I need performance on the write side. A backup job that used to take about 20 minutes was taking over 1.5 days, at less than 2 Mb/s.

    I suspect the answer to this problem is in here somewhere, but it's getting far too deep for me.

    I hate to give up and "downgrade" to OpenFiler; I desperately want OMV to be my solution, but if we're out of ideas... :/

    ryecoaaron, thank you so much for your attention and support. I'm hoping you still have some options/ideas for me.

    hwinfo showing 3ware driver for AMCC controller?

    Though, comparing to my OpenFiler install, it looks like the same driver (and version) is being used (kernel version=2.6.32-131.17.1.el6-0.11.smp.gcc4.4.x86_64)

    I stumbled through installing the 3.16 kernel manually. But sadly, it did not solve the problem:

    root@omv:~# uname -r
    root@omv:~# dd if=/dev/zero of=/media/947281ee-363e-455e-ac7d-fa42313a2962/output.img bs=3k count=256k conv=fsync
    262144+0 records in
    262144+0 records out
    805306368 bytes (805 MB) copied, 73.6799 s, 10.9 MB/s

    But in the OMV-Extras, I see HWRaid repository. Maybe I'll try to find something compatible with my RAID controller.

    I'm open to any other suggestions, though.

    When I click on "Install Backports 3.16 kernel" it opens an "Install kernel headers..." dialog with a Start button. I press Start and it says "Updating...", but doesn't seem to do anything after that. Looking at top, I'm not seeing anything evident running for this task. What should I be looking for? How do I know it's actually doing what it's supposed to be doing? I've been waiting for quite a while already.
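
    A quick way to tell from the command line whether the install actually did anything (package names are what I'd expect for a backports kernel image; verify for your release):

    ```shell
    # List installed kernel images; a 3.16 backports image should appear here.
    dpkg -l 2>/dev/null | grep linux-image || true
    # The running kernel only changes after a reboot into the new image.
    uname -r
    ```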

    Now we're talkin'! I'll give that a go and report back.
    8o (hopeful)

    BTW: I keep seeing these cool plugins and additional functionality. Badly needed is some interface to define monit alerts (thresholds). I've disabled the load average alerts because I was getting slammed by those alerts, but I'd like to be able to tweak them. I also would like to change the subject line that it sends. But my attempts to modify these alerts (following this post) haven't been successful.
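
    For the record, the kind of thing I'm after looks roughly like this in monit's own syntax (thresholds are made-up examples, and I don't know where OMV wants such overrides to live, so treat the path as an assumption):

    ```
    # Hypothetical monit override, e.g. a file under /etc/monit/conf.d/
    check system localhost
        if loadavg (5min) > 8 for 3 cycles then alert
        if memory usage > 90% then alert
    ```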

    I had had these options set, except for max protocol = SMB2

    I wasn't sure whether these were to be on the Settings page, or the extra options for the Share itself.

    Also, after setting those and Saving, I wasn't sure if I needed to disable then re-enable the settings for them to kick in.

    Whatever the case, with some mix of all that, I'm not seeing any noticeable benefit.
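
    For completeness, the sort of Samba tuning options commonly suggested for this look like the following (values are examples, not tuned recommendations; as far as I can tell they belong in the server-wide extra options rather than per-share):

    ```ini
    max protocol = SMB2
    socket options = TCP_NODELAY IPTOS_LOWDELAY
    use sendfile = yes
    aio read size = 16384
    aio write size = 16384
    ```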

    Comparing my "PD-DF140" (top) with "OMV" (bottom), this is telling:

    [root@pd-df140 ~]# dd if=/dev/zero of=/mnt/vg-sda2/sda2-shares/output.img bs=3k count=256k conv=fsync
    262144+0 records in
    262144+0 records out
    805306368 bytes (805 MB) copied, 10.5415 s, 76.4 MB/s

    root@omv:~# dd if=/dev/zero of=/media/5d27afc6-3886-4889-8f7f-4f831e781efc/output.img bs=3k count=256k conv=fsync
    262144+0 records in
    262144+0 records out
    805306368 bytes (805 MB) copied, 108.506 s, 7.4 MB/s

    But I'm not sure what it's telling me. This is the same hardware, so why such dramatic difference in this performance test?
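
    Since both runs above use the identical command, the bs=3k block size itself isn't the difference, but it does force very many small writes. Rerunning on the slow box with a much larger block size would show whether the write path is limited by small-request handling (e.g. with the controller cache off) rather than raw disk speed. A sketch, using /tmp only as an example path and a small total size to keep it quick:

    ```shell
    # Same total data (3 MB) with small vs. large block sizes;
    # compare the MB/s figures dd prints for each run.
    dd if=/dev/zero of=/tmp/ddtest.img bs=3k count=1k conv=fsync
    dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=3 conv=fsync
    rm -f /tmp/ddtest.img
    ```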

    In a loosely related post here, I questioned why iSCSI would be so much slower than SMB. Little did I know that SMB was unacceptably slow.

    So, any suggestions on how to identify the problem here? I have the same hardware running OpenFiler. Here's a comparison of the performance of OMV vs. OpenFiler on the same hardware. I'd really like to have OMV compete with OpenFiler, as it's much more polished and better supported.

    After much benchmarking to choose the best filesystem and configuration for OMV to be a backup target for vRanger backups (large files), I chose XFS as the filesystem and am serving it up with SMB. My existing vRanger server was backing up to a different device (DataDomain 510) and was providing good performance, so I know the vRanger server is configured acceptably.

    Having switched the backups to my OMV implementation, I was very disappointed to see abysmal performance, but I'm not sure why. I see my top process is smbd. Here are a couple of charts (since a picture is worth a thousand words), hoping someone might be able to offer some suggestions to make OMV a viable alternative: