Posts by RedneckBob

    Thanks, I found the version on the system information panel. One was 2.1 Stoneburner unpatched (i.e., a fresh install) and the other was 2.2.4 Stoneburner. Both systems have outstanding patches, so I'm going to get them to the same level and then dig into my NFS settings. I did a basic mount with no options, which is probably not a good idea.
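
    For the record, this is roughly what I mean by mounting with explicit options; the server name and paths below are placeholders, and I haven't settled on the exact values yet:

    # /etc/fstab on the client -- "omv-backup" and both paths are placeholders
    omv-backup:/export/data  /mnt/backup  nfs  rw,hard,vers=3,rsize=32768,wsize=32768,noatime  0  0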


    My goal here is to rsync the two systems each night so I have an extra copy of all my data. I started out using rsync over the wire with both ssh and rsh as options, but it left my slower system CPU bound. Then I switched to rsync over NFS, which performed beautifully, but overnight it started causing problems.
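
    For reference, the over-the-wire job was roughly along these lines (hostname and paths are placeholders, a sketch of the approach rather than my exact cron entry):

    # nightly job on the source box, pushing over ssh -- host and paths are placeholders
    rsync -a --delete -e ssh /srv/data/ backup-omv:/srv/backup/data/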

    I have yet to figure out why NFS was causing this, but I've only started debugging the issue. The NFS mount was between two OMV systems, but they were at different patch levels. I believe one was at 2.2 and the other at 2.1, though I'm not 100% positive because I don't know how to determine the version of OMV on the box.

    iotop shows me this. Looks like jbd2 (ext4 journal) is writing all the time.


    Total DISK READ: 0.00 B/s | Total DISK WRITE: 593.75 K/s
    TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
    2214 be/3 root 0.00 B/s 0.00 B/s 0.00 % 96.69 % [jbd2/md0-8]
    3109 be/4 root 0.00 B/s 39.06 K/s 0.00 % 14.12 % [nfsd]
    3111 be/4 root 0.00 B/s 46.88 K/s 0.00 % 13.67 % [nfsd]
    3110 be/4 root 0.00 B/s 42.97 K/s 0.00 % 10.08 % [nfsd]
    3115 be/4 root 0.00 B/s 39.06 K/s 0.00 % 9.99 % [nfsd]
    3112 be/4 root 0.00 B/s 42.97 K/s 0.00 % 9.56 % [nfsd]
    3108 be/4 root 0.00 B/s 50.78 K/s 0.00 % 8.62 % [nfsd]
    3114 be/4 root 0.00 B/s 50.78 K/s 0.00 % 8.19 % [nfsd]
    3113 be/4 root 0.00 B/s 42.97 K/s 0.00 % 7.39 % [nfsd]
    371 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.03 % [jbd2/sda1-8]
    1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init [2]
    2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
    3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
    6 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
    7 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0]
    8 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/1]
    10 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/1]
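
    One thing I still need to rule out (no idea yet whether it's the actual culprit) is whether constant atime updates are what keeps feeding the journal. Rough plan:

    # see how the md0 filesystem is currently mounted
    grep md0 /proc/mounts

    # if atime/relatime is on, a remount with noatime might be worth a try
    # (the mount point below is a placeholder for wherever md0 is mounted)
    mount -o remount,noatime /media/<md0-mountpoint>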

    I'm having an odd problem with my OMV 2.1 box where the load average is pegged at 9.0 or greater and the box is extremely slow. About half the CPU is tied up in IO wait (wa is 49.5) and half is idle (48.0). There is no single process taking up more than 3% of the CPU. In fact, md0_raid6 is the top CPU consumer and it is only 1% or 2%.


    I don't see any evidence the array is rebuilding, unless I don't understand how to interpret the output of mdadm.


    Any thoughts from the resident experts?


    top - 18:32:10 up 10:41, 3 users, load average: 9.34, 9.13, 9.07
    Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
    %Cpu(s): 0.0 us, 2.0 sy, 0.0 ni, 48.0 id, 49.5 wa, 0.0 hi, 0.5 si, 0.0 st
    KiB Mem: 7865624 total, 7683392 used, 182232 free, 331456 buffers
    KiB Swap: 4789244 total, 0 used, 4789244 free, 6790720 cached


    # mdadm -D /dev/md0
    /dev/md0:
    Version : 1.2
    Creation Time : Sat Sep 26 10:14:10 2015
    Raid Level : raid6
    Array Size : 19534435840 (18629.49 GiB 20003.26 GB)
    Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
    Raid Devices : 7
    Total Devices : 7
    Persistence : Superblock is persistent


    Update Time : Tue Jun 14 18:58:13 2016
    State : active
    Active Devices : 7
    Working Devices : 7
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Name : omv-6th:omv18T (local to host omv-6th)
    UUID : 363e30b4:3db6c23d:5c6d3a5a:47907af9
    Events : 196


    Number Major Minor RaidDevice State
    0 8 64 0 active sync /dev/sde
    1 8 80 1 active sync /dev/sdf
    2 8 96 2 active sync /dev/sdg
    3 8 128 3 active sync /dev/sdi
    4 8 144 4 active sync /dev/sdj
    5 8 160 5 active sync /dev/sdk
    6 8 176 6 active sync /dev/sdl
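
    For completeness I'll also keep an eye on /proc/mdstat, since a resync or check in progress shows up there as a progress line:

    cat /proc/mdstat
    # a rebuilding array would show something like "recovery = 37.2% (...)"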

    Wanted to write and say thanks to the OMV team. I'm always very reluctant to upgrade major releases, but I took the leap and had ZERO issues. Nada, none, zilch.


    1. From the GUI I went to Update Manager and made sure all the latest patches were applied.
    2. From the command line I ran omv-release-upgrade (rough commands below).
    3. I rebooted when done, cleared cookies on my browser just for kicks, and was able to login.
    4. Checked all the critical areas (RAID, NFS mounts, cron jobs) and after 3 days, no issues yet.
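
    In case it helps anyone, the command-line side of steps 1 and 2 was roughly this (I did the patching from the GUI, so the apt commands are just the equivalent I'd expect, not a verified transcript):

    # make sure the current release is fully patched first
    apt-get update && apt-get dist-upgrade

    # then kick off the major release upgrade
    omv-release-upgrade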


    Many thanks to the OMV team for such a seamless and painless major upgrade.


    -RB

    I had a small typo in my post, I wanted to say that "I've never considered a setup like that" rather than "Never consider a setup like that". Just a slight difference in meaning :)


    I tend to gravitate towards simplicity when configuring a NAS, so installing additional applications on the same physical box, running the NAS operating system in a VM on a hypervisor, or installing all the plug-ins tends to cause me concern.

    Hmm, I set my MTU to 6000 and the GUI does appear.


    Did you check the log files for nginx? They are located in /var/log/nginx and I'd check error.log first, then maybe poke around in access.log and openmediavault-webgui_access.log. Also, run dmesg from the command line and look for anything related to the MTU.
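
    Roughly what I'd run as a starting point:

    # recent nginx errors
    tail -n 50 /var/log/nginx/error.log

    # kernel messages mentioning the interface or the MTU
    dmesg | grep -iE 'eth0|mtu'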


    From my /etc/network/interfaces file:
    # The loopback network interface
    auto lo
    iface lo inet loopback
    iface lo inet6 loopback


    # eth0 network interface
    auto eth0
    allow-hotplug eth0
    iface eth0 inet dhcp
    post-up /sbin/ifconfig $IFACE mtu 6000
    iface eth0 inet6 manual
    pre-down ip -6 addr flush dev eth0
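
    And to confirm the change actually takes effect after the interface comes up, something like:

    # the output should report mtu 6000 if the post-up hook ran
    ip link show eth0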


    I did make the change via the GUI:


    Proxmox will run fine on an OpenMediaVault system, with the exception that iSCSI may not work with the custom kernel.


    Greetings
    David


    Install Proxmox and OMV, bare metal, on the same physical box?


    I've never considered a setup like that.


    For this specific configuration I have 3 physical servers: 2 are set up as Proxmox hypervisors and 1 is set up as a NAS. I'm slowly inching towards a High Availability setup with Proxmox, and unfortunately I've spent an extraordinary amount of time debugging the slow write speeds with nas4free/ZFS/NFS.


    I loved the idea of ZFS, but the amount of time I've spent tracking down the write speed issue is beyond ridiculous. That is what brought me to OMV. Oh, by the way, write speeds over NFS with OMV are fantastic! Also, I had two 3T SATA-III drives managed under LVM in OMV. My third drive arrived in the mail; I plugged it in, and a few clicks later the filesystem was resizing to accommodate the new drive. Fantastic!
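
    For anyone curious, the CLI equivalent of adding a drive and growing the filesystem is roughly the following; I haven't verified this is exactly what OMV runs behind the scenes, and the device and volume names are placeholders:

    pvcreate /dev/sdX                      # prepare the new disk as a physical volume
    vgextend myvg /dev/sdX                 # add it to the existing volume group
    lvextend -l +100%FREE /dev/myvg/mylv   # grow the logical volume into the new space
    resize2fs /dev/myvg/mylv               # grow the ext4 filesystem online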

    "The operation failed many times before succed"


    Questions:


    1. What is the exact error message?
    2. Is the error in OpenMediaVault or in the ownCloud desktop client?
    3. What are the exact steps you take to get the error?

    Gracias, subzero79. Good point about core services; I suppose I'm more concerned about impacting the drive array. Having said that, I suspect it too is largely affected by the underlying OS rather than OMV. I gain a considerable amount of comfort if I can get to the CLI.


    I have to confess, I'm really liking OMV. Up next is NFS performance testing from Proxmox. I've had absolutely unacceptable performance issues with nas4free and NFS. Really curious to start testing with OMV.
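
    My plan for a quick-and-dirty throughput check from the Proxmox side is just dd against the NFS mount (the mount point below is a placeholder):

    # sequential write test; conv=fdatasync makes dd wait until the data hits the server
    dd if=/dev/zero of=/mnt/pve/omv-nfs/testfile bs=1M count=1024 conv=fdatasync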

    Speaking of new releases, is it safe to update on a regular basis? Since October there have been 20 "stable" releases.


    I have a semi-important production environment that I'm converting from virsh + local storage to Proxmox + NAS. While it doesn't run a hospital, I would like to avoid downtime. I'm evaluating nas4free, freenas, OMV, and a couple of others. Comforting to see active users such as yourself and a lively forum.

    I'm new to OMV. Installed it a few days ago, then yesterday I installed a bunch of plugins, booted up the system today, and got a blank screen. I used FireBug to check for an error message in the GUI, but nothing. Checked the disk drive and yesterday I swear it was at 100%, which I thought was odd. However, today it isn't anywhere near 100% (currently sitting at 3% full).


    In /var/log/nginx/openmediavault-webgui_error.log I see:


    2015/01/06 07:50:40 [error] 2086#0: *23 FastCGI sent in stderr: "PHP message: PHP Warning: Invalid argument supplied for foreach() in /usr/share/php/openmediavault/htmlpage.inc on line 168" while reading response header from upstream, client: ::ffff:192.168.1.126, server: openmediavault-webgui, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm-openmediavault-webgui.sock:", host: "192.168.1.195"

    Line 168 is:


    // Append the additional Javascript files.
    foreach ($incList as $incListv) {
        print "<script type='application/javascript' src='{$incListv}'></script>\n";
    }


    Hmm, the JS include files are missing? Could the cache files be missing?


    root@openmediavault:/var/log/nginx# ls -atl /var/cache/openmediavault/cache*
    -rw-r--r-- 1 openmediavault openmediavault 0 Jan 5 14:06 /var/cache/openmediavault/cache.omvwebguilogin_js.json
    -rw-r--r-- 1 openmediavault openmediavault 9320 Jan 4 17:38 /var/cache/openmediavault/cache.omvwebgui_admin_js.json

    As you can see, one of the cache files is empty. That happened Jan 5th at 2:06 pm, which was around the time the disk was at 100%. The drive is a 120G SSD and I don't have a clue how it filled up.
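
    If it fills up again, I'll try to catch what's eating the space with something like:

    # largest top-level directories on the root filesystem only
    du -xh --max-depth=1 / | sort -h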


    In any case, I removed the cache files with this command:


    rm -fr /var/cache/openmediavault/cache*


    Reloaded the GUI and now I can log in. Wanted to document this in case others have the same problem.