Posts by micsaund

    Yep, AHCI is on. Here are the BIOS settings:


    Here is the mdadm output:



    Oh, and I just disabled that aggressive link power management and re-tested with the same results @ ~14.1MB/sec.
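

    For reference, I believe the non-BIOS way to do the same thing is via sysfs, roughly like this (the host* glob and the policy file are assumptions about what the kernel exposes):


    Code
    # Show the current SATA link power management policy for each host
    cat /sys/class/scsi_host/host*/link_power_management_policy

    # Force max_performance, i.e. disable aggressive link power management
    for h in /sys/class/scsi_host/host*/link_power_management_policy; do
        echo max_performance > "$h"
    done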


    Thanks!
    Mike

    Installed the backports kernel from omv-extras:


    Code
    login as: root
    root@omv's password:
    Linux omv 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt2-1~bpo70+1 (2014-12-08) x86_64
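

    For anyone doing this by hand instead of through omv-extras, I believe the equivalent on wheezy is the usual backports route - something like this, assuming wheezy-backports is already in sources.list:


    Code
    # Pull the backports kernel metapackage from wheezy-backports
    apt-get update
    apt-get -t wheezy-backports install linux-image-amd64
    reboot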


    But, performance remains similar:


    Code
    omv ~ $ dd if=/dev/urandom of=./test.dd bs=1048576 count=2048
    2048+0 records in
    2048+0 records out
    2147483648 bytes (2.1 GB) copied, 161.29 s, 13.3 MB/s
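

    One thing I probably need to rule out is /dev/urandom itself being the bottleneck rather than the array; a quick check that leaves the disks out of the picture entirely would be something like:


    Code
    # Measure raw /dev/urandom throughput with no disk writes involved
    dd if=/dev/urandom of=/dev/null bs=1048576 count=2048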


    Thanks,
    Mike

    Hi all,


    I just finished building a brand new machine for my OMV server. It's pretty beefy, with the intent of (someday) using ZFS/BTRFS once it's 'baked into' OMV.


    Anyway, doing some very basic tests, I'm seeing write speeds that just don't make sense. I'm used to a single SATA drive being able to do (conservatively) about 50MB/sec. However, here's what I'm getting on OMV:


    Code
    omv ~ $ dd if=/dev/urandom of=./test.dd bs=1048576 count=2048
    2048+0 records in
    2048+0 records out
    2147483648 bytes (2.1 GB) copied, 155.324 s, 13.8 MB/s


    That's, IMHO, exceptionally slow for a 4x2TB WD Red RAID5 array when the CPU is not even close to pegged.
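

    For comparison, a write test that doesn't depend on /dev/urandom and that forces the data to actually hit the disks would look roughly like this (the filename is just a placeholder):


    Code
    # Sequential write of 2 GiB of zeros, flushed to disk before dd reports the rate
    dd if=/dev/zero of=./test-zero.dd bs=1048576 count=2048 conv=fdatasync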


    Read speeds are in a range that seems more reasonable:


    Code
    omv ~ $ dd if=./test.dd of=/dev/null bs=1048576
    2048+0 records in
    2048+0 records out
    2147483648 bytes (2.1 GB) copied, 23.5908 s, 91.0 MB/s
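

    Since the box has 16GB of RAM and the test file is only 2GB, that read number may be partly page cache; a cache-cold re-read would be roughly:


    Code
    # Drop the page cache so the re-read actually comes from the array
    sync
    echo 3 > /proc/sys/vm/drop_caches
    dd if=./test.dd of=/dev/null bs=1048576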


    Any ideas how the hell brand new hardware is getting a measly 13MB/sec write speed? What can I do to bring this up to something usable for running VMs/etc.? Are my expectations for software RAID in the wrong ballpark? I'd think that modern hardware wouldn't even break a sweat calculating parity at 5x that rate...
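

    The only md tunable I know of off-hand for RAID5 write speed is the stripe cache, which I haven't touched yet - something like this, assuming the array shows up as /dev/md0:


    Code
    # Check the current RAID5 stripe cache size (in pages), then bump it
    cat /sys/block/md0/md/stripe_cache_size
    echo 4096 > /sys/block/md0/md/stripe_cache_size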


    Anyway, since I'm not sure what might be wrong, I'm also not sure what to post - please ask for anything I haven't provided.


    Build:

    • ASRock E3C224D2I server mobo
    • Intel E3-1231v3 Xeon CPU
    • Crucial 16GB ECC (2x8GB) CT2KIT102472BD160B
    • 4x2TB WD Red drives (RAID5 in OMV) connected to the four SATA3 ports
    • 1x320GB Hitachi 2.5" drive for OMV boot/install drive
    • OMV 1.9 fresh install as of last night


    Thanks!
    Mike

    Thanks for the input, both of you. I'll go back to ext4 now and hope that my NFS problem goes away again as a result. I was just hoping to get onto an 'advanced' filesystem sooner rather than later, since the more data I build up, the harder it will be to transition in the future...


    Mike

    Hmm, I don't see the dev section to browse, so let me ask - what is the ZFS roadmap? I came to BTRFS because ZFS wasn't available either, and it sounded like BTRFS was more 'baked in'. I'm fine with either - I'd just like one or the other.

    Hi all,


    In light of my recent challenges with the btrfs upgrade, I've decided to completely destroy and rebuild the RAID and filesystem on my OMV 1.7 box.


    Right now, the RAID5 is building (about 2 hours remaining), so I figured I'd ask: what is the 'right' way to format the resulting volume with BTRFS? I don't see that format available in the pulldown (only EXT4/3, XFS, and JFS). I understand that BTRFS is not fully supported in the WebUI yet, so I wanted to see what the advised way to do this is, so that I have a maximal chance of things "just working" (as one would want from storage) in the future.
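

    In case the answer turns out to be "just do it from the shell", my guess at the manual route would be something like the following (the device name and label are assumptions), but I'd rather hear the supported way first:


    Code
    # Create a btrfs filesystem directly on the finished md array
    mkfs.btrfs -L data /dev/md0
    # Mount it somewhere so it can be shared out
    mkdir -p /media/data
    mount /dev/md0 /media/data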


    Thanks!
    Mike

    Hi all,


    My OMV is the latest release with all patches (1.7).


    The other night, I decided to upgrade the filesystem on my RAID to BTRFS (still using mdadm). So, I unmounted the filesystem and ran the btrfs-convert tool on the volume.
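

    For reference, the conversion itself was nothing exotic - essentially the standard in-place steps, roughly like this (the array device name is assumed to be /dev/md0 here; the fsck is the usual recommended precaution):


    Code
    # Unmount the ext4 filesystem on the array and check it first
    umount /dev/md0
    e2fsck -f /dev/md0
    # In-place conversion from ext4 to btrfs
    btrfs-convert /dev/md0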


    Everything went OK, and apart from having to adjust the shared folders to use the new UUID name and minor stuff like that, I'm up and running (CIFS/SSH/etc.).


    Except for my ESXi host that was running from the NFS share...


    Now, when I try to browse the NFS share from the VI Client (VIC), it shows the two VM folders at the top level, but when I click into a folder, I see nothing. On the ESXi command line, if I go into the folder and type 'ls', I see this:



    Looking in the vmkernel.log, I see this repeated many times every time I try to view one of the NFS dirs:


    Code
    2014-12-26T21:21:29.803Z cpu0:41172)WARNING: NFS: 1359: File handle too big (60)
    2014-12-26T21:21:29.803Z cpu0:41172)WARNING: NFS: 1359: File handle too big (60)
    2014-12-26T21:21:29.804Z cpu0:41172)WARNING: NFS: 1359: File handle too big (60)
    2014-12-26T21:21:29.805Z cpu0:41172)WARNING: NFS: 1359: File handle too big (60)
    2014-12-26T21:21:29.805Z cpu0:41172)WARNING: NFS: 1359: File handle too big (60)


    Note that this ESXi host was working just fine, running VMs from the NFS share like this, prior to the BTRFS upgrade. Is there some kind of oddity with BTRFS that is making its way through NFS to the client and upsetting it? The share ACL for 'unknown' is r/w/x and the share is set up in OMV as r/w.


    I'm at a loss for what might be going wrong here, especially since it worked just fine for months up to the upgrade, and I thought the point of NFS was to hide filesystem details from clients. Right now, I'm scp-ing the VM files from OMV onto the ESXi host's internal disk just fine, so all is not lost. But I'd prefer to run the VMs from the NFS share to take advantage of the RAID/etc.
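

    One test I suppose I could do to narrow it down is to mount the same export from a plain Linux client and see whether directory listings work there - that would at least tell me whether it's the ESXi NFS client choking on the file handles or something on the server side (the host and export path below are placeholders):


    Code
    # Mount the OMV export over NFSv3 from a Linux test box and try a listing
    mount -t nfs -o vers=3 omv:/export/vms /mnt/test
    ls -la /mnt/test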


    Thanks,
    Mike