Posts by ellnic

    System.Data.SQLite.dll.config


    Code
    <configuration>
      <dllmap dll="sqlite3" target="libsqlite3.so" os="linux"/>
    </configuration>
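

    (To double-check that the mapped target actually resolves, the loader cache can be queried; if only a versioned name like libsqlite3.so.0 shows up, a libsqlite3.so symlink may be needed:)


    Code
    ldconfig -p | grep libsqlite3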


    MediaBrowser.Server.Mono.exe.config



    ImageMagickSharp.dll.config


    Code
    <configuration>
      <dllmap dll="CORE_RL_Wand_.dll" target="libMagickWand-6.Q8.so" os="linux"/>
      <dllmap dll="CORE_RL_magick_.dll" target="libMagickCore-6.Q8.so" os="linux"/>
    </configuration>


    MediaBrowser.MediaInfo.dll.config


    Code
    <configuration>
      <dllmap dll="MediaInfo" target="./MediaInfo/osx/libmediainfo.dylib" os="osx"/>
    </configuration>


    ls -la /usr/local/lib


    Log:



    What did I miss?

    Permissions:



    I had imagemagick-common installed, so I removed it, installed your pkg, checked the config files, changed them as appropriate, and ran ldconfig.
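

    (One more thing I'll double-check: after ldconfig, the Q8 libraries named in the dllmaps should show up in the loader cache, roughly like this:)


    Code
    sudo ldconfig
    # no output from the grep would mean the libs still aren't on the loader path
    ldconfig -p | grep -E 'libMagickWand-6.Q8|libMagickCore-6.Q8'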


    When starting I get:


    Code
    /opt/mediabrowser # service mediabrowser start
    Removing stale pid file
    Starting mediabrowser
    ------------------------------------------------------------
    /opt/mediabrowser # service mediabrowser status
    Removing stale pid file
    mediabrowser is not running ... failed!
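

    (My next step will probably be to run the server in the foreground so the actual exception is printed rather than swallowed by the init script; /opt/mediabrowser is just where my install lives:)


    Code
    cd /opt/mediabrowser
    mono --debug MediaBrowser.Server.Mono.exe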

    The rule of thumb for ZFS is to use at least 4GB of RAM, or a gig per TB of storage, whichever is greater - however this isn't always the case. There are plenty of good ZFS setups with 16GB of RAM and 96TB of storage. I would say you need to follow the RAM rule up to 16TB; beyond that you'll probably be fine for most home applications.


    As others have said, ECC memory is an absolute must. ZFS protects your data from corruption and relies on trusting that what it gets from RAM is not corrupted. Without ECC you'll be in a situation where ZFS could unknowingly write corrupted data back to the disks, because it trusts that what the RAM has handed it is correct.


    I'm running ZFS on a HP N54L and haven't had any CPU bottlenecks so far so you should be fine in that regard. :)

    I've heard back from the seller and their answer was "don't know, sorry" :-/


    I also found this: 231032800368


    Looks more like the card you have, but the description says it supports port multipliers and doesn't mention RAID, so I guess it's the same card already in IT mode. It's from the U.S. though, so it will cost about £20 and I'll probably get charged import duty on that too. :-/


    The hunt goes on..


    Edit: got it :) 320890941897


    Edit 2: it looks like you enable either the internal or the external ports via a jumper. Also, I'll have to do something about that bracket, but that shouldn't be an issue. Wire cutters, pliers and a drill: hey presto, low profile >:-)


    Edit 3: do you know if the ASMedia ASM1061 works under Linux? There is also this: 261654850621, which is already low profile and faster.


    Edit 4: I seem to be finding the answers to my own questions ;) Found this: http://superuser.com/questions…ler-supported-under-linux


    Think I might take a chance on the ASMedia card. I'll wait until tomorrow to order it. I'm hoping no one will chime in and tell me it's flakey :P

    Nice build. I'm looking to do the same thing but with 10 drives in RAID-Z2. It should leave 2 bays free in the HP: one for an SSD cache (at some point) and the other as a spare slot. (My OS drive runs from the SATA port on the board next to the USB and sits in the optical bay.)
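

    (The SSD cache is only a plan for now; as far as I understand, adding an L2ARC device later is a one-liner, with the pool and device names below being placeholders:)


    Code
    zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_SSD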


    Do you mean one of these? eBay number: 151495721938


    That shouldn't be a problem; it's only £6.39 from HK. The x1 slot is still available, so this should work out nicely.


    Could you point me in the direction of the firmware you used? I've had a quick google but I'm getting Italian results. ;)

    Hi all,


    I've got a HP N54L and have been running OMV for a while. All is well, but I've been considering adding more storage after playing around with the almost complete ZFS plugin.


    I currently have 4 drives in the N54L's 4 bays in a soft RAID. I am considering moving those to a RAID-Z2 but expanding at the same time.


    The ideal number of drives for a RAID-Z2 is:


    4, 6, 10, 18, 34 etc. for 4K-sector HDDs.
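

    (That series is just a power-of-two number of data drives plus the two parity drives; a quick shell check of the numbers, if anyone wants to verify:)


    Code
    # prints 4 6 10 18 34: (2^n data drives) + 2 parity drives
    for n in 1 2 3 4 5; do printf '%d ' $(( (1 << n) + 2 )); done; echo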


    6 drives probably won't be enough and 18 is a little overkill for my needs, so I am considering 10 x 4TB drives.


    I have been looking around for a JBOD enclosure and have come across the Icy Box IB-3680SU3 8 bay enclosure.


    My question is: can anyone confirm that the eSATA port on the HP N54L supports port multipliers under Debian Wheezy? Also, has anyone plugged 8 disks into it?
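

    (If nobody knows offhand, I suppose I can check the kernel's AHCI capability flags myself once it's hooked up; 'pmp' in the flags line should mean port multipliers are supported:)


    Code
    dmesg | grep -i 'ahci.*flags'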


    Thanks :)

    This is interesting ;)


    On a reboot my ZFS pool isn't mounted. The pool isn't listed in the ZFS plugin panel... I try to reimport and I get:



    Hrm... 'zpool status Media' gives me:



    It appears that this is because the drives were originally sdd, sde, and sdf. They now appear to have been reassigned to sdb, sdc, and sdd.


    This could be a failing on my part. I expect I should have selected 'By ID' rather than 'By Path', but the hint "Specifies which device alias should be used. Don't change unless needed." made me leave it alone... Surely it should always be by ID, and that should be the default? Anyway, fixed with:


    Code
    zpool export Media


    then imported the pool again. :)
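

    (For anyone else who hits this: the re-import can also be pointed at the by-id device names explicitly from the shell, something like:)


    Code
    zpool import -d /dev/disk/by-id Media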


    After importing the pool again, it appears that the drives have now been imported by ID:



    Thus preventing this from occurring again.


    Is it possible to have the default option for 'Device alias' changed to 'By ID'? Especially if the user is warned not to touch it.