Posts by Markess

    This was an OEM card, and not sold directly by LSI. That doesn't matter from a specification and update standpoint though, as @raulfg3 notes above.


    This card was sold by HP; HP's model number was XP310AA. Here's a link to HP's product specification overview, which may be helpful to you. It's not directly from HP's site (I couldn't find it there), but it is the HP document:


    LSI 9212 QuickSpecs


    If you're using LSI firmware, versions P10 and higher have no limitation on disk size, so you can use any size you want. I believe the current firmware version is P20.


    Your card has individual SATA ports for each disk, so you're limited to 4 disks total. Many other cards with the LSI SAS 2008 chip had "high density" SFF 8087 connectors that could connect to expander cards and increase the number of disks, but that isn't an option here.


    You can use SATA or SAS disks. But if you use SAS disks, you'll need to have power and data cables that work with SAS, because the SAS connectors are slightly different.

    I'm building a backup NAS based on a "new" BCM MX67QMD industrial motherboard I ran across for cheap. I need the SATA ports for data disks, so I need to pick a different solution for the system drive. The board only has USB 2.0, but it also has a Compact Flash port based on the JMicron JMB368/InnoDisk IDB368 controller. This is a UDMA 6 (133MB/s maximum) PATA controller running on a single PCI-E lane. I also happen to have an unused 64GB CF card in my spare parts box.


    So, I was wondering if there was any particular advantage/disadvantage to using the CF card as the system drive, rather than a USB 2.0 flash drive? I'd be using the flash plugin in either case. Thanks!
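For what it's worth, a rough back-of-envelope comparison of the two interfaces. The throughput figures below are assumed nominal/typical numbers, not measurements from this board:

```python
# Assumed figures: USB 2.0 rarely exceeds ~35 MB/s in practice, while
# UDMA 6 (ATA/133) allows up to 133 MB/s at the interface. Either one is
# likely plenty for an OS drive with the flash plugin, since most I/O
# stays in RAM after boot.
usb2_effective = 35    # MB/s, typical real-world USB 2.0 throughput
udma6_max = 133        # MB/s, ATA/133 interface ceiling
print(round(udma6_max / usb2_effective, 1))  # CF interface headroom vs USB 2.0
```

Either way, the interface is unlikely to be the bottleneck for an OS drive; the quality of the flash media itself probably matters more.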

    Interesting indeed. Thanks.


    I was thinking of something when you suggested a custom wood case: is 3D printing feasible (for those like me who aren't that comfortable with a circular saw :/ )? Is it easy to design a model (with the proper software (links?))? But that means I've got to find someone nearby with a 3D printer, and I don't know if that's cheap either.


    Good ideas so far. Even more?

    For designing your own parts for 3D printing or fabrication, there are a lot of free options to get started with. The paid programs are often better, but a free option lets you test whether it's something you're interested in. The issue is going to be the learning curve: even the free options can get complex, and they don't always have a lot of documentation. You can try Tinkercad and see if you like it. It's a good choice to start with because there are a number of tutorials online. Simply google Tinkercad plus what you're trying to do (i.e. create a hole in an object, resize, copy, etc.) and someone has documented it for you, often with video.


    For inspiration or designs to start with, you can try Thingiverse, which has a lot of 3D printer project files for free download. You sometimes need to get creative there with searches. I tried "banana pi", "NAS" and "pi server" and got a lot of hits. There's one for a Raspberry Pi and 4 disks, which could be adapted I suppose. Plus a Banana Pi server and others. I didn't look through all the files, but you get the idea.



    If you see something you like, you can also download the files and import them into Tinkercad to modify them to suit your needs. My computers are full of little parts I've made that way (mounts, ducts, etc.).



    There are services that will 3D print your part for you. Ones like 3D Hubs are more commercial, but there are some that will simply connect you to local people willing to print your part. It's very much dependent on where you are; some services only work with people in certain countries. I know that where I am in central California, there are 10 or so people within 10 km that will print things to order.



    If you are having someone else do the work, another option is to design the parts and have someone cut them from acrylic or wood with a laser cutter. Metal is an option, but the lasers that cut metal cost a lot more, so you usually get charged accordingly. Tinkercad will do models for laser cutting too, but personally I use Inkscape for those designs. I made a server for my home office with two disk "trays" cut from acrylic that hold 4 disks each, for example. I made them to simply screw down to the motherboard tray in an ATX-sized case that had an mITX motherboard in it, so there was room next to the motherboard. Laser cutters are less fussy than 3D printing, but you're limited to flat pieces for the most part.

    Well, a four port NIC will provide four discrete networks to the machine, with all the complexity that brings with it. Unless you bridge the ports, and then it behaves like the switch you want to eliminate.

    I thought I'd try setting it up with three peer-to-peer connections first to see how it worked (NAS to desktop, and NAS to each ESXi host). I know pretty much how to do that. If that didn't work well, then I'd have to do some reading about how to set it up for switching. I had most of the hardware already, and found the 4 port Chelsio NIC for $15 (US), so I thought it was a low cost experiment.


    What I had no idea about was if the interface for the OS disk impacted throughput. If the OS on USB 2.0 isn't going to affect performance, I think the USB DOM is easiest to do. Nothing sticking out on the outside of the box for me to bump (I'm pretty clumsy).

    What are you going to do with a four port NIC in the PCIE slot?

    Currently, everything with a 10GbE NIC runs through a layer 3 switch (my desktop, two servers running ESXi, and a NAS running OMV). But since all the higher speed traffic is to/from the NAS, I thought I'd experiment with direct connections to the NAS and cut out the switch entirely. So this "new" machine would have the onboard Gigabit connection for updates and other users, plus three 10Gb direct connections for my desktop and the two servers.
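For reference, each point-to-point link wants its own small subnet so routing stays unambiguous. Here's a sketch of what the /etc/network/interfaces stanzas could look like (interface names and addresses are made up for illustration; OMV normally manages networking through its web UI, so treat this only as a picture of the addressing plan):

```text
auto enp2s0f0
iface enp2s0f0 inet static
    address 10.0.1.1/30      # link to desktop (10.0.1.2)

auto enp2s0f1
iface enp2s0f1 inet static
    address 10.0.2.1/30      # link to ESXi host 1 (10.0.2.2)

auto enp2s0f2
iface enp2s0f2 inet static
    address 10.0.3.1/30      # link to ESXi host 2 (10.0.3.2)
```

A /30 gives exactly two usable addresses, which is all a point-to-point link needs.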


    No....I don't really need 10Gb speeds all that often, but I had the hardware and like tinkering.

    I'm re-purposing a PC for OMV with all flash storage and 10Gbe. Data disks are occupying all the SATA ports on the ASRock Industrial IMB-181-L motherboard, with a 4 port 10Gbe NIC in the PCIE slot....so no SATA expansion options for an OS disk. The board has external USB 3.0 but only USB 2.0 internally. I'm doing this project with parts I had on hand, and I'm trying to determine the best place to put the OS with what I've got.


    Normally, I'd go with an internal USB 2.0 solution and the Flash Media Plugin, because it's physically tidy without any wires. But with all flash storage and four 10GbE ports, I don't know if the OS on USB 2.0 would have any impact? Here's what I have on hand to work with; any advice appreciated!


    1. 16GB USB 2.0 Industrial DOM on an internal header
    2. 32GB USB 3.0 flash drive externally
    3. 120GB Flash Drive in an external USB 3.0 enclosure
    4. Install the OS to one of the Data Disks and re-partition after the install.

    If there's good contact between the case and the plastic of the drive caddies, then you can also use a two-part epoxy adhesive, such as JB Weld or similar. This is going to make a "mostly" permanent bond though, so if you need to remove the caddies frequently to access the drive hold-down screws, this wouldn't be an option. With the slightly flexible plastic drive tray inserts, so long as the drive tray surface is left smooth, you can usually pop the cage off after the glue sets if you have to. But it's not something you want to do regularly.

    How do you find the AtomicPi? If you don't mind my asking, what do you plan to use it for? I almost grabbed one when it was on Kickstarter earlier this year. But, as I didn't have a use case already in mind, I decided to pass and instead keep the spousal approval factor a couple points higher.

    Do you have any guidance/best practices on how to set this up?

    I'd suggest you take a look at how they work first and see if it's something you're interested in. The SnapRAID site has some good info comparing how it and other solutions work: https://www.snapraid.it/ . If you decide to use them (I noticed that the MergerFS plugin is actually called Unionfilesystems), then you can find tutorials here and on the web. Here are a couple of examples. Not my work, so thanks to the authors.


    Forum member flmaxey has posted lots of good info here on the forums; this is just one example.



    This project log from the web is interesting, as the author goes into a little depth on the practical uses of both plugins. It was done with earlier versions of the plugins though, so the screenshots will differ a bit from the current plugins.

    Can you enlighten me on why using RAID5 and hardware backed raid are a waste of time/money?

    I suppose there's a lot to say about this, but my thoughts (which I admit others may not agree with) are...


    1. RAID 5: RAID isn't backup, it's about availability: having access to your data 24/7 even if you lose a disk. For home use (I assume that's your use case?), I think the first and foremost requirement to plan for is a good backup plan. For many folks, a good backup plan (with a little down time to replace a failed disk and recover from the backup) is less costly and less maintenance intensive than maintaining the extra disk(s) and complexity of a RAID array running 24/7/365 on the chance that you'll lose a disk. That's not the only argument against RAID 5, but for home use, I think it's a valid one. Personally, I use the MergerFS and SnapRAID plugins to address the availability issue. I think it's a little more flexible than RAID 5 for home use.


    2. Hardware RAID: The drawback of hardware RAID is that you are tied to specific hardware. If you lose your disk controller, you lose your array until you can get your hands on a replacement controller of the same type. In a datacenter, you'll have spares. But are you going to maintain a spare at home? Having to wait for a new controller to arrive defeats the purpose of having RAID. With a software array, if you lose a disk controller or motherboard, you can pretty much drop the disks in another computer and pick up where you left off. I think the big advantage of hardware RAID is faster throughput. But if you have a 1Gb LAN connection, does it matter? Your system is only going to be as fast as its slowest component, and even a software array will be able to keep up with 1Gb networking. Unless you are going to have 10Gb or faster networking? Then the acceleration that hardware brings makes more sense.
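To illustrate the SnapRAID side of point 1, here's roughly what a setup looks like under the hood. This is a hypothetical minimal snapraid.conf; the OMV plugins generate the real one for you, and all the paths and disk names below are invented:

```text
# Hypothetical minimal snapraid.conf (the OMV plugins generate this for you)
parity /srv/disk3/snapraid.parity     # parity file lives on its own disk
content /var/snapraid.content         # metadata, kept in more than one place
content /srv/disk1/snapraid.content
data d1 /srv/disk1/
data d2 /srv/disk2/
exclude /lost+found/
```

After changing files, `snapraid sync` updates the parity and `snapraid scrub` periodically verifies it. Because parity is computed on a schedule rather than on every write, it suits home setups where data changes slowly.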

    I can't answer as to whether this will be good NAS hardware, but I would mention that almost every Chinese company closes for a time around Chinese New Year. The official holiday period is 7 days, but it's not uncommon for manufacturing facilities to be closed for longer, some as long as a month. So I wouldn't take a Chinese company being closed as a sign that they aren't serious. But it is problematic if you really need something from a Chinese vendor at this time of year!

    Will the cost of the upgrade be worth it? ecc used to be the only way to go but now not so much.

    It's important to keep your context in mind. ZFS with ECC was originally designed for, and many would argue works best in, enterprise environments where huge amounts of data are both moved and stored. With huge amounts of data, loss due to mechanical or electrical problems is a "when", not an "if", issue. In a datacenter with a lot of machines and a small number of people, ZFS with ECC is a really great combination for avoiding disaster.


    If you have only one machine, you could go years without a problem. I have absolutely no idea about the OMV userbase, but I'd guess a whole lot of users don't have ECC and they do just fine.


    On the other hand, when a time comes that it would save your data, ECC comes in really handy! Even without ZFS, combining ECC with a journaling file system like EXT4 is still better than EXT4 by itself. It's a matter of deciding how much risk you're willing to take (even if it's low) and how much you are willing to spend to avoid the risk. If you are buying a totally new system, I think you'll always pay a stiff premium for ECC, both because the memory costs more and because the motherboards that use it are usually server grade. But you already have the motherboard, and if you can divert your non-ECC memory and the i7 to another build, then the cost isn't that great.


    If you do use ECC though, I think the best advice is what @tkaiser gave above: keep an eye on the error logs.

    am currently using ZFS with Non-ECC Ram.

    If it's the system shown in your signature, the i7-6700 doesn't support ECC RAM. So, while you can install the ECC RAM and it will probably boot, you won't have any ECC functionality.


    The motherboard does support ECC RAM, but you'd need to swap out both the RAM and CPU to have ECC functionality.

    That's the issue people are talking about with the WD Blue the thread starter mentioned: https://community.wd.com/t/wdi…ue-drives-any-help/193353

    WD Greens were set to park after 5 seconds idle time. By comparison, Reds are set to park after 300 seconds idle.


    There were some quality control issues with Reds some time back, and a number of them shipped with the 5 second setting (as some folks have noted). This caused them to die in no time as NAS/low end server drives, and led to some conspiracy theories online that Reds and Greens were the same, except for firmware settings and warranty.


    "5400 RPM Class Blue" replaced Greens recently (at least in the US), and they also had the 5 second head parking. However, my most recently purchased Blue drive had parking set to 300, and not 5.


    According to the internet, the latest generation drives won't work with WDIDLE3. WDIDLE3 wouldn't work on mine when I tested it, so I have no reason to doubt that's true. I have found that idle3-tools on Linux did work to manage firmware settings on my newer WD drives where WDIDLE3 did not. This was just my experience; I can't guarantee it will work on all WD drives. It seems to me, in just my limited experience with WD, that there's some inconsistency in firmware settings from drive to drive, so there's no telling what will and won't work. I suppose there's a reason why those cost less and enterprise grade drives cost more, and you usually get what you pay for!
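As an aside on idle3-tools: it ships an idle3ctl utility for reading and changing the idle3 (head parking) timer. The sketch below shows the commands I mean (the device path is a placeholder) plus the timer's raw-value encoding as I understand it from the idle3-tools documentation; as noted above, there's no guarantee it works on every WD drive:

```shell
# Hypothetical device path; substitute your actual WD drive.
#   idle3ctl -g /dev/sdX      # read the current idle3 timer
#   idle3ctl -s 138 /dev/sdX  # set it (raw value, see encoding below)
#   idle3ctl -d /dev/sdX      # disable head parking entirely
# Raw encoding: values 1-128 mean tenths of a second; values 129-255
# mean (value - 128) * 30 seconds. So a raw value of 138 -> 300 seconds:
raw=138
echo $(( (raw - 128) * 30 ))  # prints 300
```

A power cycle of the drive is generally needed before the new setting takes effect.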

    mobo and cpu: Asus J3455M-E

    The PCI-E x16 slot on this board functions as a PCI-E 2.0 x1 electrically. So, while the PERC H310 (which has an x8 connector) will fit and will probably function in it, you'll have slow throughput if you connect more than a few drives. If you have a lot of drives, or you don't want bottlenecks with throughput, this may need to factor into your decision.
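A quick back-of-envelope calculation of why the x1 link matters (nominal PCIe figures, ignoring protocol overhead, so real-world numbers will be somewhat lower):

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so one lane
# carries at most 500 MB/s of payload. Split evenly across busy drives:
lane_mb_s = 5000 * 8 / 10 / 8  # 5 GT/s -> 4 Gb/s -> 500 MB/s
for drives in (2, 4, 8):
    print(drives, "drives:", lane_mb_s / drives, "MB/s each")
```

With 8 spinning drives active at once, each would get well under what a modern drive can sustain, which is where the bottleneck shows up.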

    WD LCC issue affects all WD desktop drives that are not used as such in Windows (affects some of their older 'NAS drives' as well). Ignoring this is IMO really not very smart.

    So, I shucked a recently purchased WD Essentials external a month ago, to find a 5400RPM Class Blue (which recently replaced mechanical Greens in the US catalog). The parking timer was set to 300 seconds by default.


    I don't have enough information, of course, to know if it's a change by WD, or if WD isn't consistent with its firmware settings batch by batch and you don't know what you'll get.

    I had a SATA2 (3Gb/s) HBA (an Adaptec 5405Z). Since it had a bit of age on it, I looked into the speed of current drives versus the bandwidth of this controller. I found that consumer drives rated for SATA3 (6Gb/s) can only maintain that transfer rate for a few milliseconds. After that very short burst, a SATA2 controller can easily keep up with drives spinning at 7200rpm or less. For consumer drives, advertising them as "SATA3 6Gb/s" is a sales gimmick. It takes SSDs to get real benefit from SATA3. (Which is why most HBA benchmarking is done with SSDs.)
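A rough sanity check of that claim, using assumed nominal numbers (a typical 7200rpm consumer drive sustains somewhere around 150-200 MB/s from the platters; your drives may differ):

```python
# SATA2 signals at 3 Gb/s with 8b/10b encoding -> ~300 MB/s of payload.
sata2_mb_s = 3000 * 8 / 10 / 8
hdd_sustained = 180  # MB/s, generous estimate for a 7200rpm consumer drive
print(sata2_mb_s, ">", hdd_sustained, "->", sata2_mb_s > hdd_sustained)
```

So a single spinning disk can't saturate even a SATA2 port for sustained transfers; only the short burst from the drive's cache exceeds it.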


    I swapped out the SATA2 Adaptec HBA for the flashed Perc H200, for transparent SMART stats pass-through. Otherwise, 3Gb/s (SATA2) would have been fine.

    Yeah, my Adaptec 5805Z also kept up with the disks just fine. I too swapped it out for an LSI SAS2008 based card (a Supermicro AOC-USAS2-L8I), so I could have IT mode pass-through and access the SMART data. But I also noticed that idle power went down almost 10 watts. The Adaptec (at least my Adaptec) was a big power hog.

    I installed debian x86 netinst following this guide, then installed b43 drivers and wpa_supplicant to try to connect to wifi, I can see the wifis, but can't connect. Then I followed this guide to install OMV 4 on debian 9 and after issuing the last command omv-initsystem I got what you see in the picture.

    I'm not sure what that means. I didn't have any errors when I installed (although I didn't install OMV 4, only 3). I'm not an expert, but I think the "missing firmware" messages are for the wired ethernet, and don't necessarily mean it won't work. It doesn't need that many firmware modules.


    I've gotten the last message before ("mdadm..."), and for me it was a fixable installation issue. If you reboot, press the "e" key when the GRUB boot menu comes up (there's a notice at the bottom of that screen telling you to press "e" to edit and "c" for a command line). That brings you to a screen where you can edit the boot commands. Look for the line that starts with "linux": it should identify the boot drive with a long UUID string. If it doesn't have a UUID and instead lists a device ("/dev/sdb1" or similar), that may be the problem. If it's not "sda" or "sda1" (for example, if it's "sdb" or "sdb1"), try changing the "b" (or "c", or other letter) to an "a" so it becomes "sda" or "sda1", then continue the boot process and see if it finishes booting. If it does boot, you can then log in through the web interface from another computer like you normally would, and apply any pending updates in the "Update Management" section. For me, one of the updates refreshed the GRUB configuration and fixed the issue. This has worked for me multiple times when I've tested different hardware configurations.


    If its something else, then I'm stumped!


    Good luck!

    Sorry I've corrected my previous message it is nadm issues.

    Could the error message be "mdadm"? I see you have a USB stick in your system configuration. If you are using it for your OS, I know that some fresh installs of OMV 4 fail when installing from a USB device to a USB device. This is apparently a Debian installer thing, and not an error in OMV. I have seen, in my case, that the error is an "mdadm" error for a "missing array", even though the systems I installed on had no RAID array.


    I know that sometimes, as @gderf mentions, you'll get a series of errors and eventually the system boots normally. But I've seen myself that with some configurations it won't boot at all.


    One thing to try (and it has worked for me) is as noted above: reinstall OMV 4 with all the data disks disconnected and only the installation USB and the USB drive for the operating system inserted. But after the install is done, remove the installation USB AND leave the other disks disconnected during the first reboot. Boot with just the OS USB flash drive inserted.


    When the GRUB menu displays, there will be a message at the bottom of the screen stating, among other things, to hit the "e" key, to interrupt the boot sequence and allow you to edit boot parameters. Hit the "e" key and the edit screen will come up.


    From the editing screen, look for the line that starts with "linux", which will probably indicate the boot drive as /dev/sdb or /dev/sdb1 or similar, rather than with a long UUID string (which is preferred). If the listed drive isn't "sda" (or sda1 or a similar drive ID), change it to "sda" or "sda1" etc. and continue the boot process (the F10 key does that, I believe). If it does list a UUID, then you probably have a different error altogether.
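For illustration, the "linux" line in the GRUB editor looks something like the following (the kernel version and the UUID here are placeholders, not values from any real system):

```text
# Fragile form, tied to device enumeration order:
linux /boot/vmlinuz-4.x-amd64 root=/dev/sdb1 ro quiet
# Edited by hand so it points at the actual boot disk:
linux /boot/vmlinuz-4.x-amd64 root=/dev/sda1 ro quiet
# Robust form, which a regenerated GRUB configuration normally produces:
linux /boot/vmlinuz-4.x-amd64 root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro quiet
```

The UUID form is preferred because it identifies the partition itself, so it keeps working even if the disks enumerate in a different order on the next boot.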


    If your system does finish booting, you can then log in through the Web UI and apply the updates available in the "Update Management" tab. One (or more) of the updates will trigger an update to the GRUB configuration, and that update should also permanently fix the issue from the installation. Once the updates are complete, you can shut down, connect your data disks, and try booting again.


    Details on this issue (if it is this issue in your case), and information on how to fix it manually with edits to the configuration files, are in this forum thread: mdadm-no-arrays-found-in-config-file-or-automatically. But since some time has passed and there's now an update to OMV (or the underlying Debian) that happens to fix the issue automatically (at least for me) through the Web UI, I've found that easier.