Which SATA Card?

  • Not much point in this thread....In all the years I have worked with Linux - I have never felt like a total dickhead....


    I can't see how this thread can help others because it doesn't help me....


    I will stick to trial and error and do some testing in VMware, because when it comes to getting answers it is better to do your own tests than to rely on others!


    I guess I have outstayed my welcome....no matter...I can live with that....


  • I used the IO Crest 4 Port SATA III PCI-e 2.0 x1 card (Marvell 88SE9215).
    Installed it in the PC and booted... it worked without any issues. It can't be used for the boot drive, though.

    Read the reviews and questions on Amazon for the performance figures. Good info.
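
    In case it helps later readers: a quick sanity check after installing such a card, sketched for a typical Debian/OMV install (the PCI vendor ID 1b4b is Marvell's; device names will of course differ):

        # confirm the controller shows up on the PCIe bus
        lspci -nn | grep -i marvell

        # confirm the kernel bound the standard AHCI driver and the attached disks appear
        dmesg | grep -i ahci
        lsblk -o NAME,SIZE,MODEL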


    Mkpd


  • According to your link above the mainboard has 6 SATA ports, so with 2 cheap eSATA-to-SATA cables you can use 6 disks in total. Then you only need a single- or dual-port SATA card to attach the new 10TB backup drive to keep your data safe. Or do you use an external USB disk for this purpose, or even another host that you transfer backups to?


    BTW: 'mdadm raid 10' sounds like a very special use case. What's the use case for this array?

  • Hi,


    I'm not sure about the RAID level 10 construct ... it probably causes less CPU utilization than RAID6, but with the disadvantage of having to add drives in pairs instead of only one at a time as with RAID6. The overall speed remains roughly the same, and so does the redundancy.
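
    To make the difference concrete, a minimal sketch with mdadm (the two create commands are alternatives, and the device names /dev/sdb ... /dev/sdf are just placeholders):

        # 4-disk RAID10: striped mirrors, later grown in pairs of drives
        mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # 4-disk RAID6: double parity, can later be grown one drive at a time
        mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mdadm --add /dev/md0 /dev/sdf
        mdadm --grow /dev/md0 --raid-devices=5    # reshape onto the fifth drive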


    Anyway, I use the IOcrest controllers too; look at the thread Which RAID or HBA card ... there you'll find all the necessary information and links (even to Amazon - look at the warehouse deals in Sweden).


    Sc0rp

  • the advantages of Raid 10 are enticing....

    When used with only 2 drives I agree (yes, almost nobody knows this, but mdraid can implement RAID 10 with just 2 drives and takes care of both redundancy and speed). I can imagine some rare use cases where I would be interested in this, but with 4 drives, and as in your case with 'cold data', I would more probably go with RAID5 instead if I were to use anachronistic RAID at all (which I don't do since it's 2017).
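
    For the curious, a minimal sketch of such a 2-drive mdraid RAID 10 (placeholder device names; the 'far 2' layout is one common choice that stripes sequential reads across both mirrored drives):

        mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sdb /dev/sdc
        cat /proc/mdstat    # should report a raid10 array with [2/2] [UU]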


    I care about data integrity and that's why I use ZFS or btrfs where possible (they do checksumming, they support snapshots, they allow for transparent file compression).


    If I understand correctly you're storing 'system images' made with Clonezilla on this host? For such use cases we rely on ZFS/btrfs, maximum compression and snapshots/clones.


    Given an image is 100 GB in size, I let the imaging software do NO compression at all and leave that to the filesystem on my archive server (usually such images then need just 50 GB -- most probably even less -- on disk with maximum compression). When I need to store a new image I create a snapshot, turn it into a clone and let the imaging software overwrite the image that's already there (the imaging software still stores it uncompressed). If for example 10 GB have changed in the meantime and compress nicely, I need only about 5 GB more real disk space for this new 100 GB image (since compression happens at the filesystem layer and, since I let the software overwrite the raw image, only those blocks/sectors that really changed need more space on disk).
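
    Sketched with ZFS and made-up pool/dataset names (the same idea works with btrfs subvolumes and snapshots), the workflow described above looks roughly like this:

        # one time: let the filesystem do the (maximum) compression instead of the imaging tool
        zfs set compression=gzip-9 tank/images

        # before overwriting with a fresh image: keep the old version around
        zfs snapshot tank/images@2017-10
        zfs clone tank/images@2017-10 tank/images-2017-10    # old image stays browsable

        # now let the imaging software overwrite the raw image file in tank/images --
        # only the blocks that actually changed consume new (compressed) space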


    So with 55 GB of physical space on disk I can already store 2 x 100 GB images. And I get versioning for free. And I get the ability to implement REAL backup for free (sending the snapshots to another machine where versions are also kept). And if I use btrfs RAID1 or zmirrors on reliable hardware with ECC DRAM I also get data integrity and even self-healing for free.

  • Re,


    I assume that @tkaiser is too much of a "pro" :D - I wouldn't recommend ZFS for normal use at all. Basically you have to learn how ZFS is organized and what you have to do to extend your box with additional drives - the layout of ZFS is often misunderstood, which leads to most of the user complaints along the lines of "how can I add a drive to my ZFS?" (it is not trivial).
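
    To illustrate what "not trivial" means, a sketch with placeholder pool/device names (valid for the classic ZFS of that era - a raidz vdev cannot simply be widened by one disk):

        # growing a pool means adding a whole new vdev ...
        zpool add tank mirror /dev/sdd /dev/sde    # new mirror vdev, capacity grows

        # ... or attaching a disk to an existing one to form/extend a mirror
        zpool attach tank /dev/sdb /dev/sdc

        # 'zpool add tank /dev/sdf' on a redundant pool would instead add a single,
        # unprotected vdev -- a classic newbie mistake that is hard to undo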


    BTRFS is new, and its RAID code is not "production ready" - I'm waiting for that. I've been using RAID for about 20 years and I'm a little bit "scared" of ZFS because of its high complexity (btw. all my boxes have enough ECC RAM). XFS has done well for me for a long time, but it needs an underlying RAID (software or hardware) for big block devices; I also tried LVM2 in the past. My "new" NAS box works absolutely fine with EXT4 ... so the next level for me will be BTRFS ...
    (btw. ZFS was developed for big datacenters - how many "normal" OMV users have a big datacenter, or even have one in mind?).


    The greatest advantage of BTRFS and ZFS is getting rid of the additional (and nowadays unneeded) RAID layer - COW, snapshots and other pro features are not a priority for "normal" users, I think - energy saving with HBAs, short boot times (no RAID firmware to load) and better STR/STD (suspend-to-RAM/disk) compliance count for more in this environment, I think.
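
    For readers who wonder what "no extra RAID layer" looks like in practice, a sketch with placeholder device/mountpoint names - redundancy is declared to the filesystem itself, and a scrub uses the checksums to verify (and, with a second copy available, repair) the data:

        # btrfs: RAID1 profile for data and metadata directly on the raw disks
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
        btrfs scrub start /srv/pool    # after mounting: verify/repair via checksums

        # ZFS equivalent: a mirrored pool ('zmirror')
        zpool create tank mirror /dev/sdb /dev/sdc
        zpool scrub tank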


    Sc0rp

  • There is a good guide about vdevs, zpools, ZIL and L2ARC and other newbie mistakes in the FreeNAS forum.


    The info shown there is useful regardless of the OS used, because it covers how ZFS itself is implemented (it is not OS-related).
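
    For context, ZIL and L2ARC are optional vdevs added on top of an existing pool - roughly like this (placeholder names; a separate ZIL device is usually called a SLOG, and both normally live on fast SSD/NVMe partitions):

        zpool add tank log /dev/disk/by-id/nvme-slog-part1      # separate intent log (SLOG)
        zpool add tank cache /dev/disk/by-id/nvme-l2arc-part1   # L2ARC read cache
        zpool status tank    # shows the layout: data vdevs plus 'logs' and 'cache' sections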

  • ECC Memory is the only way to go for safety


    ECC memory is for data INTEGRITY. Those small HP Microservers featuring ECC DRAM with 4 drive bays (or the older ones which allow attaching even 6 drives) aren't expensive, and since you again pointed out that you're only wasting your time getting answers that require you to waste even more of your valuable time reading/understanding documentation...

  • Just for the record (mostly for other readers who stumble across this thread later)

    • mdraid1 as well as mdraid10 provide ZERO data integrity features (they don't care about data mismatches between two copies, and even if they did they couldn't decide which copy is the correct one when a mismatch occurs - see the sketch after this list)
    • btrfs raid1 as well as ZFS zmirrors provide data integrity and SELF HEALING capabilities with a similar waste of resources (100% extra disk capacity for redundancy), since they use checksumming and can therefore deal with corrupted data as long as only one copy is damaged (disk failures, cable/connector problems between controller and disks)
    • RAID10 is nice if your array suffers from too low random IO performance (IOPS) and/or too high latency (if you don't know what IOPS means you're not affected anyway). When dealing with 'cold data', as approx. 99% of OMV users do, this is just a pretty nice waste of resources compared to RAID5 and even RAID6, since the latter variants provide some DATA INTEGRITY features (parity information is calculated across all data blocks, so regular scrubs are able to detect and also REPAIR silent data corruption)
    • RAID10 adds some complexity, unlike RAID1. Folks who hate dealing with technology and avoid testing efforts are therefore best served by a setup with as little complexity as possible
    • All the higher RAID levels (this also applies to raidz, raidz2, raidz3) add a lot more complexity, which, combined with users blindly trusting in technology, not taking the time to understand the basics and neither testing regularly nor even initially, is a great way to lose all the data stored on an array. If you're the type of guy who blindly trusts that things work as they should and doesn't test... your only hope is working backups (questionable, since users who don't test have no working backups by design)
    • ECC DRAM is NOT a requirement for data integrity. It's just 'nice to have', and if you already spend a lot of money on your whole NAS setup the few extra bucks to add some redundancy that copes with data corruption at the lowest layer are always worth the cost
    • The scrub of death is a myth, nothing more, nothing less
    • The '1 GB of DRAM per 1 TB of storage' ZFS story is just the same, though it is a useful recommendation for novices who hate to deal with technology or can't afford some time to get familiar with the basics (e.g. when and why lots of RAM is mandatory for ZFS and when it's just 'nice to have', since more DRAM simply means more speed in use cases 99% of OMV users aren't affected by anyway)
    • Backup and data safety/protection is not related to anything listed above. This is something totally different, but modern approaches like btrfs and ZFS not only provide data integrity features as explained above but of course also help here (since they were designed in this century and not 30 years ago like everything else). Regular snapshots with btrfs and ZFS, and the ability to send them to another disk, host or location in another city or country, already allow you to reach a pretty high data protection/safety level using freely available filesystem features (see the send/receive sketch at the end of this post)
    • A 'working backup' is only defined by a regularly tested restore that finishes within the defined/allowed time and is intact (that's important, but it's useless to talk about since this is a matter of experience). If the backup also includes full systems or system files then 'working' is defined by a tested disaster recovery operation that finishes within the defined/allowed time (!).
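
    The sketch referenced in the first bullet above, for anyone who wants to see the mdraid limitation for themselves (the md device name is a placeholder):

        # ask md to compare the copies of a RAID1/RAID10 array
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat                      # wait until the check has finished
        cat /sys/block/md0/md/mismatch_cnt    # non-zero means the copies differ, but md
                                              # cannot tell you which copy is the correct one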

    How to deal with this? It depends: on skills, understanding, use cases, hard and soft requirements, the amount of money to throw at a problem and so on and so on. There is no 'one size fits all'.
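
    The send/receive sketch mentioned in the list above (hostnames and dataset names are made up; btrfs offers the same via btrfs send/receive):

        # full initial copy of a snapshot to another host
        zfs send tank/images@2017-10 | ssh backuphost zfs receive backup/images

        # later: only the delta between two snapshots travels over the wire
        zfs send -i tank/images@2017-10 tank/images@2017-11 | ssh backuphost zfs receive backup/images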

  • A cheap SATA card that allows future expansion with port multipliers and works with OMV is any card (PCIe is the cheapest I've found) with a Sil3124 chipset. It passes SMART through port multipliers, which is a key factor in my opinion, and has decent throughput.
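
    For reference, checking that SMART really makes it through the multiplier is as simple as pointing smartctl at the individual disks (placeholder device name):

        smartctl --scan        # list the drives the system currently sees
        smartctl -a /dev/sdd   # full SMART report for a disk behind the multiplier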

    Fan of OMV, but not a fan of over-complication.

  • heh, I will be testing the PCI-X card in a couple of days. I'm a big fan of server-grade, but old, kit.


    Sorry, as always the devil is in the details.


    I haven't tested the PCIe version so I couldn't give a link; the ones I have are several years old, are PCI, and cost £100 each.


    What's the limitation of PCI32 based controller cards?

    Fan of OMV, but not a fan of over-complication.

  • Re,

    What's the limitation of PCI32 based controller cards?

    It's bandwidth in modern computer environments (32-bit/33 MHz PCI tops out at roughly 133 MB/s shared across the whole bus, while a single PCIe 2.0 lane already offers about 500 MB/s and one SATA III drive alone can move up to 600 MB/s), it's power consumption and the lack of slots on modern mainboards ... and not least the general disadvantages of an "old" bus design.


    Don't get me wrong, I'm a fan of the SiL3124 chip, since it was a very cheap solution for additional SATA and eSATA ports, but given the lack of PCI32 slots and the overall performance I'm now a fan of the Marvells :D (PCIe based).


    Sc0rp

  • It is a shame that you and many others like you on the web are good at blowing their own trumpet but not prepared to really help those asking

    Sure. First-class unpaid personal technical support is the future, especially if the one asking ignores everything that has already been answered multiple times. You might not have realized that, after my first few answers to your questions all got totally ignored, I'm no longer answering your questions at all. I just add some remarks for future readers who stumble across threads like this.


    It's really no wonder that most regulars avoid threads like this, thinking 'it's your future data loss drama, not mine' :)
