Posts by Cpoc

    That's what makes OMV so special: it can be used as a low-end system or a very high-end system. I agree that the core OMV should be small and bare and expanded via plugins and scripts. Should ZFS be included in the core? The answer is no. Should there be a ZFS plugin? The answer is yes. ZFS is not for everybody. However, BTRFS is not production ready and should also be a plugin. This way, if there are bugs, it's a lot easier to fix than if it were included in the core system. In the future BTRFS will be the default Linux file system, it's just not ready yet.


    As for myself, I may end up upgrading my system very soon, and I may stick to software RAID just because the new motherboard does not have hardware RAID and I already have an HP SAS expander and an HBA card. So I will be sticking with software RAID, mdadm and LVM. It works really well.
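    For anyone curious, this is a minimal sketch of that kind of layout; the device names and the vg_storage/lv_data names are just placeholders for illustration, not anything OMV sets up for you:

        # Create a 4-disk software RAID 5 array (placeholder devices)
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # Put LVM on top of the array and format it with ext4
        pvcreate /dev/md0
        vgcreate vg_storage /dev/md0
        lvcreate -l 100%FREE -n lv_data vg_storage
        mkfs.ext4 /dev/vg_storage/lv_data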


    The point of all this is choice. That's what OMV gives us. Now my decision may not be the best or the fastest, but I trust ext4, mdadm and LVM and I have been using them for years with no issues. What Volker is doing is right. By making changes to the core system's back-end storage, he keeps OMV flexible and gives us choice. No commercial NAS can do that.

    I looked at SnapRAID and it's amazing. It can do what ZFS can do but with up to 6 parity drives, and what's really amazing is that it supports different-sized drives. Even unRAID can't do that. If we can get a GUI plugin as well, it would make OMV the best open-source platform out there. I believe Volker should build the back end which allows all of these plugins to work. More choices = more users, more users = more devs, which equals more donations.
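    To give an idea of how little there is for a plugin to generate, here is a rough sketch of a snapraid.conf; the mount points and disk names are just examples I made up, not OMV defaults:

        # One parity file per parity drive (SnapRAID supports multiple parity levels)
        parity /mnt/parity1/snapraid.parity
        2-parity /mnt/parity2/snapraid.2-parity

        # Content files, kept on more than one disk for safety
        content /var/snapraid/snapraid.content
        content /mnt/disk1/snapraid.content

        # Data disks, and they can all be different sizes
        data d1 /mnt/disk1
        data d2 /mnt/disk2
        data d3 /mnt/disk3

    After that it's basically snapraid sync to update the parity and snapraid scrub to verify the data.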

    Yes, I see that all the time. They see AMD64 and they think it's for AMD CPUs only. Since i386 has PAE, it works fine with more than 4 gigs of RAM. The first time I saw that I thought the same. It's just called that because AMD came up with the 64-bit extensions before Intel did.
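    If anyone wants to check what their CPU actually supports, the flags in /proc/cpuinfo tell you; "pae" means PAE is there and "lm" means the CPU is 64-bit capable:

        grep -E -o -w 'pae|lm' /proc/cpuinfo | sort -u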

    I agree with Volker that we should drop 32-bit for 0.6 and see how that goes. I mean, who is still running 32-bit systems for servers? As for ZFS, it should not be in the core but offered as a plugin. I won't be using ZFS but I'm sure that many will. The same goes for BTRFS; it should also be a plugin. This way the core system is kept small and all add-ons are added via plugin or script.

    What really gets me is how little info/press OMV gets. Just do a Google search for openmediavault review and you will see reviews that say OMV can't be used by installers, even though Volker changed the license to GPL3, but there is no update on the review to reflect the change. You have another review that says Openfiler is much better than OMV, when Openfiler is pretty much dead and riddled with bugs. I know because I have used it, and it does not even come close to OMV. With bad press like that, no wonder we have so little dev support. It's amazing when you think how stable OMV is with 1 main dev and 3 plugin devs. I mean, I have zero issues with OMV, and when there is an issue it's fixed in days, not weeks or months. We need to get the word out; at the very least, fully supporting ZFS via a plugin would start to get OMV some needed attention.


    OMV would be the NAS with the most choices, from LVM to mdadm RAID to BTRFS and ZFS. The bad press and reviews need to stop. OMV can hold its ground and then some; people just don't know about it.

    I am writing this thread because I would like to give my thoughts on OpenMediaVault's future. I have been personally using OMV since 0.2 and I love it. It's easy to use and you can add features via official plugins, third-party plugins and scripts. However, I'm quite disappointed that OMV has gotten little to no press and exposure. It's one of the most stable open-source NAS systems out there. I know because I have tried them all. It's based on stable Debian, so adding programs is pretty straightforward.


    The only problem is that most of the dev work is done by 1 man. We need to get more devs on board, which means more exposure for OMV. I have a few ideas, and it's up to Volker to decide whether to use them or not.


    First of all, I think for version 0.6 and onward we should drop 32-bit support and only use 64-bit. Now, when OMV was first released as version 0.2, I wanted 32-bit support while Volker and others wanted 64-bit only. That was then, this is now. We are almost at 2014 and it's time for 32-bit support to go. You can pick up a socket 771 board for like $20.00 on eBay, and even the 771 systems all support 64-bit. If you are still running 32-bit only, then it's time to upgrade.


    I believe OMV should support ZFS. Now, I know that Volker said that ZFS would not be supported and BTRFS would be used instead, but I think this needs to be changed. As for myself, I will be using hardware RAID so ZFS is out for me, but there are a lot of users that would switch to OMV if it had ZFS support. Installing the ZFS back end is very easy thanks to a project called ZFS on Linux: http://zfsonlinux.org/debian.html
    BTRFS is not ready for production; in fact, not even all the features are implemented. Also, BTRFS is very dependent on the Linux kernel, so an older kernel will not have all the new BTRFS features and bug fixes. ZFS on Linux as of 0.6.1 is production ready and works really well; from what I have read it's now at version 0.6.2. All it needs is the GUI interface for OMV. When BTRFS is ready for production I believe it will be the default file system for Linux, but it is far from there yet. Offering ZFS on OMV will bring many new users and hopefully a few devs as well. There are lots of users that want to use ZFS but want Linux as the OS and not one of the BSDs.
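    Just to show how little a plugin would need to wrap, this is roughly what creating a pool looks like once ZFS on Linux is installed; the disk names and the pool name "tank" are only placeholders:

        # Create a RAID-Z2 pool from four disks (can lose any two)
        zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # Create a dataset and turn on lz4 compression
        zfs create tank/media
        zfs set compression=lz4 tank/media

        # Check pool health
        zpool status tank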


    Any thoughts or feedback?

    In reality I only need 200 megs per second; anything more than that is gravy. I am planning on bonding 2 gigabit NICs, hence the need for 200 megs. I would use 5 disks, but 4-disk arrays are just much easier to work with in rack-mount cases. So if I get 500 megs per array with the hardware RAID, I'm happy; that's more than I will ever need. That box would easily handle about 8 users, because that is the max I'll ever have at the same time.
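    For reference, this is roughly what a two-NIC bond looks like in /etc/network/interfaces on Debian with the ifenslave package installed; the address and interface names are just examples, and the bond mode has to match what your switch supports:

        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            bond-slaves eth0 eth1
            bond-mode 802.3ad
            bond-miimon 100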

    I can live with that. It seems that a 2048 KB chunk size is much better than a 512 KB one. Like I said before, with 5 disks the math does not work and it's harder to keep track of which disks are in which array. With a 4-disk system, each row is an array. I do mostly reads because it's a storage NAS. I'm sticking with 4-disk arrays and LVM management. I'm not too worried about write performance, but I'm sure a 5-disk array would be faster than a 4-disk one. Someone told me this a long time ago; he said a 9-disk array was the fastest of them all.

    As for my 270 megs, that's write speed from one RAID array to another. I tested this with Linux mc. My old hardware RAID setup would max out at 250 megs, again tested with mc. My new hardware RAID should get between 500 and 600 megs because it has a 1 gig DDR3 RAM cache. As for network speed, I can always do some network bonding. I have a layer 2 managed switch and it supports bonding, so that's not an issue. I'll keep my 4-disk RAID layout; it just makes everything so much easier, because 4 disks works with my 16-bay SAS expanders as well, while with 5 disks the math does not work. Each row has a RAID 5 setup. It just works.
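    A quick way to double-check those numbers without copying real data is a dd test against the target array; the path is just an example, and conv=fdatasync makes sure the cache is flushed before the speed is reported:

        dd if=/dev/zero of=/mnt/array2/ddtest bs=1M count=4096 conv=fdatasync
        rm /mnt/array2/ddtest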

    I've never had an issue with a 4-disk RAID system. I can always use a 5-disk RAID system when I switch to hardware RAID. Do you have a link that shows this about 4-disk RAID systems? As of now my drives are very cool, anywhere between 22 and 28 degrees, so no extra cooling is needed. As long as I know where all the drives are, it won't matter if I switch to a 5-disk RAID system. I have been using RAID with 4 disks for a long time with no problems.
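    For anyone who wants to keep an eye on drive temps the same way, smartmontools can read them straight from SMART; the device name is just an example:

        # Temperature is usually attribute 194 (Temperature_Celsius)
        smartctl -A /dev/sda | grep -i temperature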


    Oh, and I forgot to mention the third factor that kills hard disks. So for a recap we have:



    1. Heat
    2. Dead air (no air flow)
    3. Shock


    Yup, you get lots of DOA drives because they got damaged in transport or from being dropped.

    In my NAS setup I do 80% reads and 20% writes, so I'm not too worried about the 4-disk issue. Like I said, it's just easier to remember that each row in my case is a RAID 5 setup. I have a chart with every serial number, so if a disk goes bad I know which one it is. So far it's been just over 1 year and no disks have failed. In my older setup I had 2 RAID 5 arrays in hardware RAID running for just over 5 years without any hardware failures.

    The best you can do for platter disks is to have a good quality power supply and make sure the disks run as cool as possible. In my old case I had 13 drives in a tower and needed the side cover removed, and in the summer I also ran a fan to keep the drives cool. Heat is the #1 killer of platter drives. The #2 killer is poor air circulation, what they call dead air; without air flow you get heat, the #1 reason why drives die. Also, in my old setup all 13 drives used 5 1/4 to 3 1/2 adapters, so there was spacing for air flow between each drive. I never install a hard disk in a 3 1/2 slot unless it's the only disk in the case. If you have lots of disks, it's best to use rackmount 4U cases, as they are designed with proper air flow in mind compared to most tower cases.

    There are 3 things to back up if you use LVM:
    /etc/lvm
    /etc/mdadm
    /etc/fstab
    If you are using hardware RAID, then there is no need for /etc/mdadm, but /etc/lvm is still required. You can get away with not having /etc/fstab because you can get the UUID info from /etc/lvm. I do daily backups of all 3 to a USB stick with the USB backup plugin. It works great. I'm still using software RAID in my current setup at about 270 megs per second. Not bad for software RAID.
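    If you don't want to rely on a plugin, a one-line cron job does much the same thing; the USB mount point here is just an example:

        tar czf /media/usb-backup/nas-config-$(date +%F).tar.gz /etc/lvm /etc/mdadm /etc/fstab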

    Here is a PDF directly from WD: http://www.wdc.com/wdproducts/…Sheet/ENG/2879-771442.pdf. These WD drives are designed to run 24/7, but they are consumer drives, not data center drives. They are built to run a bit cooler because many people use them in DVR/CCTV setups and small NAS cases, but they are not designed to run in large RAID setups. LVM is just a container that holds the info on where the partitions start and end, nothing more; the data protection is handled by the RAID, soft or hard. LVM gives you the flexibility of growing a partition very easily. The code base is very stable, and even virtualization platforms use it for their back-end storage. If it's good enough for them, it's good enough for me. I would not use WD Reds in a large array setup, as they are not designed for that purpose. As for the 4-drive setup, I'm not too concerned about the speed because it's a storage NAS. If I wanted speed and data protection I would use an 8-disk RAID 10, which would give 8 times the read and 4 times the write, and use SSDs instead of platter disks. RAID 5 is the cheapest method with some data protection and maximum disk space.
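    Growing a volume really is that easy; on ext4 something like this is all it takes (the volume names are just examples):

        # Add 500 GB to the logical volume, then grow the file system to match
        lvextend -L +500G /dev/vg_storage/lv_data
        resize2fs /dev/vg_storage/lv_data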

    Another reason is that RAID 5 can only tolerate 1 drive failure per array. The more drives you have, the greater the chance that 2 drives will fail at the exact same time. Keeping the array small, like 3 to 4 drives, means a lower chance of 2 drives failing at the same time compared to a 16 or 24 drive array. I chose 4 drives because I use a Norco case which holds 20 drives. Each row holds 4 drives, so it's very easy to manage. Every row is a RAID 5 array, then I manage all the arrays with LVM.
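    In practice that just means each new row becomes its own md array and then joins the same volume group, something like this (the device and group names are placeholders):

        # The next row of 4 drives becomes its own RAID 5 array
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

        # Add it to the existing volume group so the space can be used anywhere
        pvcreate /dev/md1
        vgextend vg_storage /dev/md1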