FreeNAS Convert, New to OMV - Just a JBOD OK?

  • Hi everyone,


    Hope this forum is as friendly as everyone says. Looks good so far. ;)


    I'm very technical, but I've only gotten into the world of servers in the past few years. Instead of sharing everything out via my desktop, I decided to have a dedicated machine for my home media server to handle sharing, downloading, storage, etc. So, this server is very much a media unit serving up audio and video content, mostly to XBMC machines around the house. For the past 2-3 years I've used FreeNAS, but I'm quickly realizing it's more than I need and it's time for a change.


    I don't need backups of all my video, so I don't have a lot of interest in redundancy (I'll back up music and photos myself). FreeNAS was so big on ZFS, but I never used anything but JBOD anyway, and FreeNAS seems needlessly complex and bloated at times, so here I am.


    So, quick question - is it easy for me to set up my new OMV box as a JBOD (lose one drive, lose only that drive's data), and what filesystem would you recommend for the drives? I'm going to have to manually copy files over and blow away the drives to get away from ZFS, so it'll take time; I just want to make sure I'm using the best format on my 7 hard disks.


    Thanks, very excited to move into OMV.


    B.

    • Official Post

    Yup... JBOD is a piece of cake. Just create a filesystem on each drive and they will mount under their unique UUIDs.


    Personally, my server is JBOD.. and I use rsync to sync my main data "Drive A" (that I use to feed services, etc.) to my backup "Drive B". Works perfectly, and no RAID 1. :)


    If you're formatting the drives fresh.. ext4 would probably be the filesystem to use.
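
    For reference, a rough sketch of what that looks like from the shell. The device name and mount paths below are just made-up placeholders, not anything OMV creates by default:

        # Create an ext4 filesystem on a new data drive (OMV's web UI can do this
        # for you; this is just the equivalent command-line step).
        sudo mkfs.ext4 -L data-a /dev/sdb1

        # One-way mirror of the main data drive onto the backup drive:
        # -a preserves permissions and timestamps, --delete removes files on the
        # backup that no longer exist on the source.
        sudo rsync -a --delete /media/drive-a/ /media/drive-b/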

  • There is a ZFS plugin for OMV in the works, so you could use your drives without reformatting them once the plugin is out.
    @Aaron, tekkb etc., please correct me if I'm wrong, thanks.


    I'd recommend the following:
    - ext4 for the drives
    - pooling the drives with AUFS or mhddfs (no guide yet, but it's self-explanatory; see the sketch after this list)
    - creating parity with SnapRAID on 1 or 2 dedicated drives -> Link
    - for "backup" you can use Greyhole -> Link

  • Wow! You all are friendly AND fast. Thanks.


    ext4 would be fine. I don't mind ditching ZFS that much, as I don't know how much benefit I was getting from it in a JBOD anyway, and from what I understand it's a resource hog - probably more so than ext4.


    As far as pooling with AUFS or mhddfs goes, there's still no risk of data loss across drives if one develops issues, right? Lose one drive, just lose the data on that drive, not on the others? That's what I'm going for.


    Thanks for the other links, reading and soaking in as much info as I can!


    The hardware to build the box will arrive this weekend. The only thing I haven't bought yet is RAM. I'm building in a Mini-ITX case, so I only have 2 RAM slots and only some old 2 GB DDR3 sticks around; I'm not sure if 4 GB will be enough, and I'm thinking I need at least 8 GB to be effective.

  • Yup... JBOD is a piece of cake. Just create a filesystem on each drive and they will mount under their unique UUIDs.


    Personally, my server is JBOD.. and I use rsync to sync my main data "Drive A" (that I use to feed services, etc.) to my backup "Drive B". Works perfectly, and no RAID 1. :)


    If you're formatting the drives fresh.. ext4 would probably be the filesystem to use.


    Now I'm getting curious, though: how is formatting many drives and mounting them in different folders JBOD? :) As far as I know, JBOD is a bunch of disks (of various or the same size) combined into a single volume, which means it doesn't have any redundancy.

  • You know what, you're right... I completely misspoke in using the term JBOD then. My stupidity showing.


    Yes, so I'm not even doing JBOD then, but instead just formatting and mounting separate drives that work independently, so that if one fails, there is no data loss on the others.

    • Official Post

    I can't really answer your questions about the drive pooling, as I don't do it... but as far as the specs go...


    You don't say what your CPU is, but if the media does not need to be transcoded.. I would think 4 GB is plenty. If your media needs to be transcoded... then obviously, more CPU and more RAM will probably make things significantly smoother.


    None of my movies need to be transcoded, and I'm running a Celeron 1610 with 8 GB of RAM that rarely gets above 10% and frankly works perfectly. If I try to transcode, the CPU spikes to 100% and stays there, the movie stutters, etc. I suspect that if I upgraded to a better CPU (maybe an i3 or i5) this would be a non-issue.

    • Official Post


    Now I'm getting curious, though: how is formatting many drives and mounting them in different folders JBOD? :) As far as I know, JBOD is a bunch of disks (of various or the same size) combined into a single volume, which means it doesn't have any redundancy.


    Actually, I'd always understood "JBOD" to be "just a box of disks"... basically, they were all represented as being independent of each other. (Apparently it actually stands for "just a bunch of disks".)


    JBOD (derived from "just a bunch of disks"): an architecture involving multiple hard drives, while making them accessible either as independent hard drives, or as a combined (spanned) single logical volume with no actual RAID functionality.


    So it appears this is one of those situations where everyone is right.. :)

  • Well, isn't that great? :D I'd never heard the term used for multiple disks running independently, though; I didn't think there was even a term for that!

  • I'd never heard the term used for multiple disks running independently, though; I didn't think there was even a term for that!


    Same here, but a friend of mine recently showed me that a JBOD can also present each disk individually. ;)


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Thanks, you are all great. This is enough to get me started. I'm going to gradually (one drive at a time, starting with a new empty drive) copy my ZFS content over to an ext4 drive, then use THAT source drive as the next destination drive, and so on, until I've piggybacked them all over into the new box.


    I know everyone says ZFS is the stuff; however, I didn't get a great benefit from it in my configuration, and overall I think it lends itself to more potential issues than help when just running independent disks as a JBOD. iJBOD? :)

  • Can I trouble you folks for one more thing?


    So I'm running a copy over SMB from my FreeNAS box (ZFS drive) to an ext4-formatted USB external drive (which I will eventually crack open and use in the OMV system), under a Debian live CD.


    It's running pretty slow, 26 MB/s or so. Going to take over a day to move the data.


    All that said, I can be patient, but is there a way I can verify that the files all copied correctly before I blow away the source drive? Maybe I should be doing this copy another way that allows integrity checks?

  • As you don't have any other hardware, one option would be eSATA.
    If you have another PC, put the ext4 drive in that PC (booting a live CD if you don't have Linux running) and use rsync to copy the files. Then you can be sure the files are copied correctly.
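
    Roughly, that rsync workflow could look like the following (the two mount points are just placeholders for wherever the source and destination drives happen to be mounted):

        # Copy everything from the ZFS source to the ext4 destination,
        # preserving permissions and timestamps, with progress output.
        rsync -avh --progress /mnt/zfs-source/ /mnt/ext4-dest/

        # Verification pass: --checksum re-reads and checksums every file on
        # both sides, and --dry-run only reports differences without copying.
        # If nothing is listed for transfer, the two trees match.
        rsync -avh --checksum --dry-run /mnt/zfs-source/ /mnt/ext4-dest/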

    Thanks guys. I started over using rsync to copy the content from the ZFS drive over to the ext4 drive, connected externally to another PC running a live Debian CD. It's connected via USB 3 ports but still pretty slow, so for the next copy I think I'll rip the drive out of its enclosure (actually, the next drives will already be out of their enclosures), connect it directly via SATA, and that should go faster.


    Moving 15 TB of data to a new filesystem and a new server is just going to take a while, since ZFS doesn't offer a lot of easy conversion paths. :)
