Posts by Hellspawn

    This may be more of an extension/enhancement to the links* plugin, but I keep wanting a way to add links to the various web interfaces for services on the OMV login page. Or perhaps simply allow something like accessing myomvserver/mylinks to skip having to log in and navigate the OMV WebGUI first.


    *edit: correction, originally mentioned website plugin, but meant links

    FYI, I hate to make more work for you, but the latest SnapRAID is currently v5.1. I assume you are compiling SnapRAID from source for the plugin, and from what I can tell GitHub shows v5.0 for the binary. Unfortunately, I am unable to check this on my test OMV VM at the moment.


    Thanks again for the work on this plugin. I hope to find time to go over the plugin development posts, then perhaps I can take a stab at contributing more directly to OMV.

    Any PVR solution would have to provide a similar or better experience than MythTV for me to switch. Aside from the challenges of configuration, which include installing some X libs (not a full X server or GUI) to support running mythtv-setup over SSH with X forwarding, MythTV has been very stable and enjoyable to use. In any case, the eventual move to Wheezy with OMV 0.6 will likely be a big help to all of the options on the table.

    As ryecoaaron said, Plex and minidlna are servers that stream to clients using DLNA rather than relying on NFS/SMB. They allow you to consolidate multiple sources under each "library" (e.g. TV, Movies, Music) you create.


    Quote from "ryecoaaron"

    I am changing snapraid to add its own samba share for the pool just like greyhole adds samba shares. I will also add a button in the commands tab to execute the pool statement. All of this is why I am working on an aufs plugin which would be a read/write pool. Then you wouldn't need the snapraid pool.


    Awesome news ryecoaaron. I have played around with aufs a bit and I like that it supports pretty much any underlying filesystem without introducing the CPU overhead of mhddfs (a FUSE filesystem). I look forward to trying out the plugin. Currently, I think aufs is the best alternative to something like Greyhole: they provide quite similar functionality, but aufs is a lighter-weight option that doesn't require dependencies like MySQL and also allows NFS (Greyhole is SMB only). I will have to spend some time to see where the real tradeoffs are.

    I believe sharing the SnapRAID pool should be as simple as setting the pool path in your snapraid.conf to match that of a shared folder used by an SMB/CIFS share previously created through OMV. At least I think that's how I got around having to fight with OMV overwriting my config file. The pool command simply creates symlinks to what is in your pool, and as long as SMB is set to follow symlinks then all should be well.
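To make the idea concrete, here is a rough sketch of what the two config files might look like. All paths and the share name are hypothetical; OMV normally generates the smb.conf share section itself, so this is only for illustration. The `pool` directive is from the SnapRAID manual, and `follow symlinks`/`wide links` are standard Samba options (`wide links` is needed because the pool symlinks point to other drives, outside the share, and it generally requires `unix extensions = no` globally):

```
# /etc/snapraid.conf (hypothetical path)
pool /media/mydisk/snapraid-pool

# smb.conf share section (illustrative only -- OMV manages this file)
[mediapool]
    path = /media/mydisk/snapraid-pool
    read only = yes
    follow symlinks = yes
    wide links = yes
```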


    I have basically tried this once before and it seemed to work OK from what I recall. The biggest drawback is that the pool command needs to be issued any time the contents of your pooled drives change, otherwise the symlinks become outdated. This adds a manual/scheduled process and takes away from the usefulness of the feature. Additionally, an SMB/CIFS share of the SnapRAID pool is safe to use as read-only, but writes may be a gamble since this does not give one sufficient control over where new files are actually stored on the pooled drives. This is why I instead prefer to use Greyhole to manage the pooling side and SnapRAID to protect my data. This creates an extremely flexible but still robust NAS storage environment.
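The staleness problem is easy to demonstrate. Below is a small, self-contained Python sketch (hypothetical paths, not SnapRAID itself) that mimics what a symlink-based pool does: the pool directory only holds symlinks, so if a file migrates between data drives without re-running the pool command, the symlink goes dangling:

```python
import os
import tempfile

# Simulate two data drives and a symlink-based pool directory.
root = tempfile.mkdtemp()
drive1 = os.path.join(root, "drive1")
drive2 = os.path.join(root, "drive2")
pool = os.path.join(root, "pool")
for d in (drive1, drive2, pool):
    os.makedirs(d)

# A file lives on drive1; the pool merely holds a symlink to it.
src = os.path.join(drive1, "movie.mkv")
open(src, "w").close()
link = os.path.join(pool, "movie.mkv")
os.symlink(src, link)
print(os.path.exists(link))   # True: the link resolves

# The file moves to drive2 (e.g. a balancing run) without re-pooling.
os.rename(src, os.path.join(drive2, "movie.mkv"))
print(os.path.exists(link))   # False: the symlink is now dangling
```

This is exactly why the pool/refresh step has to be re-run (manually or on a schedule) after any change to the pooled drives.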


    As for RAID 5, it does nothing to prevent silent corruption, a.k.a. bit rot, which is almost guaranteed to occur, for example, during a rebuild of an array of your size (see http://www.zdnet.com/blog/stor…stops-working-in-2009/162). LVM is a bad idea because, if I recall correctly, the loss of a single member drive will kill the whole volume. Furthermore, neither RAID 5 nor LVM provides a way to recover accidentally deleted, corrupted, or modified files. Greyhole can add a sort of recycle bin to quickly recover accidentally deleted files, but this must be periodically emptied to recover drive space. SnapRAID can restore both accidentally deleted files and files that have been changed/corrupted by rolling back to the version stored in the most recent sync snapshot.
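For reference, the rollback is done with SnapRAID's fix command; the path below is hypothetical, and `-f` takes a filter pattern per the SnapRAID manual:

```
# Restore one accidentally deleted/modified file from the last sync
snapraid fix -f movies/somefile.mkv

# Or check everything and repair whatever fails verification
snapraid check
snapraid fix
```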

    Thanks for the heads up. It looks like VDR is supported by XBMC and may have commercial detection/skipping (added via plugin?). I'll have to look at the details around transcoding options and whether there is a web interface...it is hard to give up the ability to grab basically anything with a web browser to view the guide info, schedule a recording, or even configure settings.


    However, it seems MythTV has upped the ante. I was wondering if Plex had any sort of PVR channel...and so I just found out someone has a basic MythTV Plex channel (a nice bonus for my two Rokus).
    https://forums.plexapp.com/ind…pic/86644-mythtv-plug-in/


    Too many devices and ways of doing things makes being a couch potato a second job! :lol:

    I also have a hdhomerun, but I decided to set up a headless MythTV backend on my OMV server to create a central networked PVR. Setup and configuration of a headless MythTV backend without a GUI is a bit convoluted, but in the end I like the freedom of using either mythweb or XBMC to access/control a "whole home" PVR device on my existing OMV server. Assuming there isn't an easier PVR alternative out there, it would be very nice to figure out a mythtv-backend w/mythweb plugin.

    I am very pleased to finally see not only Greyhole ported to OMV 0.5 but also SnapRAID. I have actually been using both of these programs with OMV 0.3 for months now and I was putting off updating my OMV server until at least MySQL and Greyhole were ported. Thanks to all who helped to bring this very useful functionality to an already great product.


    However, while running through a basic setup of the new SnapRAID plugin on an OMV 0.5 VM, I noticed that there is a software check that prevents one from placing a copy of the content file on a parity disk. Is there a specific reason for the plugin to impose this restriction or can this check be removed?


    From the online SnapRAID manual:

    Quote

    The list of files is saved in the "content" files, usually stored in the data, parity or boot disks. These files contain the details of your backup, with all the checksums to verify its integrity. The "content" file is stored in multiple copies, and each one must be in a different disk, to ensure that even in case of multiple disk failures at least one copy is available.


    I currently have an OS drive, 3 pooled data drives (using Greyhole), and a single parity drive. I prefer to use my pool drives for data only and so I keep copies of my content files on my OS drive and my parity drive. I understand there is some concern with running out of space on the parity drive, but this can be mitigated by use of the exclusion list rules.
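For illustration, a snapraid.conf for that layout might look something like the fragment below (the `parity`, `content`, and `disk` directives are from the SnapRAID manual; all paths and disk names are hypothetical):

```
# /etc/snapraid.conf -- content copies on the OS and parity drives,
# data drives kept data-only (hypothetical paths)
parity  /media/parity1/snapraid.parity
content /var/snapraid/snapraid.content       # copy on the OS drive
content /media/parity1/snapraid.content      # copy on the parity drive
disk d1 /media/data1
disk d2 /media/data2
disk d3 /media/data3
```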



    On a side note,

    Quote from "viald"

    In my opinion Greyhole is not the best choice, because you don't get a virtual file system but a CIFS share, which of course is not seen by the system the same way.
    For example, if you want to add a path to your Plex configuration, I don't think you can do it using Greyhole, because Plex only lists real file systems, not virtual ones.


    FYI, in my experience Greyhole has worked just fine with a Plex, minidlna, or MythTV server on the same machine. Greyhole uses a CIFS/SMB share which in turn is tied to a shared folder (a.k.a. the landing zone) in the file system. After moving files out into the pool, Greyhole maintains that shared folder with symlinks to the actual files in the pool. As long as there are no issues with following symlinks, just point whatever program (Plex, etc.) to the shared folder that is associated with your Greyhole CIFS/SMB share. Note that this method is primarily for READ ONLY access, since writes made directly to the shared folder are not processed by Greyhole. Additionally, having my Greyhole landing zone on an SSD essentially gives me a cache drive that greatly speeds up writes to the pool (i.e. I am only limited by either the source medium or gigabit ethernet speeds), all while letting individual drives spin down when idle to save power, etc.


    A better option is probably to mount the CIFS/SMB share locally. One quirk of accessing the shared folder directly is that creating new files doesn't seem to trigger Greyhole to properly copy them into the actual pool. A simple check for orphaned files in Greyhole will trigger a cleanup, but this essentially introduces a manual or scheduled secondary process. As an alternative, the link below describes a quick way to mount a Greyhole share locally, which I plan to do while migrating my OMV install from 0.3 to 0.5.


    https://github.com/gboudreau/G…e/wiki/MountSharesLocally

    Quote

    Mounting your Samba shares locally is useful when you are using Greyhole, and want to write or in any way work with those files locally. Greyhole data should only be accessed through shares, so mounting those shares locally is an easy way to work with Greyhole data safely.

    I pulled these quotes from a post on the old forums titled: "RAID and external USB disks - RAID vanishes after reboot"


    I have used this method to allow time for an external USB drive to initialize during bootup. However, I reduced the delay time to only 5 seconds to minimize boot time. My USB drive is NOT part of a RAID array so this may change the required delay time (15 seconds seems to be on the safe side).
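Since the solution itself isn't quoted above, here is a hypothetical sketch of the kind of change involved: a short delay early in the boot sequence so the USB disk has time to settle before filesystems are mounted. The file and placement are assumptions, not the original poster's exact fix:

```
# e.g. near the top of the relevant init script, or in /etc/rc.local
# before anything depends on the USB disk:
sleep 5   # 15 seconds is reportedly on the safe side for RAID members
```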


    ORIGINAL DESCRIPTION OF PROBLEM
    Posted by franzxohk


    POSSIBLE SOLUTION
    Posted by stephanebocquet

    Quote

    But I've run into a bigger issue over here, its that I roughly got 4,8TB of Movies/Series and whatnot on my NTFS drives (5x2TB disks, diskpooled with Stablebit Drivepool)


    I am not familiar with Stablebit Drivepool, but my understanding is that it does not use a RAID-style configuration where data is either striped or mirrored across all drives. Instead it just pieces multiple drives together using normal NTFS partitions.


    Assuming your drives are essentially being used in a JBOD fashion, I had a situation similar to yours. I was using two NTFS-formatted drives (one 1TB and one 2TB) to store my media library on a Windows 7 machine. I had more than 2TB of total data, so it was not possible to temporarily move everything to one drive and reformat. What I did at first was simply mount my NTFS drives inside OMV. In this way, it is possible to transplant drives with existing data to an OMV server. You will, of course, not be able to use mdadm for RAID, but everything else should be functional. The biggest downside is that NTFS is not a native file system and, as a result, you will suffer high CPU usage during disk I/O.
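If you go this route, mounting the existing NTFS disks is a one-line fstab entry per drive via the ntfs-3g FUSE driver (the FUSE layer is what causes the CPU hit). The label and mount point below are hypothetical:

```
# /etc/fstab -- mount an existing NTFS data disk read/write under OMV
/dev/disk/by-label/Media1  /media/media1  ntfs-3g  defaults,big_writes  0  0
```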


    I have since gotten a second 2TB drive (used as a parity drive with SnapRAID) and moved enough data around that I could reformat my drives one by one. Using ext4 has significantly improved performance compared to NTFS. If I understand your WHS2011 setup correctly, you may want to consider using Greyhole as a similar way to pool your drives; there is a plugin available for OMV which greatly simplifies the process. I searched online to find out how to scan drives for pre-existing files (Greyhole considers these "orphaned" files). Using SnapRAID alongside Greyhole allows me to easily add/remove drives of arbitrary size to my drive pool and also gives me RAID-like fault tolerance to protect against either data or a whole drive going bad. All of this, and my data lives as regular files on a single partition for each physical disk, which allows the drives to be used outside of the server if one were to need to recover files, re-install OMV, transplant the drives with data again, etc.


    It sounds like you may be able to just mount your drives as they are. My suggestion would be to move the data off of at least one drive so you can reformat it as ext4/ext3, migrate your data over, and take it from there...HTH