Posts by ribbles

    This is my next project, thank you for the guide.
    Now I am thinking about pointing that data directory to a greyhole share through the local samba mount. Has anyone done this? I don't see any reason why it wouldn't work, but just because it works doesn't mean it's a good idea :)

    Yes, from my understanding .3 is the "stable" version and .4 is the "beta" type version; there is a thread somewhere explaining something to that effect. I just had some problems I was hoping were fixed in .4, so I went ahead and upgraded.


    The landing zone should be as big as the largest amount of data you think you could ever need to import into the pool at once. Since everything has to go there first, if you are in the habit of copying dozens of large video files around at once, then it needs to be big enough to handle that.


    People seem to use different strategies for the landing zone; I think the more popular one is to create a separate partition on one drive and use that exclusively for the landing zone.


    I didn't do that; I just stuck it in the same partition as one of the data drives. I don't know if this will cause performance issues or some such, but that isn't something I am terribly worried about anyway. I figured I can always move it later if I have problems.


    Now the only issue with using the SSD as the LZ is that because you have OMV installed on it, OMV will not allow you to create a share on that drive. That means it will not allow you to create the samba mount that points to it either, which means you would have to do ALL of that manually by editing the conf files, and that could be a bit of an administrative headache. In fact, I suggest you do not do that; it just makes it too hard to keep track of ACLs and whatnot. I'd strongly recommend keeping the LZ shares within the OMV framework. Well, unless you create a separate partition on the SSD just for the LZ; I suppose that could work, if it could be large enough for your purposes. I'm not sure how large the average landing zone is; I reserved 30GB on my data drive for it.
    (Actually, I'm not even sure you can partition the system drive; does it force you to use the entire thing? I cannot remember.)


    What I did is create a /shares directory on my 1st data drive and use that for the LZ, then on the same drive create a /gh directory and use that for the pool. Then, after I set up mount_shares_locally, I rsynced all the data into the local samba mount. After it copied over and I verified everything was working (I did md5 checks, pool vs. original), I went ahead and formatted the other data drives, added them to the pool, and set copies[share] to however many I needed. I never even ran a balance, as most of my stuff did not require dupes, and the stuff that did just duped by itself anyway.
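
    For anyone wanting to do the same thing, this is roughly what my copy-and-verify step looked like. The paths and share names here are just placeholders, adjust them for your own setup:

    Code
    # copy the originals into the locally mounted samba share (everything goes through the LZ first)
    rsync -rtv /mnt/old-disk/music/ /mnt/samba/music/

    # afterwards, hash the originals and what ended up in the pool, then compare
    cd /mnt/old-disk/music && find . -type f -exec md5sum {} + | sort -k2 > /tmp/src.md5
    cd /media/<uuid-of-data-drive>/gh/music && find . -type f -exec md5sum {} + | sort -k2 > /tmp/pool.md5
    diff /tmp/src.md5 /tmp/pool.md5 && echo "all files match"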

    Quote from "moosefist"

    Yeah, Greyhole is exactly what I want, but from the CLI it's a little daunting. I wish the GUI plugin would be updated. Highly considering rolling back to .3.


    Honestly, it's not bad at all; the command line is mostly used just for checking the status of various things (--stats, --iostat, --logs, --status, --view-queue). The seemingly "hard" part would probably be editing the config to add your drives and the number of copies per samba share, but all you really do is copy from the examples. You literally only need to add those 2 things. The config is very well documented and you really can't screw much up; the default values were fine for everything in my setup.
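
    To give an idea, the two things you add look roughly like this. Directive names and defaults can differ a bit between versions, so go by the comments in the sample config that ships with your version; the share names and paths here are just examples:

    Code
    # one line per storage pool drive
    storage_pool_directory = /media/<uuid-of-drive-1>/gh, min_free: 10gb
    storage_pool_directory = /media/<uuid-of-drive-2>/gh, min_free: 10gb

    # how many copies greyhole should keep, per samba share
    num_copies[music] = 1
    num_copies[photos] = 2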


    The thing I had the most trouble with was setting up the mount_shares_locally script so I could use the greyhole shares with the transmission BT server, and that needs to be done on the CLI anyway, since there is no OMV plugin for it.

    Quote from "moosefist"

    Is greyhole supported on fedaykin? I am setting up a new NAS tomorrow and I think greyhole is perfect for what I need. I am starting with a 3TB drive, and very soon after migrating a bunch of data I will be adding a 2TB drive to the pool. All the data is currently scattered between 2x500GB drives and the 2TB drive. Basically I plan on adding storage as needed or when I can afford it, and I think greyhole makes that easier than RAID arrays, or am I mistaken? Thanks.


    The plugin does not work with fedaykin, but you can just manage it from the command line.... Once you edit the config to set it up and make sure it's working, there isn't much to do anyway unless you are adding or removing a drive.
    I like greyhole because it lets me maximize my disk space by only duping the data I need duped, so there's no wasting drive space on parity or on mirroring stuff I don't care about. And I can just take any greyhole drive, put it in another computer, and get to the data, which you cannot do with a RAID. Also, RAID is for availability and fault tolerance, something I don't really care about.

    Quote from "crimsonblaed"

    +1


    This is how I stream my content to my TV / Media Centre, works a charm.


    Do you stream over WiFi or cable? Can it handle 720p without any stuttering? Which version of the Pi?

    OK yes, he contacted me and gave me a clue, saying greyhole uses lsof to see if a file is open by other processes.
    I then checked and saw that lsof is not installed on my system?!?!
    So OpenMediaVault doesn't install lsof by default, and it is not a dependency when greyhole is installed.
    I have installed it now and will see if this still happens, but it sounds like I found the problem :D:D:D


    He says he is going to add lsof as a dependency for greyhole, so this should not happen to anyone else.
    He also said he thinks that deleting the original file on a failure like that is a bug and is looking into fixing it. Am I a hero now? lol
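
    If you want to do the same check by hand, pointing lsof at a file lists any process that currently has it open; no output means nothing does. The path here is just an example:

    Code
    # list processes holding the file open (empty output = nothing has it open)
    lsof /mnt/samba/music/somefile.m4a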

    I think greyhole started copying the file into the pool before it was done being written to the share. Please look at the relevant parts of the log below. Also note the "HNowEa" on the end of the file name; I think that is rsync's temporary suffix, so rsync did not get to rename it to the real filename, meaning the transfer was never completed. Also, the original filename did not start with ".". Why would greyhole start copying before the file is finished being written?


    Dec 11 23:43:47 6 write: Now working on task ID 17565: write music/.10.23.2012-cf.m4a.HNowEa
    Dec 11 23:43:47 6 write: File created: music/.10.23.2012-cf.m4a.HNowEa - 66.5MB
    Dec 11 23:43:47 7 write: Loading metafiles for music/.10.23.2012-cf.m4a.HNowEa ...
    Dec 11 23:43:47 7 write: Got 0 metadata files.
    Dec 11 23:43:47 7 write: 0 metafiles loaded.
    Dec 11 23:43:47 7 write: Drives with available space: /media/2bff11c8-c52d-4c0a-898a-432ea8540a70/gh (654GB avail)
    Dec 11 23:43:47 7 write: Saving 1 metadata files for music/.10.23.2012-cf.m4a.HNowEa
    Dec 11 23:43:47 7 write: Saving metadata in /media/2bff11c8-c52d-4c0a-898a-432ea8540a70/gh/.gh_metastore/music/.10.23.2012-cf.m4a.HNowEa
    Dec 11 23:43:47 7 write: Copying 68.3MB file to /media/2bff11c8-c52d-4c0a-898a-432ea8540a70/gh/music/.10.23.2012-cf.m4a.HNowEa
    Dec 11 23:43:47 7 write: (using rename)
    Dec 11 23:43:47 4 write: Failed file copy. Will mark this metadata file 'Gone'.
    Dec 11 23:43:47 7 write: Saving 1 metadata files for music/.10.23.2012-cf.m4a.HNowEa
    Dec 11 23:43:47 7 write: Saving metadata in /media/2bff11c8-c52d-4c0a-898a-432ea8540a70/gh/.gh_metastore/music/.10.23.2012-cf.m4a.HNowEa
    Dec 11 23:43:47 7 sleep: Nothing to do... Sleeping.


    Since the LZ and pool are on the same device, greyhole renames the file instead of copying it.


    Code
    if ($source_dev === $target_dev && $source_dev !== FALSE) {
        gh_log(DEBUG, " (using rename)");
        gh_rename($source_file, $temp_path);



    And since it is doing this in the middle of the rsync, $source_size is NOT equal to gh_filesize() of what landed, so it deletes the file!

    Code
    $it_worked = file_exists($temp_path) && gh_filesize($temp_path) == $source_size;

    if ($it_worked) {
        ....
    } else {
        gh_log(WARN, " Failed file copy. Will mark this metadata file 'Gone'.");
        @unlink($temp_path);


    I know the cause now; the question is WHY greyhole processes the file in the middle of it being written to the share?!?!

    Quote from "sbocquet"

    If I understand well, you use rsync to copy files to an SMB share? Hmmm, have you tried with a "cp" command, because maybe it's a bug between rsync and samba/greyhole.


    Personally, my copies are between a Windows client and the server.


    Well, I have also lost 3 files that I downloaded from BitTorrent into a GH share when I moved them to another GH share, so that wasn't rsync. Those files were 100% complete, checked, and not corrupt in any way. I was just moving them with the BT client (Move Torrent Data) and GH made them disappear; I got the same error (copy failed) as with the files from rsync.


    Also, I should be able to rsync without losing files like that; it would be silly if that were acceptable.

    I don't think it's SMB. I rsynced all 73GB of files over to a new non-GH SMB share, then did an md5 diff for every single file, and only 1 (different) file came back as not a match, but it was the exact same size, so that just means some bit somewhere got flipped or something. (Which is odd, I thought rsync recopied a file if the checksum wasn't the same?) So yeah, I think this is still a greyhole issue.

    Quote from "sbocquet"

    Personally, my LZ is on a different disk than the pool disks, but I don't think that could be a problem.


    You can put Greyhole in debug mode to see what happens in detail (very verbose!)


    Yes, debug was already on (it seems to be the default).


    Quote


    Have you tried to copy those files to an SMB share outside of the greyhole pool, to see whether it's greyhole-related or samba-related?


    Trying this now! Thanks for the suggestion, however I have to wonder what kind of Samba issue would cause this?


    edit: actually I used rsync, so the copy should have been checksummed automatically, no?

    Quote from "sbocquet"

    Is it deleting your file in the landing zone and/or the copies?


    It is deleting the file in the landing zone.


    Dec 11 23:43:47 7 write: Copying 68.3MB file to /media/2bff11c8-c52d-4c0a-898a-432ea8540a70/gh/music/Blahblahblah.m4a.HNowEa
    Dec 11 23:43:47 7 write: (using rename)
    Dec 11 23:43:47 4 write: Failed file copy. Will mark this metadata file 'Gone'.


    Also note the file size at the top is wrong; it is only half the actual file size. It's as if it is trying to move the file before it is completely written or something.


    The LZ and pool are on the same drive, so it tries to mv (or copy) the file into the pool, and this is the part that fails:
    exec("mv ".escapeshellarg($filename)." ".escapeshellarg($target_filename)." 2>/dev/null", $output, $result);
    or
    exec(get_copy_cmd($source_file, $temp_path));


    Though I can't tell if it thinks it's the same device or not.

    Quote from "sbocquet"

    Hi,


    Personally, I have been using Greyhole for months with a SUN D1000 12-disk array and never had a single problem... Sounds strange. Nothing in the greyhole logs?


    No, it just says it is copying the file into the greyhole folder, then it says "Failed file copy. Will mark this metadata file 'Gone'."
    It happened again last night while I was rsyncing 73GB of music onto greyhole; 6 files gave this error again and got deleted. You think maybe the drive is bad? But I have no SMART errors or anything, and I ran drive tests, etc. I don't know why I am always having these weird problems, ugh. Maybe I'll check the SATA cable. But why only 6 small files? You'd think if something was wrong I'd have more problems than that.

    Quote from "drap"


    Hi ribbles, would you be able to suggest some alternatives to greyhole? Thanks!


    I just updated to the latest version, 0.9.23, today and also contacted the greyhole dev, so I will wait and see if it stops eating my files or if he has any insight into my problem. I will PM you about alternatives, I do not want to take the thread off topic 8-)

    Quote from "Kega"

    I'm sorry, but I think my description was not precise enough. What you are describing is if SABnzbd is running on Windows and I want to move a file to an SMB share on OMV, correct? What I should have specified was that SABnzbd is installed on OMV and I want a file to be moved from the OMV disk to another disk connected to the same machine. Does that make sense? And sorry if I misunderstood you.



    You can mount your samba shares locally and just get to them that way; check out a script called mount_shares_locally.
    What it does is read your smb.conf and mount all the shares it finds in there under /mnt/samba/.


    For example, your samba share /music that points to /media/15ed7e72-8449-43e7-9028-xxxxxxxxxxxx/music will be mounted under /mnt/samba/music.


    I have this set up to run on every reboot and it's pretty nice. I can help if you run into any problems.
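
    If you just want to see the idea behind it, this is roughly what the script boils down to (not the actual script, and the credentials file path is only an example); it needs cifs-utils installed:

    Code
    # enumerate the share names from smb.conf and mount each one under /mnt/samba/<name>
    for share in $(testparm -s 2>/dev/null | grep '^\[' | tr -d '[]' | grep -vE '^(global|homes|printers)$'); do
        mkdir -p "/mnt/samba/$share"
        mount -t cifs "//localhost/$share" "/mnt/samba/$share" -o credentials=/root/.smbcredentials
    done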

    I am using 0.4.6 and am managing greyhole from the CLI just fine. Once you set up your shares there isn't much to do anyway (maybe an occasional --fsck or --balance).
    I have found that the samba GUI inserting duplicate wide links / unix extensions params (from Extra Options) is fine, as the correct ones come at the bottom of the config and those are the ones that get used.
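
    For reference, these are the two global options in question; if memory serves, greyhole wants them set like this so its symlinks into the pool work (check the greyhole docs for your version). When a parameter appears twice in a section, Samba uses the last occurrence, which is why the duplicates are harmless:

    Code
    [global]
        unix extensions = no
        wide links = yes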


    If the author doesn't answer questions about the plugin for months, to me that means it's unsupported and not something you want to be using anyway.


    The only major problem I had was setting up the mount_shares_locally script for local usage of the shares... just make sure you mount them NOT as root but as a user in the OMV users group, otherwise other programs (apache, transmission, etc.) won't be able to use them.
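
    In practice that means passing uid/gid options on the cifs mount. The user name and credentials path here are only examples; use whatever user your other programs run as:

    Code
    # mount the share as a regular user in the users group instead of root
    mount -t cifs //localhost/music /mnt/samba/music \
        -o credentials=/root/.smbcredentials,uid=transmission,gid=users,file_mode=0664,dir_mode=0775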


    I might make a blog and write about the specifics, if not to help others then just to remind myself in case I need to do it again.

    I am also getting 2 drives with the same UUID after creating a share. Was there any resolution to this?


    I tried changing it back to the original UUID in fstab but it still won't mount.
    blkid still shows the original UUID.


    I see a thread where people adding a USB HD get the same problem: http://forums.openmediavault.org/viewtopic.php?f=10&t=1027


    edit2: oh, I just noticed they have the same LABEL. I am going to try changing one and see what happens
    edit3: nope, that didn't fix anything
    edit4: fixed! I changed the UUID back to the proper one in fstab; mkdir /media/<oldUUID>; mount -a
    Now the only question is why fstab changed the UUID to be the same as the other drive?
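
    In commands, edit4 was basically this (use whatever editor you like; the UUID placeholder is the drive's real UUID as reported by blkid):

    Code
    blkid                    # confirm the UUID each drive actually has
    nano /etc/fstab          # put the correct UUID back on that drive's line
    mkdir /media/<oldUUID>   # recreate the mount point OMV expects
    mount -a                 # remount everything from fstab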


    edit5: oh, the webgui is creating duplicate entries! I unmounted one drive, then tried to remount it, and now my fstab is back to having the same UUID for 2 different drives. Why does this happen!?!?
    edit6: somehow the UUIDs in config.xml got mixed up; the UUID for one drive was pointing to the /dev node of another drive. I just deleted everything in config.xml's <fstab> section and in /etc/fstab, remounted everything, and now it is sorted.


    All I know is I have it correct now, and I am not going to touch anything :D


    edit7: there is definitely something screwy with mounting NTFS drives; after every reboot I have to manually mount them, as last time my EXT4 data drive was pointing to the mount point of an NTFS drive.

    And it works! Very strange! All my data is intact as well. Maybe the 1st install was screwy for some reason? Ah well, you got me out of a jam, thanks!



    ^^ Was not using RAID, just JBOD. It worked fine under WHS, that's why I found it odd that it would not boot in OMV.